Texture Arrays in Unity

Recently I messed around with Texture Arrays as an alternative to Texture Atlases.

I’d heard of this feature before but never really touched it, and I still see a lot of people doing texture atlasing. So here’s my two cents at making the internet have more mentions of texture arrays!

Why combine textures at all?
Because one model with one material is cheaper to draw than many models with many different materials (at the very least, we must split models up per material).

So for every material we have a model, and for every model we have to do a draw call. Then it all repeats for each shadow map / cascade, and so on. This greatly amplifies the number of draw calls per mesh, and large numbers of draw calls make us slow: the CPU spends a lot of time communicating with the GPU, which we don’t want.

Ideally we just want one mesh to draw. At some point we do have to cut it up into chunks for level of detail, frustum and occlusion culling to reduce GPU load, but then we are adding draw calls to improve performance, not to lose it!

The problem with Atlases
When you create a texture atlas, you pack several textures into one, so that a single material and a single mesh can be created.

Without proper tooling, an artist may have to manually combine meshes, combine textures and move texture coordinates into the right part of the atlas. It also limits texture coordinates to the 0 to 1 range; coordinates beyond that range, normally used for tiling, would now sample different textures in the atlas.

Then there is a problem with mip mapping. If we naively mip map the atlas, we can get a lot of bleeding between the individual textures. If all our textures are the same resolution, and a tool generates mip maps before atlasing, we can mitigate this issue somewhat.

Then we still have the problem of (tri)linear interpolation bleeding across borders. As soon as texture coordinates touch the edge of a texture in the atlas, the pixel starts being interpolated with the adjacent texture’s pixels.

We can again mitigate this by moving our texture coordinates one pixel away from the texture borders, but as the mip level increases the resolution decreases, so to do this without issues we must consider the highest mip level and leave a border as big as the texture itself. That wastes far too much space.
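
To put a number on that: the gutter needed at mip 0 doubles with every mip level you want to keep clean. A quick sketch of the arithmetic (plain C#, hypothetical helper name):

```csharp
// The gutter (in mip-0 pixels) needed to keep a one-pixel border intact
// at a given mip level doubles with every level: 1px at mip 0, 2px at
// mip 1, 4px at mip 2, and so on.
static int GutterForMip(int mipLevel)
{
    return 1 << mipLevel;
}

// A 1024x1024 tile has log2(1024) = 10 reduced mip levels, so protecting
// the last one costs GutterForMip(10) = 1024 pixels of padding:
// a border as big as the tile itself.
```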

So, mip mapping and texture atlasing are not exactly good friends.

Introducing Texture Arrays
Texture arrays are just that: a list of textures. This allows us to combine, for example, the color maps of a bunch of materials into one array, so that one material using the array can replace many separate materials. The limitation is that all textures must have the same size and internal format.
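
Before the full tooling below, the core idea fits in a few lines. A minimal sketch (hypothetical helper, no validation, assuming all sources share size, format and mip count):

```csharp
using UnityEngine;

public static class TextureArrayBuilder
{
    // Combine same-size, same-format source textures into one array.
    public static Texture2DArray Combine(Texture2D[] sources)
    {
        Texture2D first = sources[0];
        var array = new Texture2DArray(first.width, first.height,
                                       sources.Length, first.format, true);
        for (int i = 0; i < sources.Length; ++i)
            Graphics.CopyTexture(sources[i], 0, array, i); // copy layer i, all mips
        return array;
    }
}
```

A shader then samples it with a float3 coordinate, where the z component selects the layer.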

It brings back the ability to use mip mapping, and it keeps the other benefits of atlasing (fewer materials leading to fewer draw calls).

The good news is that all an artist needs to do is assign a per-vertex attribute that identifies which texture to use (if you have a vertex color multiplier, consider sacrificing its alpha channel; add a w component to your normal; whatever works).

The bad news is that we need some tooling to make this work at all: there is no real manual way for an artist to create a texture array, and existing shaders will not support them.

There is a risk of pushing too many textures into one array: it becomes hard to debug memory use when textures of unused assets are mixed in with data we actually need to load. Matching the vertex attributes in use against the texture array size could help find unused entries.

I did some of this in Unity while experimenting with how viable a solution this is. The code is not really polished and I didn’t use editor scripts (because with an ExecuteInEditMode component I could avoid doing UI), but I’ll share it anyway!

This script takes a set of materials and writes a given set of their texture properties to a folder as texture arrays (plus a material using those texture arrays).

using System;
using System.Linq;
using System.IO;
using UnityEngine;
using UnityEditor;

/* Match material input names with their respective texture (array) settings. */
[Serializable]
struct PropertyCombineSettings
{
    public string name; // material texture2D property to put in the array
    public int width; // assume all materials have textures of this resolution
    public int height;
    public Color fallback; // if the property isn't used, use this color ((0, 0.5, 0, 0.5) for normals)
    public TextureFormat format; // assume all materials have textures of this format
    public bool linear; // are the inputs linear? (true for normal maps)
}

[ExecuteInEditMode]
public class MaterialArray : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] Material[] inputs; // List of materials to push into texture arrays.
    [SerializeField] string outputPath; // Save created texture arrays in this folder.
    [SerializeField] PropertyCombineSettings[] properties; // Set of material inputs to process (and how).

    void Update()
    {
        // Run once in Update() and then disable again, so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // Ensure we have a folder to write to
        string absPath = Path.GetFullPath(outputPath);
        if (!Directory.Exists(absPath))
        {
            Debug.Log(String.Format("Path not found {0}", absPath));
            return;
        }

        // Combine one property at a time
        Texture2DArray[] results = new Texture2DArray[properties.Length];
        for (int i = 0; i < properties.Length; ++i)
        {
            // Delete existing texture arrays from disk as we can not alter them
            PropertyCombineSettings property = properties[i];
            string dst = outputPath + "/" + property.name + ".asset";
            if (File.Exists(dst))
                AssetDatabase.DeleteAsset(dst);

            // Create a new texture array (of the right resolution and format) to write to
            Texture2DArray output = new Texture2DArray(property.width, property.height, inputs.Length, property.format, true, property.linear);
            results[i] = output;

            Texture2D fallback = null;
            int layerIndex = 0;
            // For each material process the property for this array
            foreach (Material input in inputs)
            {
                Texture2D layer = input.GetTexture(property.name) as Texture2D;

                // If the material does not have a texture for this slot, fill the array with a flat color
                if (layer == null)
                {
                    Debug.Log(String.Format("Skipping empty parameter {0} for material {1}", property.name, input));
                    if (fallback == null)
                    {
                        // Generate a fallback texture with a flat color of the right format and size
                        TextureFormat fmt = property.format;
                        if (fmt == TextureFormat.DXT1) // We can't write to compressed formats, use the uncompressed version and then compress
                            fmt = TextureFormat.RGB24;
                        else if (fmt == TextureFormat.DXT5)
                            fmt = TextureFormat.RGBA32;
                        fallback = new Texture2D(property.width, property.height, fmt, true, property.linear);
                        fallback.SetPixels(Enumerable.Repeat(property.fallback, property.width * property.height).ToArray());
                        fallback.Apply(true); // upload the pixels (and mip chain) so CopyTexture has data to copy
                        if (fmt != property.format) // Compress to the final format if necessary
                            EditorUtility.CompressTexture(fallback, property.format, TextureCompressionQuality.Fast);
                    }
                    layer = fallback;
                }

                // Validate input data
                if (layer.format != property.format)
                {
                    Debug.LogError(String.Format("Format mismatch on {0} / {1}. Is {2}, must be {3}.", input, property.name, layer.format, property.format));
                    layerIndex += 1;
                    continue;
                }

                if (layer.width != property.width || layer.height != property.height)
                {
                    Debug.LogError(String.Format("Resolution mismatch on {0} / {1}", input, property.name));
                    layerIndex += 1;
                    continue;
                }

                // Copy the input texture into the array
                Graphics.CopyTexture(layer, 0, output, layerIndex);
                layerIndex += 1;
            }
            AssetDatabase.CreateAsset(output, dst);
        }

        // Create or get a material and assign the texture arrays.
        // Unity keeps losing connections when re-saving the texture arrays, so this is my workaround to avoid assigning them manually.
        string mtlDst = outputPath + ".mat";
        Material mtl = AssetDatabase.LoadAssetAtPath<Material>(mtlDst);
        bool create = false;
        if (mtl == null)
        {
            create = true;
            mtl = new Material(Shader.Find("Custom/NewShader"));
        }

        for (int i = 0; i < properties.Length; ++i)
        {
            PropertyCombineSettings property = properties[i];
            mtl.SetTexture(property.name, results[i]);
        }

        if (create)
            AssetDatabase.CreateAsset(mtl, mtlDst);
    }
}


This is a surface shader that mimics Unity’s Standard shader for a large part, but uses texture arrays! It reads the array index from uv2.y, and it assumes uv2.x contains the actual uv2 as two truncated 16-bit floats packed together.

Shader "Custom/NewShader" {
	Properties {
		_Color("Color", Color) = (1,1,1,1)
		_MainTex ("Albedo (RGB)", 2DArray) = "" {}

		// _Glossiness("Smoothness", Range(0.0, 1.0)) = 0.5
		// _GlossMapScale("Smoothness Scale", Range(0.0, 1.0)) = 1.0
		// [Enum(Metallic Alpha,0,Albedo Alpha,1)] _SmoothnessTextureChannel("Smoothness texture channel", Float) = 0
		_MetallicGlossMap("Metallic", 2DArray) = "" {}

		_BumpScale("Scale", Float) = 1.0
		[Normal] _BumpMap("Normal Map", 2DArray) = "" {}

		_Parallax("Height Scale", Range(0.005, 0.08)) = 0.02
		_ParallaxMap("Height Map", 2DArray) = "" {}

		_OcclusionStrength("Strength", Range(0.0, 1.0)) = 1.0
		_OcclusionMap("Occlusion", 2DArray) = "" {}
		// _EmissionColor("Color", Color) = (0,0,0)
		// _EmissionMap("Emission", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard fullforwardshadows

		// Texture arrays and the integer instructions below need shader model 3.5+
		#pragma target 3.5

		UNITY_DECLARE_TEX2DARRAY(_MainTex);
		UNITY_DECLARE_TEX2DARRAY(_MetallicGlossMap);
		UNITY_DECLARE_TEX2DARRAY(_BumpMap);
		UNITY_DECLARE_TEX2DARRAY(_ParallaxMap);
		UNITY_DECLARE_TEX2DARRAY(_OcclusionMap);

		fixed4 _Color;
		half _Metallic;
		half _BumpScale;
		half _Parallax;
		half _OcclusionStrength;

		struct Input {
			float2 uv_MainTex;
			float2 uv2_BumpMap;
			float3 viewDir;
		};

		// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
		// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
		// #pragma instancing_options assumeuniformscaling
		// put more per-instance properties here

		void surf (Input IN, inout SurfaceOutputStandard o)
		{
			// uv2.x carries the top 16 bits of both original uv2 components;
			// shift each half back into the top of a 32-bit float.
			uint xy = asuint(IN.uv2_BumpMap.x);
			uint mask = ((1 << 16) - 1);
			float2 uv2 = float2(asfloat((xy & mask) << 16),
								asfloat(xy & ~mask));
			float textureIndex = IN.uv2_BumpMap.y;

			float2 offsetMainTex = ParallaxOffset(UNITY_SAMPLE_TEX2DARRAY(_ParallaxMap, float3(IN.uv_MainTex, textureIndex)).r, _Parallax, IN.viewDir);
			float3 uv = float3(IN.uv_MainTex + offsetMainTex, textureIndex);
			fixed4 c = UNITY_SAMPLE_TEX2DARRAY(_MainTex, uv) * _Color;
			o.Albedo = c.rgb;

			fixed4 metal_smooth = UNITY_SAMPLE_TEX2DARRAY(_MetallicGlossMap, uv);
			o.Metallic = metal_smooth.g;
			o.Smoothness = metal_smooth.a;

			o.Normal = UnpackScaleNormal(UNITY_SAMPLE_TEX2DARRAY(_BumpMap, uv), _BumpScale);
			o.Occlusion = lerp(1.0, UNITY_SAMPLE_TEX2DARRAY(_OcclusionMap, uv).a, _OcclusionStrength);

			o.Alpha = 1.0;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

The last script I wrote takes a mesh from an imported model and writes it to a new, separate mesh asset with an index set in uv2.y. It also packs the original uv2 into uv2.x.

using UnityEngine;
using UnityEditor;
using System;
using System.IO;

[Serializable]
struct MeshArray
{
    public Mesh[] data;
}

[ExecuteInEditMode]
public class ArrayIndexSetter : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] MeshArray[] meshesPerIndex; // The primary index specifies the material; the list of meshes all get that index.

    void Update()
    {
        // Run once in Update() and then disable again, so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // For each set of meshes assume the index is what we want to specify as the material index.
        for (int index = 0; index < meshesPerIndex.Length; ++index)
        {
            // Alter each mesh to contain the index
            foreach (Mesh sharedMesh in meshesPerIndex[index].data)
            {
                // TODO: try to update a previously generated version instead of instantiating.

                // Duplicate the mesh (without doing this we can't use
                // CreateAsset, as it will try to update the existing asset which,
                // for example, may be part of an FBX file).
                string assetPath = AssetDatabase.GetAssetPath(sharedMesh);
                Mesh mesh = AssetDatabase.LoadAssetAtPath<Mesh>(assetPath);
                mesh = Instantiate(mesh);

                // Query or allocate a UV2 attribute to store the index in
                Vector2[] uv2 = mesh.uv2;
                if (uv2 == null || uv2.Length != mesh.vertexCount)
                    uv2 = new Vector2[mesh.vertexCount];
                for (int i = 0; i < uv2.Length; ++i)
                {
                    // truncate the existing data to the top 16 bits of each float
                    // (sign, exponent, high mantissa) and pack both halves into
                    // the X component
                    byte[] x = BitConverter.GetBytes(uv2[i].x);
                    byte[] y = BitConverter.GetBytes(uv2[i].y);
                    byte[] data = { x[2], x[3], y[2], y[3] };
                    uv2[i].x = BitConverter.ToSingle(data, 0);
                    // add our index to the end
                    uv2[i].y = index;
                }

                // update and serialize
                mesh.uv2 = uv2;
                string dst = assetPath + "_indexed.asset";
                if (File.Exists(dst))
                    AssetDatabase.DeleteAsset(dst);
                AssetDatabase.CreateAsset(mesh, dst);
            }
        }
    }
}
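
Bit packing like this is easy to get wrong, so here is a small round-trip sanity check I would suggest (standalone C#, hypothetical names): it packs the top 16 bits of two floats into one float and unpacks them the way the shader does.

```csharp
using System;

static class PackingCheck
{
    // Pack the top 16 bits of two floats (sign, exponent, high mantissa)
    // into the four bytes of a single float.
    static float Pack(float x, float y)
    {
        byte[] bx = BitConverter.GetBytes(x);
        byte[] by = BitConverter.GetBytes(y);
        return BitConverter.ToSingle(new byte[] { bx[2], bx[3], by[2], by[3] }, 0);
    }

    // Mirror of the shader unpack: shift each 16-bit half back into
    // the top of a 32-bit float.
    static (float, float) Unpack(float packed)
    {
        uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(packed), 0);
        float x = BitConverter.ToSingle(BitConverter.GetBytes((bits & 0xffffu) << 16), 0);
        float y = BitConverter.ToSingle(BitConverter.GetBytes(bits & 0xffff0000u), 0);
        return (x, y);
    }

    // The unpacked values should match the inputs to within the precision
    // the 8 remaining mantissa bits allow (roughly 0.4% relative error).
}
```

One caveat: if the packed bit pattern happens to form a NaN, there is no guarantee every vertex pipeline passes it through unchanged, so treat this scheme as a pragmatic hack rather than a robust encoding.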


The result can render these four meshes with four different looks in a single draw call.
The meshes are generated and set to static so Unity can combine them.
In this screenshot you see two draw calls, as there is the draw-and-shade call and the blit-to-screen call.
Enabling shadows would add a shadow-cast and a collect call on top, but additional meshes would not increase this count.

PS: The textures I used come from https://freepbr.com/.

6 thoughts on “Texture Arrays in Unity”

  1. I get errors like this when trying to run MaterialArray:

    Material doesn’t have a texture property ‘albedo_0_0’
    MaterialArray:Update() (at Assets/Game/Scripts/MaterialArray.cs:74)

    What do I need to enter as property name? A usage example would be very useful!

  2. Sounds like you’re trying to get a texture “albedo_0_0” from a shader that doesn’t have such a uniform.

  3. Currently, I’m implementing texture arrays for my indie project in Unity 2018.3 HDRP, and this is very helpful to know how to get uv+w for indexing. Many thanks!
    But I’ve got 2 errors in last script you posted (other scripts were passed over by different approaches):
    – ArrayIndexSetter.cs(32,41): error CS1579: foreach statement cannot operate on variables of type ‘MeshArray’ because
    ‘MeshArray’ does not contain a public instance definition for ‘GetEnumerator’

    ArrayIndexSetter.cs(40,59): error CS0118: ‘mesh’ is a variable but is used like a type

  4. I don’t have any Unity installations at hand, but the (32,41) error seems logical: I iterate over “meshesPerIndex[index]” but it should be “meshesPerIndex[index].data” (the data property).
    The other error is a typo: “LoadAssetAtPath<mesh>” should be “LoadAssetAtPath<Mesh>” (capital Mesh).

  5. I’m trying to understand how normal maps work with this shader. It seems like you are using them – would you mind explaining?

  6. I’m not sure what your question is there, as that sounds a little broad. To just get normal maps to work I lean into a few Unity features.
    First in the Properties section I prefix the texture input with [Normal]. This will show a warning in the UI if the given texture is not imported as a normal map. The texture must be set to normal map for the other features to work.

    Then pretty much everything else happens in this one-liner that sets the tangent-space normal for the pixel:
    o.Normal = UnpackScaleNormal(UNITY_SAMPLE_TEX2DARRAY(_BumpMap, uv), _BumpScale);

    UNITY_SAMPLE_TEX2DARRAY samples the _BumpMap texture array at the given uv, where uv.z is the array index and uv.xy are the usual uv coordinates.

    UnpackScaleNormal converts the texture sample to a usable normal. Considering the texture data is probably DXT5nm, it will do something like this:
    float3 normal;
    normal.xy = (textureData.wy * 2.0 - 1.0) * _BumpScale;
    normal.z = sqrt(saturate(1.0 - dot(normal.xy, normal.xy)));

    The o.Normal is simply a tangent-space normal, so pretty much directly the texture sample.
    Because this is a surface shader, Unity itself will convert the resulting normal to the right space for lighting.

    All that is also mentioned in the docs.

    Note: all this breaks down once you try to read the world-space normal in the surface shader as well. There are various Unity forum posts on how to do this with “float3 worldNormal; INTERNAL_DATA” in the Input struct. I haven’t tried it, so I recommend googling that if you need to combine a world normal input with a bump map output (e.g. when implementing tri-planar texturing).
