Texture Arrays in Unity

Recently I messed around with Texture Arrays as an alternative to Texture Atlases.

I’ve heard of this feature before but never really touched it, and I still see a lot of people doing texture atlassing. So here’s my two cents toward making the internet have more mentions of texture arrays!

Why combine textures at all?
Because one model with one material is cheaper to draw than many models with many different materials (at the very least, models must be split up per material).

So for every material we have a model, and for every model we have to do a draw call. That repeats for each shadow map / cascade, and so on, which amplifies the draw calls per mesh greatly. Large numbers of draw calls make us slow: the CPU is communicating with the GPU a lot, and we don’t want that.

Ideally we just want one mesh to draw, although at some point we have to cut it up into chunks for level of detail, frustum and occlusion culling to reduce GPU load. But then we are adding draw calls to improve performance, not lose it!

The problem with Atlases
When you create a texture atlas, you pack several textures into one, so that one material and one mesh can be created.

Without proper tooling an artist may have to manually combine meshes, combine textures, and move texture coordinates into the right part of the atlas. It also limits texture coordinates to the 0 to 1 range: texture coordinates beyond that range, used to introduce tiling, would now sample different textures in the atlas.

Then there is a problem with mip mapping. If we naively mip map non-square textures, we can get a lot of bleeding between the individual textures. If all our textures are the same resolution, and a tool mip maps before atlassing, we can mitigate this issue somewhat.

Then we just have the problem of (tri)linear interpolation bleeding across borders. If texture coordinates touch the edge of a texture in the atlas, the pixel starts being interpolated with the adjacent pixel.

We can again mitigate this by moving our texture coordinates one pixel away from the texture borders, but as the mip level increases the resolution decreases, so to do this without issues we must consider the highest mip level and leave a border as big as the texture itself. That wastes far too much space.
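To see why that border gets so big: a one-texel gutter at mip level n corresponds to 2^n texels at mip 0, so preserving the gutter through the whole mip chain costs a border as wide as the texture. A quick back-of-the-envelope sketch (helper names are mine, not from any tool):

```python
def gutter_for_mip(mip_level):
    # each mip halves the resolution, so a single border texel at mip n
    # covers 2**n texels at mip 0
    return 2 ** mip_level

def mip_count(size):
    # full mip chain of a power-of-two texture: size, size/2, ..., 1
    count = 1
    while size > 1:
        size //= 2
        count += 1
    return count

size = 1024
coarsest = mip_count(size) - 1   # index of the 1x1 mip
print(gutter_for_mip(coarsest))  # 1024: a border as big as the texture itself
```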

So, mip-mapping and texture atlassing are not exactly good friends.

Introducing Texture Arrays
Texture arrays are just that: a list of textures. This allows us to combine e.g. the color maps of a bunch of materials into one array, so that one material can use the array instead of us needing multiple materials. The limitation is that all textures must have the same size and internal format.

It brings back the ability to use mip mapping, and has all the other benefits of atlassing (fewer materials leading to fewer draw calls).

The good news is that all an artist needs to do is assign a per-vertex attribute to identify which texture to use (if you have a vertex color multiplier, consider sacrificing its alpha channel; add a w component to your normal; whatever works).

The bad news is that we need to do some tooling to make this work at all (there is no real manual way for an artist to create a texture array and existing shaders will not support them).

There is a risk of pushing too many textures into one array: it becomes hard to track memory if textures of unused assets are mixed in with data we actually need to load. Matching the vertex attributes in use against the texture array size could help find unused entries.
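As a rough illustration of that analysis (all names hypothetical): collect the layer indices the meshes actually reference and compare them against the array depth, which is a simple set difference:

```python
def unused_layers(array_depth, used_indices):
    """Return layer indices present in the texture array but never
    referenced by any mesh (candidates for removal)."""
    return sorted(set(range(array_depth)) - set(used_indices))

# e.g. meshes reference layers 0, 2 and 5 of an 8-layer array
print(unused_layers(8, [0, 2, 5, 2, 0]))  # [1, 3, 4, 6, 7]
```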

I did some of this in Unity while experimenting with how viable a solution this is. The code is not really polished and I didn’t use editor scripts (because with an ExecuteInEditMode component I could avoid writing UI), but I’ll share it anyway!

This script can take a set of materials and write a given set of properties to a folder as texture arrays (plus a material using those texture arrays).

using System;
using System.Linq;
using System.IO;
using UnityEngine;
using UnityEditor;

/* Match material input names with their respective texture (array) settings. */
[Serializable]
struct PropertyCombineSettings
{
    public string name; // material texture2D property to put in array
    public int width; // assume all materials have textures of this resolution
    public int height;
    public Color fallback; // if the property isn't used use this color ((0,0.5,0,0.5) for normals)
    public TextureFormat format; // assume all materials have textures of this format
    public bool linear; // are inputs linear? (true for normal maps)
}

[ExecuteInEditMode]
public class MaterialArray : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] Material[] inputs; // List of materials to push into texture array.
    [SerializeField] string outputPath; // Save created texture arrays in this folder.
    [SerializeField] PropertyCombineSettings[] properties; // Set of material inputs to process (and how).

    void Update()
    {
        // Run once in Update() and then disable again so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // Ensure we have a folder to write to
        string absPath = Path.GetFullPath(outputPath);
        if (!Directory.Exists(absPath))
        {
            Debug.Log(String.Format("Path not found {0}", absPath));
            return;
        }

        // Combine one property at a time
        Texture2DArray[] results = new Texture2DArray[properties.Length];
        for(int i = 0; i < properties.Length; ++i)
        {
            // Delete existing texture arrays from disk as we can not alter them
            PropertyCombineSettings property = properties[i];
            string dst = outputPath + "/" + property.name + ".asset";
            if (File.Exists(dst))
            {
                AssetDatabase.DeleteAsset(dst);
            }

            // Create new texture array (of right resolution and format) to write to
            Texture2DArray output = new Texture2DArray(property.width, property.height, inputs.Length, property.format, true, property.linear);
            results[i] = output;

            Texture2D fallback = null;
            int layerIndex = 0;
            
            // For each material process the property for this array
            foreach (Material input in inputs)
            {
                Texture2D layer = input.GetTexture(property.name) as Texture2D;

                // If the material does not have a texture for this slot, fill the array with a flat color
                if (layer == null)
                {
                    Debug.Log(String.Format("Skipping empty parameter {0} for material {1}", property.name, input));
                    if(fallback == null)
                    {
                        // Generate a fallback texture with a flat color of the right format and size
                        TextureFormat fmt = property.format;
                        if (fmt == TextureFormat.DXT1) // We can't write to compressed formats, use uncompressed version and then compress
                            fmt = TextureFormat.RGB24;
                        else if (fmt == TextureFormat.DXT5)
                            fmt = TextureFormat.RGBA32;
                        fallback = new Texture2D(property.width, property.height, fmt, true, property.linear);
                        fallback.SetPixels(Enumerable.Repeat(property.fallback, property.width * property.height).ToArray());
                        fallback.Apply();
                        if (fmt != property.format) // Compress to final format if necessary
                            EditorUtility.CompressTexture(fallback, property.format, TextureCompressionQuality.Fast);
                    }
                    layer = fallback;
                }

                // Validate input data
                if (layer.format != property.format)
                {
                    Debug.LogError(String.Format("Format mismatch on {0} / {1}. Is {2}, must be {3}.", input, property.name, layer.format, property.format));
                    layerIndex += 1;
                    continue;
                }

                if (layer.width != property.width || layer.height != property.height)
                {
                    Debug.LogError(String.Format("Resolution mismatch on {0} / {1}", input, property.name));
                    layerIndex += 1;
                    continue;
                }

                // Copy input texture into array
                Graphics.CopyTexture(layer, 0, output, layerIndex);
                layerIndex += 1;
            }
            AssetDatabase.CreateAsset(output, dst);
        }

        // Create or get a material and assign the texture arrays
        // Unity keeps losing connections when re-saving the texture arrays so this is my workaround to avoid manually allocating
        string mtlDst = outputPath + ".mat";
        Material mtl = AssetDatabase.LoadAssetAtPath<Material>(mtlDst);
        bool create = false;
        if(mtl == null)
        {
            create = true;
            mtl = new Material(Shader.Find("Custom/NewShader"));
        }

        for (int i = 0; i < properties.Length; ++i)
        {
            PropertyCombineSettings property = properties[i];
            mtl.SetTexture(property.name, results[i]);
        }

        if (create)
        {
            AssetDatabase.CreateAsset(mtl, mtlDst);
        }

        AssetDatabase.SaveAssets();
    }
}

This is a surface shader that mimics Unity's standard shader for a large part, but using texture arrays! It reads the texture index from uv2.y, and it assumes uv2.x contains the actual uv2 as two float16 values packed together.

Shader "Custom/NewShader" {
	Properties {
		_Color("Color", Color) = (1,1,1,1)
		_MainTex ("Albedo (RGB)", 2DArray) = "" {}

		// _Glossiness("Smoothness", Range(0.0, 1.0)) = 0.5
		// _GlossMapScale("Smoothness Scale", Range(0.0, 1.0)) = 1.0
		// [Enum(Metallic Alpha,0,Albedo Alpha,1)] _SmoothnessTextureChannel("Smoothness texture channel", Float) = 0
		_MetallicGlossMap("Metallic", 2DArray) = "" {}

		_BumpScale("Scale", Float) = 1.0
		[Normal] _BumpMap("Normal Map", 2DArray) = "" {}

		_Parallax("Height Scale", Range(0.005, 0.08)) = 0.02
		_ParallaxMap("Height Map", 2DArray) = "" {}

		_OcclusionStrength("Strength", Range(0.0, 1.0)) = 1.0
		_OcclusionMap("Occlusion", 2DArray) = "" {}
		
		// _EmissionColor("Color", Color) = (0,0,0)
		// _EmissionMap("Emission", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard fullforwardshadows

		// Texture arrays require shader model 3.5 or later
		#pragma target 3.5

		fixed4 _Color;
		UNITY_DECLARE_TEX2DARRAY(_MainTex);
		UNITY_DECLARE_TEX2DARRAY(_MetallicGlossMap);
		half _Metallic;
		half _BumpScale;
		UNITY_DECLARE_TEX2DARRAY(_BumpMap);
		half _Parallax;
		UNITY_DECLARE_TEX2DARRAY(_ParallaxMap);
		half _OcclusionStrength;
		UNITY_DECLARE_TEX2DARRAY(_OcclusionMap);


		struct Input
		{
			float2 uv_MainTex;
			float2 uv2_BumpMap;
			float3 viewDir;
		};

		// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
		// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
		// #pragma instancing_options assumeuniformscaling
		UNITY_INSTANCING_BUFFER_START(Props)
		// put more per-instance properties here
		UNITY_INSTANCING_BUFFER_END(Props)

		void surf (Input IN, inout SurfaceOutputStandard o) 
		{
			uint xy = asuint(IN.uv2_BumpMap.x);
			uint mask = ((1 << 16) - 1);
			float2 uv2 = float2(asfloat(uint(xy & mask)),
								asfloat(uint((xy >> 16) & mask)));
			float textureIndex = IN.uv2_BumpMap.y;

			float2 offsetMainTex = ParallaxOffset(UNITY_SAMPLE_TEX2DARRAY(_ParallaxMap, float3(IN.uv_MainTex, textureIndex)).r, _Parallax, IN.viewDir);
			float3 uv = float3(IN.uv_MainTex + offsetMainTex, textureIndex);
			
			fixed4 c = UNITY_SAMPLE_TEX2DARRAY(_MainTex, uv) * _Color;
			o.Albedo = c.rgb;

			fixed4 metal_smooth = UNITY_SAMPLE_TEX2DARRAY(_MetallicGlossMap, uv);
			o.Metallic = metal_smooth.r; // the standard shader stores metallic in the red channel
			o.Smoothness = metal_smooth.a;

			o.Normal = UnpackScaleNormal(UNITY_SAMPLE_TEX2DARRAY(_BumpMap, uv), _BumpScale);
			
			o.Occlusion = lerp(1.0, UNITY_SAMPLE_TEX2DARRAY(_OcclusionMap, uv).a, _OcclusionStrength);

			o.Alpha = 1.0;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

The last script I wrote takes a mesh from an imported model and writes it to a new, separate mesh asset with an index set into uv2.y. It also packs the original uv2 into uv2.x.

using UnityEngine;
using UnityEditor;
using System;
using System.IO;

[Serializable]
struct MeshArray
{
    public Mesh[] data;
}

[ExecuteInEditMode]
public class ArrayIndexSetter : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] MeshArray[] meshesPerIndex; // Primary index specifies material, the list of meshes then all get this material.

    void Update()
    {
        // Run once in Update() and then disable again so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // For each set of meshes assume the index is what we want to specify as material index.
        for (int index = 0; index < meshesPerIndex.Length; ++index)
        {
            // Alter each mesh to contain the index
            foreach (Mesh sharedMesh in meshesPerIndex[index].data)
            {
                // TODO: try to update previously generated version instead of instantiating.

                // Duplicate the mesh (without doing this we can't use 
                // CreateAsset as it will try to update the existing asset which, 
                // for example, may be a part of an FBX file).
                string assetPath = AssetDatabase.GetAssetPath(sharedMesh);
                Mesh mesh = AssetDatabase.LoadAssetAtPath<Mesh>(assetPath);
                mesh = Instantiate(mesh) as Mesh;

                // Query or allocate a UV2 attribute to store the index in
                Vector2[] uv2 = mesh.uv2;
                if (uv2 == null || uv2.Length != mesh.vertexCount)
                    uv2 = new Vector2[mesh.vertexCount];
                for (int i = 0; i < uv2.Length; ++i)
                {
                    // truncate existing data and pack into X component
                    byte[] x = BitConverter.GetBytes(uv2[i].x);
                    byte[] y = BitConverter.GetBytes(uv2[i].y);
                    byte[] data = { x[0], x[1], y[0], y[1] };
                    uv2[i].x = BitConverter.ToSingle(data, 0);
                    // add our index to the end
                    uv2[i].y = index;
                }

                // update and serialize
                mesh.uv2 = uv2;
                string dst = assetPath + "_indexed.asset";
                if (File.Exists(dst))
                    File.Delete(dst);
                AssetDatabase.CreateAsset(mesh, dst);
            }
        }

        AssetDatabase.SaveAssets();
    }
}
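The loop above truncates the raw float bytes when packing uv2 into the x component. If you want the "two float16 packed together" interpretation to round-trip exactly, one option is to convert through real IEEE half-floats; here is a sketch of that idea in Python (using the struct module's half-float format as a stand-in for a C# implementation):

```python
import struct

def pack_uv_to_float(u, v):
    # encode each coordinate as an IEEE 754 half (float16), then
    # reinterpret the combined 4 bytes as a single float32
    raw = struct.pack('<e', u) + struct.pack('<e', v)
    return struct.unpack('<f', raw)[0]

def unpack_uv_from_float(f):
    # split the float32 bits back into the two halves
    raw = struct.pack('<f', f)
    u = struct.unpack('<e', raw[0:2])[0]
    v = struct.unpack('<e', raw[2:4])[0]
    return u, v

packed = pack_uv_to_float(0.5, 0.25)
print(unpack_uv_from_float(packed))  # (0.5, 0.25)
```

Note that depending on the input, the packed float32 bit pattern can land on a NaN encoding, so in practice you would write the packed values straight into the vertex buffer rather than pass them through arithmetic.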

The result can render these 4 meshes with 4 different looks in a single draw call.
The meshes are generated and set to static so Unity can combine them.
In this screenshot you see 2 draw calls, as there is the draw-and-shade call and the blit-to-screen call.
Enabling shadows would add a shadow cast and a collect call on top, but subsequent meshes would not increase this count.

PS: The textures I used come from https://freepbr.com/.

Parallax mapping by marching

I had this idea thanks to the existence of raymarching. I'm currently figuring out self shadowing. The last half hour was spent messing with defines and such to get it to work on PS2.0, which it does now, although I had to strip out _Color and _SpecularColor in that version (the properties are still defined, the multiplications simply aren’t done).

I'm also seeing if maybe I should make a PS3.0 version with loops instead of this hideous stack of if-checks. I just didn’t want to struggle with compiling, so this was easier for now.
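For reference, the loop the PS3.0 version would use can be sketched on the CPU. This is a minimal Python illustration of the marching idea (function and parameter names are made up), not the shader itself:

```python
def parallax_march(uv, uvstep, height_at, steps=10):
    """Step along the view ray in texture space until the ray's depth
    exceeds the depth stored in the height map; return the offset uv.
    height_at(u, v) returns surface height in [0, 1] (1 = at surface)."""
    depth_step = 1.0 / steps
    ray_depth = 0.0
    u, v = uv
    for _ in range(steps):
        surface_depth = 1.0 - height_at(u, v)  # convert height to depth
        if ray_depth >= surface_depth:
            break  # the ray has hit the surface
        u += uvstep[0]
        v += uvstep[1]
        ray_depth += depth_step
    return (u, v)

# flat height map at full height: the ray hits immediately, uv is unchanged
print(parallax_march((0.2, 0.3), (0.01, 0.0), lambda u, v: 1.0))  # (0.2, 0.3)
```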

Shader "Custom/ParallaxMarching" 
{
	Properties 
	{
		_MainTex ("Base (RGBA)", 2D) = "white" {}
		_Color ("Color (RGBA)", Color) = (1,1,1,1)
		_NormalMap ("Tangent normals", 2D) = "bump" {}
		_HeightMap ("Height map (R)", 2D) = "white" {}
		_Intensity ("Intensity", Float) = 0.001
		
		_SpecularPower ("Specular power", Float) = 100
		_SpecularFresnel ("Specular fresnel falloff", Float) = 4
		_SpecularTex ("Specular texture (RGB)", 2D) = "white" {}
		_SpecularColor ("Specular color (RGB)", Color) = (1,1,1,1)
	}
	
	CGINCLUDE
	//only 4 steps in shader program 2.0
	//10 steps is max & prettiest, 2 steps is min
	#define PARALLAX_STEPS 4
	#define INTENSITYSCALE (5.0/PARALLAX_STEPS)
	//#define SPECULAR_FRESNEL
	#define OPTIMIZE_PS20
	
	uniform sampler2D _MainTex;
	uniform half4 _Color;
	uniform sampler2D _NormalMap;
	uniform sampler2D _HeightMap;
	uniform half _Intensity;
	uniform half _SpecularPower;
	uniform half _SpecularFresnel;
	uniform sampler2D _SpecularTex;
	uniform half4 _SpecularColor;
	
	#include "BaseFunctions.cginc"
	
	half4 frag_parallax(v2f i) : COLOR
	{
		//get some normalized vectors
		half3 worldBiTangent = cross(i.worldTangent, i.worldNormal);
		half3 cameraDirection = normalize(i.worldPosition - _WorldSpaceCameraPos);
		
		//determine what the tangent space step is from this view angle
		half2 uvstep = half2( dot( cameraDirection, i.worldTangent ),
		  dot( cameraDirection, worldBiTangent ) ) * _Intensity;
		uvstep *= INTENSITYSCALE;
		
		//iteratively sample until a point is hit
		half2 uv = i.uv;
		
		#if PARALLAX_STEPS > 1
		half mapDepth0 = 1-tex2D(_HeightMap, uv).r;
		half mapDepth1 = 1-tex2D(_HeightMap, uv + uvstep).r;
		#endif
		#if PARALLAX_STEPS > 2
		half mapDepth2 = 1-tex2D(_HeightMap, uv + uvstep*2).r;
		#endif
		#if PARALLAX_STEPS > 3
		half mapDepth3 = 1-tex2D(_HeightMap, uv + uvstep*3).r;
		#endif
		#if PARALLAX_STEPS > 4
		half mapDepth4 = 1-tex2D(_HeightMap, uv + uvstep*4).r; 
		#endif
		#if PARALLAX_STEPS > 5
		half mapDepth5 = 1-tex2D(_HeightMap, uv + uvstep*5).r;
		#endif
		#if PARALLAX_STEPS > 6
		half mapDepth6 = 1-tex2D(_HeightMap, uv + uvstep*6).r;
		#endif
		#if PARALLAX_STEPS > 7
		half mapDepth7 = 1-tex2D(_HeightMap, uv + uvstep*7).r;
		#endif
		#if PARALLAX_STEPS > 8
		half mapDepth8 = 1-tex2D(_HeightMap, uv + uvstep*8).r;
		#endif
		#if PARALLAX_STEPS > 9
		half mapDepth9 = 1-tex2D(_HeightMap, uv + uvstep*9).r;
		#endif
		
		#if PARALLAX_STEPS > 9
		half depthStep = 0.1;
		#else
		half depthStep = 0.2;
		#endif
		
		#if PARALLAX_STEPS > 1
		if( mapDepth0 > 0 && mapDepth1 > depthStep )
		{
			uv = uv + uvstep;
			#if PARALLAX_STEPS > 2
			if( mapDepth2 > depthStep*2 )
			{
				uv = uv + uvstep*2;
				#if PARALLAX_STEPS > 3
				if( mapDepth3 > depthStep*3 )
				{
					uv = uv + uvstep*3; 
					#if PARALLAX_STEPS > 4
					if( mapDepth4 > depthStep*4 )
					{
						uv = uv + uvstep*4;
						#if PARALLAX_STEPS > 5
						if( mapDepth5 > depthStep*5 )
						{
							uv = uv + uvstep*5;
							#if PARALLAX_STEPS > 6
							if( mapDepth6 > depthStep*6 )
							{
								uv = uv + uvstep*6;
								#if PARALLAX_STEPS > 7
								if( mapDepth7 > depthStep*7 )
								{
									uv = uv + uvstep*7;
									#if PARALLAX_STEPS > 8
									if( mapDepth8 > depthStep*8 )
									{
										uv = uv + uvstep*8;
										
										#if PARALLAX_STEPS > 9
										if( mapDepth9 > depthStep*9 )
										{
											uv = uv + uvstep*9;
										}
										#endif
									}
									#endif
								}
								#endif
							}
							#endif
						}
						#endif
					} 
					#endif
				}
				#endif
			}
			#endif
		}
		#endif
		
		//apply normal mapping
		half3 N = half3(tex2D(_NormalMap, i.uv).ra*2.0-1.0, 1.0);
		N = normalize( mul( N, float3x3(i.worldTangent, worldBiTangent, i.worldNormal) ) );
		
		//implement some lighting
		half3 L = _WorldSpaceLightPos0.xyz;
		half atten = 1.0;
		#ifndef OPTIMIZE_PS20
		if( _WorldSpaceLightPos0.w == 1 )
		{
			L -= i.worldPosition;
		#endif
			#ifdef OPTIMIZE_PS20
		 	//multiplying the worldPosition by 0 costs less instructions
			L -= i.worldPosition * _WorldSpaceLightPos0.w;
			#endif
			//it does mean that for directional lights these calculations are all useless and slow down the shader
			half invLightDistance = 1.0 / length(L);
			L *= invLightDistance;
			atten *= invLightDistance;
		#ifndef OPTIMIZE_PS20
		}
		#endif
		half NdotL = max(0, dot(L, N));
		
		half3 R = reflect(cameraDirection, N);
		half RdotL = max(0, dot(R, L))*atten;
		RdotL = pow(RdotL, _SpecularPower);
		
		#ifdef SPECULAR_FRESNEL
		half FR = dot(cameraDirection, N);
		if( _SpecularFresnel < 0 )
			RdotL *= pow(1-FR, _SpecularFresnel);
		else
			RdotL *= pow(FR, _SpecularFresnel);
		#endif
		
		half4 outColor = NdotL * _LightColor0 * tex2D(_MainTex, uv)
#ifndef OPTIMIZE_PS20
		* _Color.xyz
#endif
;
		outColor.xyz += RdotL * _LightColor0.xyz * tex2D(_SpecularTex, uv).xyz
#ifndef OPTIMIZE_PS20
		* _SpecularColor.xyz
#endif
;
		return outColor;
	}
	ENDCG
	
	SubShader 
	{
		Tags { "RenderType"="Opaque" }
		
		Pass
		{ 
			Tags{ "LightMode" = "ForwardBase" }
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag_parallax
			#pragma target 2.0
			#pragma only_renderers d3d9 
			#pragma fragmentoption ARB_precision_hint_fastest 
			//required for lights to update
			#pragma multi_compile_fwdbase_fullshadows
			ENDCG
		}
	} 
	FallBack "Diffuse"
}

Applying decals

Creating a tool for applying decals is fairly useful…

I consider the case in which a decal is a single planar polygon with a texture (be it transparent, cutout or neither).

An artist would want to take the polygon (with texture) and parent it to the camera; then the artist can move around until a nice spot for the decal is found. In the meantime the decal can be panned and rotated (only around the camera's forward axis) so it always remains in a plane parallel to the viewport.

Once the decal looks nice from the point of view of the camera, it needs to be applied in 3D and separated from the camera.

This initially is a straightforward action: for each vertex we need to shoot a ray from the camera position, through the vertex, onto our world or target mesh. The intersection point minus a little offset (to avoid coplanar faces which cause flickering) is where the vertex should be set.
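As a minimal sketch of that projection step, here is the single-plane case in Python (a stand-in for a raycast against the actual target mesh; all names are mine):

```python
def project_vertex(camera, vertex, plane_point, plane_normal, offset=0.01):
    """Shoot a ray from the camera through the vertex and place the vertex
    on the plane, pulled back slightly along the ray to avoid z-fighting."""
    # ray direction through the vertex
    d = [vertex[i] - camera[i] for i in range(3)]
    denom = sum(d[i] * plane_normal[i] for i in range(3))
    if abs(denom) < 1e-9:
        return vertex  # ray parallel to the surface; leave vertex untouched
    t = sum((plane_point[i] - camera[i]) * plane_normal[i] for i in range(3)) / denom
    # intersection minus a small offset back towards the camera
    return [camera[i] + d[i] * (t - offset) for i in range(3)]

# camera at the origin looking down +z, plane at z = 5
print(project_vertex((0, 0, 0), (0.5, 0.0, 1.0), (0, 0, 5), (0, 0, 1)))
```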

Problems arise, however, when applying the decal to a curved surface: it will float in front of, or penetrate through, the surface…

This can be solved in another way. It is fastest if we know which mesh we want to apply our decal to; otherwise we’d need to combine the world's meshes for this, and that gets computationally heavy very quickly.

1. Transform the mesh to world space, then into camera space (or the decal’s object space).
2. Divide x and y by z to project the mesh onto the view plane; from this point on, our case is 2D.
3. For each edge in the mesh, check for an intersection with each edge of the decal polygon.
4. If they intersect, insert a vertex into the polygon (or schedule the insertion until all checks are performed). For a simple, single, non-triangulated polygon this is easily accomplished by inserting the point at the right place in the vertex list.
5. Merge vertices with a small tolerance, in case we grazed the tip of a triangle and split redundantly often at some point.
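The edge-versus-edge check above boils down to a 2D segment intersection test; a sketch (hypothetical helper):

```python
def segment_intersection(p1, p2, p3, p4, eps=1e-9):
    """Return the intersection point of segments p1-p2 and p3-p4,
    or None if they don't cross."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product
    if abs(denom) < eps:
        return None  # parallel or collinear
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    s = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    return None

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```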

Here we have an original vertex and three splits very close to each other; merging these will make the result cleaner and give less ugly normals.

Next things to do:
Raycast all the vertices in world space onto the mesh in world space, as described before.
Triangulate.
Calculate normals (cross product of two edges of each triangle: (p2-p0) X (p1-p0)).
Unparent from the camera (if you wrote a tool that did that for the artist).
Done!
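The normal calculation step, written out as a small sketch (names are mine):

```python
def triangle_normal(p0, p1, p2):
    """Unnormalized face normal via the cross product (p2-p0) x (p1-p0),
    matching the winding used in the text."""
    a = [p2[i] - p0[i] for i in range(3)]
    b = [p1[i] - p0[i] for i in range(3)]
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

print(triangle_normal((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # (0, 0, 1)
```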

Note that it is possible to flatten the decal polygon in camera space as well, so that it does not need to be parented to the camera and a user is free to modify it in any way.

Delaunay Triangulation


I was trying to cap polygons with holes, and all my gaps were coplanar. Instead of my initial solution (finding edge rings and capping those, which left me with no clue how to cut out holes) I decided to look up how to go about this. It was really a learning exercise, and it proved more complex than I initially anticipated.

So the idea, and some illustrations displaying that I wasn’t the first to run into the problem of capping edge rings come from here:
Jose Esfer

But the most difficulty I had was with finding a good reference. There are a lot of scientific papers that go very, very far beyond my understanding of mathematical jargon, and there are a number of source code examples in various languages, but the few I opened were lengthy. So I knew the only way to understand this was with a decent explanation, which I finally discovered here:
Computing constrained Delaunay triangulations

The explanation is pretty clear, although it took me over a day to implement; further errors are yet to be discovered…

The only thing to watch out for is the explanation of the actual triangulation algorithm: where the first ‘New LR-edge’ is inserted, some edge gets deleted in one image yet is not deleted in the next (and in my implementation it is hence never deleted). Following what the text actually says got me there.
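For anyone implementing along: the predicate the connection selection in a Delaunay triangulation hinges on is the in-circumcircle test. Here is a sketch of the standard determinant form (this is my own illustration, not code from the linked article):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c) -- the test that candidate
    connections in Delaunay triangulation are chosen by."""
    rows = []
    for p in (a, b, c):
        # translate so d is at the origin, append the squared length column
        px, py = p[0] - d[0], p[1] - d[1]
        rows.append((px, py, px * px + py * py))
    # 3x3 determinant; positive means d is inside the circle
    det = (rows[0][0] * (rows[1][1] * rows[2][2] - rows[1][2] * rows[2][1])
         - rows[0][1] * (rows[1][0] * rows[2][2] - rows[1][2] * rows[2][0])
         + rows[0][2] * (rows[1][0] * rows[2][1] - rows[1][1] * rows[2][0]))
    return det > 0

tri = ((0, 0), (1, 0), (0, 1))            # CCW triangle
print(in_circumcircle(*tri, (0.5, 0.5)))  # True: inside the circumcircle
print(in_circumcircle(*tri, (2, 2)))      # False: well outside
```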

UPDATE: Added debug output -> define UDT_Debug in the conditional compilation symbols of the project settings (also for the editor project) and get some control over where to stop triangulating. It displays potential new connection candidates as circles (green and blue), as well as the last edge (red) and the last deleted edge(s) (green).

Also randomization now generates input when you check it and then unchecks itself.

I’m not sure if it’s useful to post the source I ended up with, because it’ll be just another example out there, but since it’s for Unity it may be easier to get running. Do select the GameObject you attach it to, though, because it relies on OnDrawSceneGUI, which only gets called for selected objects.

It also displays the current vertex numbers, which may cause some confusion: the vertices get dynamically rearranged, so the numbers do not reflect your actual input.

Grab the package here!