C++ Windows messages

I find myself always wanting to pair asserts, exceptions and log messages with sprintf-style formatting. So here are some utilities using OutputDebugString and MessageBox, for future reference!

#include <Windows.h>
#include <cstdarg>
#include <cstdio>

static void MessageImpl(const char* title, unsigned int flags, const char* fmt, va_list args)
{
	// vsnprintf consumes the va_list, so measure the required size on a copy first.
	va_list copy;
	va_copy(copy, args);
#pragma warning(suppress:28719)    // C28719: vsnprintf is a "banned" API, but we size the buffer explicitly
	int size = vsnprintf(nullptr, 0, fmt, copy);
	va_end(copy);

	// vsnprintf returns the length excluding the null terminator, so allocate one extra byte.
	char* message = new char[size + 1];
	vsnprintf(message, size + 1, fmt, args);

	if (IsDebuggerPresent())
		OutputDebugStringA(message);
	else
		MessageBoxA(0, message, title, flags);
	delete[] message;
}

void Info(const char* fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	MessageImpl("Info", MB_OK | MB_ICONINFORMATION, fmt, args);
	va_end(args);
}

void Warning(const char* fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	MessageImpl("Warning", MB_OK | MB_ICONWARNING, fmt, args);
	va_end(args);
	if (IsDebuggerPresent())
		DebugBreak();
}

void Error(const char* fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	MessageImpl("Error", MB_OK | MB_ICONERROR, fmt, args);
	va_end(args);
	if (IsDebuggerPresent())
		DebugBreak();
}

void Fatal(const char* fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	MessageImpl("Fatal", MB_OK | MB_ICONERROR, fmt, args);
	va_end(args);
	if (IsDebuggerPresent())
		DebugBreak();
	else
		ExitProcess(1);
}

void Assert(bool expression)
{
	if (expression)
		return;
	if (IsDebuggerPresent())
		DebugBreak();
}

void Assert(bool expression, const char* fmt, ...)
{
	if (expression)
		return;
	va_list args;
	va_start(args, fmt);
	MessageImpl("Error", MB_OK | MB_ICONERROR, fmt, args);
	va_end(args);
	if (IsDebuggerPresent())
		DebugBreak();
}

void AssertFatal(bool expression)
{
	if (expression)
		return;
	if (IsDebuggerPresent())
		DebugBreak();
	else
		ExitProcess(1);
}

void AssertFatal(bool expression, const char* fmt, ...)
{
	if (expression)
		return;
	va_list args;
	va_start(args, fmt);
	MessageImpl("Error", MB_OK | MB_ICONERROR, fmt, args);
	va_end(args);
	if (IsDebuggerPresent())
		DebugBreak();
	else
		ExitProcess(1);
}

Converting Unreal4 textures to Unity

I ported some assets from UE4 to Unity and wrote a script to split up textures to match Unity’s material system. Additionally I wrote a script in Unity to auto-create materials for the output.

It does assume a very strict naming and folder layout, and I have not looked at any color conversions.

I don’t know if Unity and UE4 have the same roughness response for example, or whether I need to pow() the roughness channel by 0.5 or 2.0, nor do I know if I need to do gamma correction on some of the data.

Example input files:
“Textures/Bricks_BaseColor.png”
“Textures/Bricks_Normal.png”
“Textures/Bricks_OcclusionRoughnessMetallic.png”
“Textures/Grass_BaseColor.png”
“Textures/Grass_Normal.png”
“Textures/Grass_OcclusionRoughnessMetallic.png”

For every texture set name we must have exactly those three textures: _BaseColor.png, _Normal.png and _OcclusionRoughnessMetallic.png. This data will be interpreted as a “Bricks” and a “Grass” material.

The output will then be as follows:
“Textures/Unity/Bricks/Bricks _MainTex.png”
“Textures/Unity/Bricks/Bricks _BumpMap.png”
“Textures/Unity/Bricks/Bricks _OcclusionMap.png”
“Textures/Unity/Bricks/Bricks _MetallicGlossMap.png”

A new folder is introduced and the textures are separated into Unity’s material layout. File names match Unity material parameters so they can be picked up by a script.
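The per-pixel channel surgery in the conversion script below boils down to a few mask-and-shift operations on packed 0xAARRGGBB pixels. As a minimal standalone sketch (plain Python ints, no Qt; the function names are mine, for illustration):

```python
def invert_green(argb):
    # flip the green channel (DirectX- vs OpenGL-style normal maps)
    g = (argb >> 8) & 0xff
    return (argb & ~0x0000ff00) | ((255 - g) << 8)

def orm_to_occlusion(argb):
    # broadcast occlusion (red) into a monochrome RGB with full alpha
    r = argb & 0x00ff0000
    return 0xff000000 | r | (r >> 8) | (r >> 16)

def orm_to_metallic_gloss(argb):
    # metallic (blue) moves to red; inverted roughness (green) becomes smoothness in alpha
    g = (argb >> 8) & 0xff
    b = argb & 0xff
    return ((255 - g) << 24) | (b << 16)
```

For example, `invert_green(0xff00ff00)` yields `0xff000000`: a full green channel becomes zero, everything else is untouched.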

import shutil
import os
import ctypes
import traceback

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from contextlib import contextmanager


@contextmanager
def edit(path):
    img = QImage(path).convertToFormat(QImage.Format_ARGB32)
    bits = ctypes.c_void_p(img.bits().__int__())
    bits = ctypes.cast(bits, ctypes.POINTER(ctypes.c_int * (img.width() * img.height())))[0]
    yield bits, img.width(), img.height()
    img.save(path)


def gather():
    textureSets = {}
    for name in os.listdir('.'):
        if not name.lower().endswith('.png'):
            continue
        key, sub = name.rsplit('_', 1)
        data = textureSets.get(key, [])
        data.append(sub)
        textureSets[key] = data

    # validate sets
    for name, members in textureSets.iteritems():
        if set(members) != set(('BaseColor.png', 'Normal.png', 'OcclusionRoughnessMetallic.png')):
            raise RuntimeError('Unexpected texture set %s' % name)

    return textureSets


def convert(name):
    # ensure the output folder exists
    outDir = 'Unity/%s' % name
    if not os.path.isdir(outDir):
        os.makedirs(outDir)

    # copy maps
    shutil.copy(name + '_BaseColor.png', 'Unity/%s/%s _MainTex.png' % (name, name))

    dst = 'Unity/%s/%s _BumpMap.png' % (name, name)
    shutil.copy(name + '_Normal.png', dst)

    # flip normal map green channel
    with edit(dst) as data:
        pixels, width, height = data
        for px in xrange(width * height):
            # invert green
            g = pixels[px] & 0x0000ff00
            g = (255 - (g >> 8)) << 8 # overwrite green arb = pixels[px] & ~(0x0000ff00) pixels[px] = arb | g # split occlusion from roughness & metallic dst = 'Unity/%s/%s _OcclusionMap.png' % (name, name) shutil.copy(name + '_OcclusionRoughnessMetallic.png', dst) # occlusion can be in A8 format, make monochrome with edit(dst) as data: pixels, width, height = data for px in xrange(width * height): r = (pixels[px] & 0x00ff0000) pixels[px] = 0xff000000 | r | r >> 8 | r >> 16

    dst = 'Unity/%s/%s _MetallicGlossMap.png' % (name, name)
    shutil.copy(name + '_OcclusionRoughnessMetallic.png', dst)

    # unity metallic & smoothness live in R and A respectively
    with edit(dst) as data:
        pixels, width, height = data
        for px in xrange(width * height):
            g = pixels[px] & 0x0000ff00
            b = pixels[px] & 0x000000ff
            g = (255 - (g >> 8)) << 24
            pixels[px] = g | (b << 16)


def run():
    textureSets = gather()
    diag = QProgressDialog()
    diag.setMaximum(len(textureSets))
    diag.show()
    for i, name in enumerate(textureSets):
        diag.setLabelText('Converting: ' + name)
        diag.setValue(i)
        QApplication.processEvents()
        if diag.wasCanceled():
            break
        convert(name)


if __name__ == '__main__':
    app = QApplication([])
    try:
        run()
    except Exception as e:
        QMessageBox.critical(None, 'Error!', str(e) + '\n\n' + traceback.format_exc())
    app.exec_()

This script must be attached to a game object. Then we enter the folder where we generated the Unity-format textures (it must be inside Assets/; I usually generate first and then copy into the Unity project). Last we tick “run” and it will create a material per folder with all textures assigned. A pop-up will ask us to convert the normal maps to the right format and we’re done!

Last but not least, it is possible to compress the _OcclusionMap assets to a texture of type “Single Channel” with “Alpha from Grayscale” enabled. I have no reason to believe Unity does this automatically, so it will save some memory.

using System.IO;
using System;
using UnityEngine;
using UnityEditor;

[Serializable]
struct StringPair
{
    public string input;
    public string output;
}

/*
Given a folder, processes all subfolders recursively. 
If a folder has no subfolders, consider it contains textures for a specific material.
Texture names are assumed to be "<folderName> <materialPropertyName>".
There is a renamePairs attribute to rename from one property name to another, e.g.
when you already have a lot of "<folderName> normal" you can rename "normal" to "_BumpMap".

Materials are generated next to the textures, matching the folder name. 
Example file structure:
Textures/
    RustyIron/
        RustyIron _MainTex.png
        RustyIron _BumpMap.png
*/
[ExecuteInEditMode]
public class MaterialGenerator : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] string textureSetsRoot = "Assets/Materials/"; // Point to a folder that contains all material sets.
    [SerializeField] StringPair[] renamePairs; // Rename material properties found from input to output before trying to set it on materials.
    [SerializeField] string shaderName = "Standard";

    void ProcessMaterial(string textureSetDir)
    {
        string materialName = Path.GetFileName(textureSetDir);

        string dst = textureSetDir + "/" + materialName + ".mat";

        Material mtl = AssetDatabase.LoadAssetAtPath<Material>(dst);

        bool create = false;

        if (mtl == null)
        {
            create = true;
            mtl = new Material(Shader.Find(shaderName));
        }
        else
        {
            mtl.shader = Shader.Find(shaderName);
        }

        foreach (string texturePath in Directory.GetFiles(textureSetDir))
        {
            Texture texture = AssetDatabase.LoadAssetAtPath<Texture2D>(texturePath);

            if (texture == null)
            {
                continue;
            }
            
            string attributeName = Path.GetFileName(texturePath).Substring(materialName.Length + 1);
            attributeName = attributeName.Substring(0, attributeName.LastIndexOf("."));
            foreach (StringPair rename in renamePairs)
            {
                if (rename.input.ToLower() == attributeName.ToLower())
                {
                    attributeName = rename.output;
                }
            }
            mtl.SetTexture(attributeName, texture);
        }

        if (create)
        {
            Debug.Log(String.Format("Saving material at {0}", dst));
            AssetDatabase.CreateAsset(mtl, dst);
        }
        else
        {
            Debug.Log(String.Format("Updating material at {0}", dst));
        }
    }

    void ParseRecursively(string parentDirectory)
    {
        string[] subDirectories = Directory.GetDirectories(parentDirectory);
        if (subDirectories.Length == 0)
        {
            ProcessMaterial(parentDirectory);
        }
        else
        {
            foreach (string subDir in subDirectories)
            {
                ParseRecursively(subDir);
            }
        }
    }

    void Update()
    {
        // Run once in Update() and then disable again so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // Ensure our source data exists.
        string absPath = Path.GetFullPath(textureSetsRoot);
        if(!Directory.Exists(absPath))
        {
            Debug.Log(String.Format("Path not found {0}", absPath));
            return;
        }

        ParseRecursively(textureSetsRoot);

        AssetDatabase.SaveAssets();
    }
}

Important: I found that I had to view the generated materials in the inspector for them to pick up the normal maps!
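For reference, the filename-to-property convention the generator relies on can be sketched in a couple of lines (a hypothetical Python mirror of the C# Substring logic above; the function name is mine):

```python
def property_from_filename(filename, material_name):
    # "RustyIron _BumpMap.png" -> "_BumpMap"
    # assumes the file name is the material name, one separator character,
    # then the material property name, then an extension
    attribute = filename[len(material_name) + 1:]
    return attribute[:attribute.rindex('.')]
```

So `property_from_filename('RustyIron _BumpMap.png', 'RustyIron')` gives `'_BumpMap'`, which is exactly the string passed to `Material.SetTexture`.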

Texture Arrays in Unity

Recently I messed around with Texture Arrays as an alternative to Texture Atlases.

I’ve heard of this feature before but never really touched it, and I still see a lot of people doing texture atlassing. So here’s my two cents toward making the internet have more mentions of texture arrays!

Why combine any textures?
Because one model with one material is cheaper to draw than many models with many different materials (at the very least we must split models up per material).

So for every material we have a model, and for every model we have to do a draw call. Then it all happens again for each shadow map / cascade, and so on. This amplifies the draw calls per mesh greatly, and large numbers of draw calls make us slow: the CPU ends up communicating with the GPU a lot, and we don’t want that.

We ideally just want one mesh to draw, although at some point we have to cut it up into chunks for level of detail, frustum and occlusion culling to reduce GPU load; but then we are doing draw calls to improve performance, not lose it!

The problem with Atlases
When you create a texture atlas, you may put several textures into one, so that one material and one mesh can be created.

Without proper tooling, an artist may have to manually combine meshes, combine textures and move texture coordinates into the right part of the atlas. It also limits texture coordinates to the 0 to 1 range; large texture coordinates used to introduce tiling would now sample different textures in the atlas.

Then there is a problem with mip mapping. If we naively mip map an atlas, we can get a lot of bleeding between the individual textures, because coarser mips average neighbouring textures together. If all our textures are the same resolution, and a tool mip maps before atlassing, we can mitigate this issue somewhat.

Then we just have the problem of (tri)linear interpolation bleeding across borders. If texture coordinates touch the edge of a texture in the atlas, the pixel starts being interpolated with the adjacent pixel.

We can again mitigate this by moving our texture coordinates one pixel away from the texture borders, but as the mip level increases the resolution decreases, so to do this without issues we must consider the coarsest mip level and leave a border as big as the texture itself. That space waste is too much.
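To put a number on that: one safe texel of padding at the coarsest mip level corresponds to an exponentially larger border at mip 0. A quick back-of-the-envelope sketch (assuming power-of-two textures with a full mip chain; the helper names are mine):

```python
def full_mip_count(size):
    # a power-of-two texture of dimension `size` has log2(size) + 1 mip levels
    return size.bit_length()

def border_at_mip0(mip_levels):
    # 1 texel at the coarsest mip covers 2**(mip_levels - 1) texels at mip 0
    return 2 ** (mip_levels - 1)
```

For a 1024x1024 texture, `full_mip_count(1024)` is 11 and `border_at_mip0(11)` is 1024: the padding needed to survive the whole mip chain is as large as the texture itself.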

So, mip-mapping and texture atlassing are not exactly good friends.

Introducing Texture Arrays
Texture arrays are just a list of textures. This allows us to combine e.g. the color maps of a bunch of materials into one array, so that one material can use the array instead of us having multiple materials. The limitation is that all textures must have the same resolution and internal format.

It brings back the ability to use mip mapping, and it has all the other benefits of atlassing (fewer materials leading to fewer draw calls).

The good news is that all an artist needs to do is assign a per-vertex attribute to identify which texture to use (if you have a vertex color multiplier, consider sacrificing its alpha channel; or add a w component to your normal; whatever works).

The bad news is that we need to do some tooling to make this work at all (there is no real manual way for an artist to create a texture array and existing shaders will not support them).

There is a risk of pushing too many textures into one array: it becomes hard to track memory if textures of unused assets are mixed in with data that we actually need to load. Matching the vertex attributes in use against the texture array size could help find unused entries.

I did some of this in Unity while experimenting with how viable a solution this was. The code is not really polished and I didn’t write editor scripts (because with an ExecuteInEditMode component I could avoid doing UI), but I’ll share it anyway!

This script can take a set of materials and write a given set of attributes to a folder as texture arrays (and material using texture arrays).

using System;
using System.Linq;
using System.IO;
using UnityEngine;
using UnityEditor;

/* Match material input names with their respective texture (array) settings. */
[Serializable]
struct PropertyCombineSettings
{
    public string name; // material texture2D property to put in array
    public int width; // assume all materials have textures of this resolution
    public int height;
    public Color fallback; // if the property isn't used use this color ((0,0.5,0,0.5) for normals)
    public TextureFormat format; // assume all materials have textures of this format
    public bool linear; // are inputs linear? (true for normal maps)
}

[ExecuteInEditMode]
public class MaterialArray : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] Material[] inputs; // List of materials to push into texture array.
    [SerializeField] string outputPath; // Save created texture arrays in this folder.
    [SerializeField] PropertyCombineSettings[] properties; // Set of material inputs to process (and how).

    void Update()
    {
        // Run once in Update() and then disable again so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // Ensure we have a folder to write to
        string absPath = Path.GetFullPath(outputPath);
        if (!Directory.Exists(absPath))
        {
            Debug.Log(String.Format("Path not found {0}", absPath));
            return;
        }

        // Combine one property at a time
        Texture2DArray[] results = new Texture2DArray[properties.Length];
        for(int i = 0; i < properties.Length; ++i)
        {
            // Delete existing texture arrays from disk as we can not alter them
            PropertyCombineSettings property = properties[i];
            string dst = outputPath + "/" + property.name + ".asset";
            if (File.Exists(dst))
            {
                AssetDatabase.DeleteAsset(dst);
            }

            // Create new texture array (of right resolution and format) to write to
            Texture2DArray output = new Texture2DArray(property.width, property.height, inputs.Length, property.format, true, property.linear);
            results[i] = output;

            Texture2D fallback = null;
            int layerIndex = 0;
            
            // For each material process the property for this array
            foreach (Material input in inputs)
            {
                Texture2D layer = input.GetTexture(property.name) as Texture2D;

                // If the material does not have a texture for this slot, fill the array with a flat color
                if (layer == null)
                {
                    Debug.Log(String.Format("Using fallback color for empty parameter {0} on material {1}", property.name, input));
                    if(fallback == null)
                    {
                        // Generate a fallback texture with a flat color of the right format and size
                        TextureFormat fmt = property.format;
                        if (fmt == TextureFormat.DXT1) // We can't write to compressed formats, use uncompressed version and then compress
                            fmt = TextureFormat.RGB24;
                        else if (fmt == TextureFormat.DXT5)
                            fmt = TextureFormat.RGBA32;
                        fallback = new Texture2D(property.width, property.height, fmt, true, property.linear);
                        fallback.SetPixels(Enumerable.Repeat(property.fallback, property.width * property.height).ToArray());
                        fallback.Apply();
                        if (fmt != property.format) // Compress to final format if necessary
                            EditorUtility.CompressTexture(fallback, property.format, TextureCompressionQuality.Fast);
                    }
                    layer = fallback;
                }

                // Validate input data
                if (layer.format != property.format)
                {
                    Debug.LogError(String.Format("Format mismatch on {0} / {1}. Is {2}, must be {3}.", input, property.name, layer.format, property.format));
                    layerIndex += 1;
                    continue;
                }

                if (layer.width != property.width || layer.height != property.height)
                {
                    Debug.LogError(String.Format("Resolution mismatch on {0} / {1}", input, property.name));
                    layerIndex += 1;
                    continue;
                }

                // Copy input texture into array
                Graphics.CopyTexture(layer, 0, output, layerIndex);
                layerIndex += 1;
            }
            AssetDatabase.CreateAsset(output, dst);
        }

        // Create or get a material and assign the texture arrays
        // Unity keeps losing connections when re-saving the texture arrays so this is my workaround to avoid manually allocating
        string mtlDst = outputPath + ".mat";
        Material mtl = AssetDatabase.LoadAssetAtPath<Material>(mtlDst);
        bool create = false;
        if(mtl == null)
        {
            create = true;
            mtl = new Material(Shader.Find("Custom/NewShader"));
        }

        for (int i = 0; i < properties.Length; ++i)
        {
            PropertyCombineSettings property = properties[i];
            mtl.SetTexture(property.name, results[i]);
        }

        if (create)
        {
            AssetDatabase.CreateAsset(mtl, mtlDst);
        }

        AssetDatabase.SaveAssets();
    }
}

This is a surface shader that mimics Unity’s standard shader for a large part, but uses texture arrays! It reads the texture index from uv2.y, and it assumes uv2.x contains the actual uv2 coordinates, each truncated to 16 bits and packed together.
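The packing can be prototyped outside the engine. This Python sketch truncates each coordinate’s float32 bit pattern to its upper 16 bits and packs the two halves into one float’s bit pattern (my assumption of how the truncation should behave; the mesh script and shader below do the equivalent with BitConverter and asuint/asfloat):

```python
import struct

def float_bits(f):
    # reinterpret a float32 as its 32-bit pattern
    return struct.unpack('<I', struct.pack('<f', f))[0]

def bits_float(b):
    # reinterpret a 32-bit pattern as a float32
    return struct.unpack('<f', struct.pack('<I', b))[0]

def pack_uv2(u, v):
    # keep the upper 16 bits (sign, exponent, top mantissa bits) of each coordinate
    return bits_float(((float_bits(v) >> 16) << 16) | (float_bits(u) >> 16))

def unpack_uv2(packed):
    bits = float_bits(packed)
    return bits_float((bits & 0xffff) << 16), bits_float(bits & 0xffff0000)
```

`unpack_uv2(pack_uv2(0.5, 0.25))` returns `(0.5, 0.25)` exactly, since powers of two survive the truncation; arbitrary coordinates lose low mantissa precision.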

Shader "Custom/NewShader" {
	Properties {
		_Color("Color", Color) = (1,1,1,1)
		_MainTex ("Albedo (RGB)", 2DArray) = "" {}

		// _Glossiness("Smoothness", Range(0.0, 1.0)) = 0.5
		// _GlossMapScale("Smoothness Scale", Range(0.0, 1.0)) = 1.0
		// [Enum(Metallic Alpha,0,Albedo Alpha,1)] _SmoothnessTextureChannel("Smoothness texture channel", Float) = 0
		_MetallicGlossMap("Metallic", 2DArray) = "" {}

		_BumpScale("Scale", Float) = 1.0
		[Normal] _BumpMap("Normal Map", 2DArray) = "" {}

		_Parallax("Height Scale", Range(0.005, 0.08)) = 0.02
		_ParallaxMap("Height Map", 2DArray) = "" {}

		_OcclusionStrength("Strength", Range(0.0, 1.0)) = 1.0
		_OcclusionMap("Occlusion", 2DArray) = "" {}
		
		// _EmissionColor("Color", Color) = (0,0,0)
		// _EmissionMap("Emission", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard fullforwardshadows

		// Use shader model 3.0 target, to get nicer looking lighting
		#pragma target 3.0

		fixed4 _Color;
		UNITY_DECLARE_TEX2DARRAY(_MainTex);
		UNITY_DECLARE_TEX2DARRAY(_MetallicGlossMap);
		half _Metallic;
		half _BumpScale;
		UNITY_DECLARE_TEX2DARRAY(_BumpMap);
		half _Parallax;
		UNITY_DECLARE_TEX2DARRAY(_ParallaxMap);
		half _OcclusionStrength;
		UNITY_DECLARE_TEX2DARRAY(_OcclusionMap);

		UNITY_INSTANCING_BUFFER_START(Props)
		// put more per-instance properties here
		UNITY_INSTANCING_BUFFER_END(Props)

		struct Input
		{
			float2 uv_MainTex;
			float2 uv2_BumpMap;
			float3 viewDir;
		};

		// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
		// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
		// #pragma instancing_options assumeuniformscaling

		void surf (Input IN, inout SurfaceOutputStandard o) 
		{
			uint xy = asuint(IN.uv2_BumpMap.x);
			// x was packed into the low 16 bits, y into the high 16 bits;
			// each half holds the upper 16 bits of the original float32 pattern
			float2 uv2 = float2(asfloat(xy << 16),
								asfloat(xy & 0xffff0000));
			float textureIndex = IN.uv2_BumpMap.y;

			float2 offsetMainTex = ParallaxOffset(UNITY_SAMPLE_TEX2DARRAY(_ParallaxMap, float3(IN.uv_MainTex, textureIndex)).r, _Parallax, IN.viewDir);
			float3 uv = float3(IN.uv_MainTex + offsetMainTex, textureIndex);
			
			fixed4 c = UNITY_SAMPLE_TEX2DARRAY(_MainTex, uv) * _Color;
			o.Albedo = c.rgb;

			fixed4 metal_smooth = UNITY_SAMPLE_TEX2DARRAY(_MetallicGlossMap, uv);
			o.Metallic = metal_smooth.g;
			o.Smoothness = metal_smooth.a;

			o.Normal = UnpackScaleNormal(UNITY_SAMPLE_TEX2DARRAY(_BumpMap, uv), _BumpScale);
			
			o.Occlusion = lerp(1.0, UNITY_SAMPLE_TEX2DARRAY(_OcclusionMap, uv).a, _OcclusionStrength);

			o.Alpha = 1.0;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

The last script I wrote takes a mesh filter from an imported model and writes it to a new, separate mesh asset with an index set in uv2.y. I also pack the original uv2 into uv2.x.

using UnityEngine;
using UnityEditor;
using System;
using System.IO;

[Serializable]
struct MeshArray
{
    public Mesh[] data;
}

[ExecuteInEditMode]
public class ArrayIndexSetter : MonoBehaviour
{
    [SerializeField] bool run = false; // Tick this to let an Update() call process all data.
    [SerializeField] MeshArray[] meshesPerIndex; // Primary index specifies material, the list of meshes then all get this material.

    void Update()
    {
        // Run once in Update() and then disable again so we can process errors, or we are done.
        if (!run)
            return;
        run = false;

        // For each set of meshes assume the index is what we want to specify as material index.
        for (int index = 0; index < meshesPerIndex.Length; ++index)
        {
            // Alter each mesh to contain the index
            foreach (Mesh sharedMesh in meshesPerIndex[index].data)
            {
                // TODO: try to update previously generated version instead of instantiating.

                // Duplicate the mesh (without doing this we can't use 
                // CreateAsset as it will try to update the existing asset which, 
                // for example, may be a part of an FBX file).
                string assetPath = AssetDatabase.GetAssetPath(sharedMesh);
                Mesh mesh = AssetDatabase.LoadAssetAtPath<Mesh>(assetPath);
                mesh = Instantiate(mesh) as Mesh;

                // Query or allocate a UV2 attribute to store the index in
                Vector2[] uv2 = mesh.uv2;
                if (uv2 == null || uv2.Length != mesh.vertexCount)
                    uv2 = new Vector2[mesh.vertexCount];
                for (int i = 0; i < uv2.Length; ++i)
                {
                    // truncate each float to its upper 16 bits and pack both into the X component
                    // (assumes little-endian byte order from BitConverter)
                    byte[] x = BitConverter.GetBytes(uv2[i].x);
                    byte[] y = BitConverter.GetBytes(uv2[i].y);
                    byte[] data = { x[2], x[3], y[2], y[3] };
                    uv2[i].x = BitConverter.ToSingle(data, 0);
                    // add our index to the end
                    uv2[i].y = index;
                }

                // update and serialize
                mesh.uv2 = uv2;
                string dst = assetPath + "_indexed.asset";
                if (File.Exists(dst))
                    File.Delete(dst);
                AssetDatabase.CreateAsset(mesh, dst);
            }
        }

        AssetDatabase.SaveAssets();
    }
}

The result can render these 4 meshes with 4 different looks in a single draw call.
The meshes are generated and set to static so Unity can combine them.
In this screenshot you see 2 draw calls, as there is the draw-and-shade call and the blit-to-screen call.
Enabling shadows would add a shadow cast and a collect call on top, but subsequent meshes would not increase this count.

PS: The textures I used come from https://freepbr.com/.

Monster Black Hole – Pythius

Pythius makes Drum & Bass and is a friend of mine. So when he told me he was doing a new track and I could make the visuals I didn’t think twice about it!

The short version

The video was made using custom software and lots of programming! Generally 3D videos are made with programs that calculate the result offline, which can take minutes or even hours. This means that every time you change something you have to wait to see whether the result is more to your liking. With the custom software that we made, everything is instantly updated and we are looking at the end result at all times. This makes tweaking anything, from colors and shapes to animation, a breeze. It allows for as much iteration as we want and turns the video creation process into an interactive playground.

The technique we use generates all the visuals with code; there are very few images and no 3D model files. Everything you see on screen is visualized through maths. As a side effect of not using big 3D model files, the program that generates the entire video is incredibly small: about 300 kilobytes, ten thousand times smaller than the video file it produced!

The details

Technologies used are Python (software), Qt (interface) and OpenGL (visual effects). The rendering uses Enhanced Sphere Tracing and physically based shading.

I talked about the tool development here and the rendering pipeline here in the past.
More information about Enhanced Sphere Tracing can be found here, which is an enhancement of this!