I was working on creating a day-night cycle using Unity and figured it would be best to generate volumetric cloud data rather than attempting to approximate cloud shading from a sky texture. Hence I googled and searched the Houdini site to find:
Then I attempted to duplicate the Rio system (customer story), which I also vaguely had in mind. Seeing their pictures gave me the great tip to not just scatter spheres on spheres, but to offset the secondary and tertiary spheres in Y, so that the clouds actually stack upwards for a much more cloud-like look. This probably isn't how clouds look up close, but it is how they appear to me from the ground (slightly stylized), which is all I need.
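The Y-offset idea can be sketched as a tiny helper; the function name and the offset value are my own illustration, not from any Houdini node:

```python
def stack_offset(points, level, base_offset=0.3):
    """Push each scatter point up in Y so later sphere generations
    pile on top of the base instead of hugging it all around.
    base_offset is a made-up tuning value, not a number from the article."""
    return [(x, y + level * base_offset, z) for (x, y, z) in points]

# Secondary spheres (level 1) sit 0.3 units higher, tertiary (level 2) 0.6.
secondary = stack_offset([(0.0, 0.0, 0.0), (1.0, 0.2, -0.5)], level=1)
```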
The end goal is to generate game-ready cubemaps of clouds with object-space normal maps (with volumetric transparency) as well as a depth map, so that when the sun is directly behind a cloud and the normal map has no influence, the cloud still looks right (1 - depth map = translucency). That is my current concept for a shader that produces a good-looking day-night cycle.
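As a sketch of that shading concept: blend a normal-map lighting term with the depth-based translucency, fading the normal term out as the sun moves behind the cloud. All names and the blend itself are illustrative assumptions, not a finished shader:

```python
def cloud_shading(n_dot_l, depth, backlight):
    """Toy blend of normal-map lighting and depth translucency.
    n_dot_l:   normal-map lighting term (dot of normal and light dir)
    depth:     normalized cloud thickness from the depth map, 0..1
    backlight: 0 when the sun faces the camera side, 1 when directly behind
    As backlight -> 1 the normal term fades out and thin parts of the
    cloud (low depth) glow through via 1 - depth."""
    translucency = 1.0 - depth
    return (1.0 - backlight) * max(n_dot_l, 0.0) + backlight * translucency
```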
First step is to produce a decent looking cloud:
(Houdini Mantra render on the left, Composite on a blue background on the right)
I created a sphere to scatter metaballs on, merged it with a metaball the same size as the sphere, and converted that to polygons to base my volume on. Not cloudy at all, but at least a woolly base look. Then I followed the steps in Create clouds using Volumes, but I'll go into more detail below.
On the IsoOffset, the Output Type must be set to SDF Volume, and Invert Sign must be enabled (a tickbox in the Construction tab!). I started playing with the offset to see something, but once I got to the volume mix this turned out to be unnecessary, so just leave it at zero.
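For intuition on what Invert Sign does, here is a minimal signed-distance function for a sphere. In the standard convention the inside is negative; inverting the sign makes the inside positive, which is the convention the volume mix formula below relies on. This is a toy sketch, not Houdini's implementation:

```python
import math

def sphere_sdf(p, center, radius, invert=True):
    """Signed distance from point p to a sphere surface.
    Standard convention: negative inside, positive outside.
    With invert=True (mirroring the Invert Sign tickbox) the
    inside becomes positive instead."""
    d = math.dist(p, center) - radius
    return -d if invert else d
```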
After copying the formula given in the Houdini how-to, I increased the Uniform Sampling to 30.
For rendering I just added a top-down distant light (from the Lights and Cameras shelf) and created a default camera to frame my cloud, paying attention to the volume's bounding box, not just the volume itself. DO use ray-traced shadows; ignore the remark about depth maps, as for me they decreased both quality and speed with default rendering.
Drag the Billowy Smoke shader onto the geometry, add a default Mantra node in /out, set the render node and camera in the Render View, and hit Render.
For the cloud look, all we have to do now is adjust the geometry; for the ambient color we can simply set the Billowy Smoke shader's shadow density to something like 1.3. The smoke density controls how many separate chunks appear on the edges. When increasing samples, the smoke density should also increase so the extra samples actually count and stay visible; it can also create a tighter, more cartoony look, but you can come back to this at any time.
Looking at the Rio image, we can see a distinction between green (base spheres) and red (small secondary spheres). So I decided to scatter 4 or 5 points on a unit sphere and copy smaller spheres onto them. I edited the first sphere to have radii (1, 0.6, 1.2) for a slightly flatter result.
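That scattering step can be approximated in a few lines, using the normalized-Gaussian trick for roughly uniform points on a sphere and then scaling by the per-axis radii from the article. The function itself is my own sketch, not what the Scatter SOP does internally:

```python
import math
import random

def scatter_on_ellipsoid(n, radii=(1.0, 0.6, 1.2), seed=0):
    """Scatter n roughly uniform points on a unit sphere by normalizing
    Gaussian samples, then scale per axis by radii (1, 0.6, 1.2) to get
    the slightly flattened base shape described in the article."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in v)) or 1.0
        pts.append(tuple(c / length * r for c, r in zip(v, radii)))
    return pts
```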
Now, instead of scattering points directly onto the sphere, I first create a combined mesh by copying metaballs using the spheres as template points. The metaballs should fill the same volume as the input; to achieve this, the metaball weight must be set to a high value (I use 100).
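Why a high weight makes a metaball's surface match its template sphere can be seen from a simplified field kernel. The isosurface sits where the field equals 1, so scaling the field by a large weight pushes that surface out toward the full radius. The kernel here is a Wyvill-style falloff chosen for illustration, not necessarily the one Houdini uses:

```python
def metaball_field(r, R, weight):
    """Simplified metaball field: weight * (1 - (r/R)^2)^3 inside
    radius R, zero outside. The rendered surface is where field == 1."""
    if r >= R:
        return 0.0
    t = 1.0 - (r / R) ** 2
    return weight * t ** 3

def iso_radius(R, weight, steps=10000):
    """Scan outward from the center and return the first radius where
    the field drops below 1, i.e. where the isosurface sits."""
    for i in range(steps):
        r = R * i / steps
        if metaball_field(r, R, weight) < 1.0:
            return r
    return R
```

With weight 2 the surface sits at roughly 0.45 of the template radius; with weight 100 it is out near 0.89, which is why cranking the weight up makes the metaball volume track the input spheres.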
Then I can scatter onto that, so that no spheres end up on the inside. The metaballs need to be converted to polygons, with the Level of Detail U, V set to 1, 1. Again scattering 4 points per area and copying smaller spheres on top, then merging with the initial spheres, already gives me a better cloud.
But it is still much too generic and blobby. After repeating the above process with even smaller spheres didn't work, I decided to randomize the secondary sphere radius, copy stamping the template points. Comparing the previous step (left) with this one (right), you can see the cloud now has larger creases.
This randomness is what I want, so the next step is a third iteration of spheres. First I collapse the previous step into a subnet, so everything between the first copy node and the merge at the end becomes a subnet; then I append a copy of that subnet to itself for the tertiary spheres. Inside the copy the sphere radius is scaled down (previously I randomized tpt*0.2+0.2, now I randomize tpt*0.1+0.1). The problem, however, is that the metaball copy in the second subnet no longer matches the sphere input, because the spheres have varying radii. The trick is to add a point attribute, pointradius, to the spheres. I do this outside the subnet, at the first sphere, so that both subnets can stay identical; the metaball inside the subnet then gets point(input, tpt, pointradius, 0) as its radius, and the copy node output matches the sphere input again.
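The stamped radius expression and the pointradius attribute can be mimicked in plain Python; the dictionary layout is just an illustration of per-point attributes, not Houdini's data model:

```python
import random

def make_points_with_radius(n, scale, offset, seed=0):
    """Mimic the copy-stamp radius expression rand * scale + offset:
    each template point stores its own pointradius attribute, so the
    sphere copy and the metaball copy can both read the same value
    and stay in sync."""
    rng = random.Random(seed)
    return [{"ptnum": i, "pointradius": rng.random() * scale + offset}
            for i in range(n)]

secondary = make_points_with_radius(5, 0.2, 0.2)  # radii in [0.2, 0.4)
tertiary = make_points_with_radius(5, 0.1, 0.1)   # radii in [0.1, 0.2)
```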
Then with the two subnets converted to a volume again, setting the second sphere radius to a smaller number and the scatter per area amount to a higher number creates this result (I did edit the shader to have a volume density of 30 and a shadow density of 1.3 now):
Increasing the IsoOffset samples to 100, changing the volume mix formula to clamp($V*4, 0, 1) (just lowering the multiplier considerably), and setting the Billowy Smoke smoke density to 30 gives me roughly what I want.
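The volume mix expression is just a clamped remap of the field value, which is easy to verify in isolation:

```python
def volume_mix(v, mult=4.0):
    """The article's volume mix expression, clamp($V * mult, 0, 1):
    scale the (inverted) SDF value and clamp to [0, 1]. A lower
    multiplier gives a softer falloff at the cloud edge."""
    return max(0.0, min(v * mult, 1.0))
```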
As a final note, it is also possible to set the IsoOffset sampling to non-square. For some clouds I got strange stretching in the noise; this can be solved by manually calculating or entering a more suitable number of samples, and/or oversampling the largest axis relative to the others.
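One simple way to pick non-square sample counts is to keep the voxels roughly cubic: give the longest bounding-box axis the full sample count and scale the others proportionally. This is my own sketch; Houdini may compute its non-uniform sampling differently:

```python
def axis_samples(bbox_size, base_samples):
    """Per-axis sample counts proportional to the bounding-box size,
    so voxels stay roughly cubic and noise isn't stretched along the
    long axis. bbox_size is (x, y, z) extent of the volume box."""
    longest = max(bbox_size)
    return tuple(max(1, round(base_samples * s / longest))
                 for s in bbox_size)
```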
On the left, the previous result with diagonal lines disturbing the look; on the right, the non-square sample setup with the problem solved.