Real-Time Volumetric Clouds
Darren de Vré
Introduction
Computer graphics is a field that’s constantly evolving, with new methods making scenes increasingly detailed and realistic: models keep gaining detail, textures keep increasing in resolution, some scenes are lit entirely with raytraced light, and environments are fuller of life than ever. Clouds are becoming part of this added realism in 3D worlds too, but they are still relatively new, as their computational expense has only recently become manageable on consumer-level hardware and techniques (Schneider, 2015).
Clouds are one of those elements that, in all forms of art, can elevate an already beautiful scene into a stunning sight thanks to their wide variety of shapes and sizes. Combining clouds with complex weather systems can make a player feel more immersed in the world they’re playing in and ultimately elevate the experience.
Some recent games you might know, such as Horizon: Forbidden West and Red Dead Redemption 2, have implemented these volumetric clouds. Other recent titles, such as Elden Ring, have not, as volumetric clouds are expensive both in development cost and in performance (Schneider, 2015).
Figure 1 – Volumetric clouds in Red Dead Redemption 2
Figure 2 – Volumetric clouds in Horizon: Forbidden West
In this post I’ll go into what volumetric clouds are and how I created them through research and development.
In order to create my own clouds, I’ll be discussing the following points:
- What are volumetric clouds?
- What are my requirements?
- How do I generate noise?
- How do I sample my noise texture using a shader?
- How do I implement detailed lighting?
- How do my clouds fare against current existing implementations?
- What could be added in the future to make the clouds greater?
Contents
- Chapter 1: What are volumetric clouds?
- Chapter 2: Requirements & Methods
- Chapter 3: Noise Generation
- Chapter 3.1: Types of noise
- Chapter 3.2: Noise Texture Generation
- Chapter 4: Creating cloud shapes
- Chapter 5: Creating Detailed Clouds Through Lighting
- Chapter 5.1: Raymarching & Beer’s Law
- Chapter 5.2: Henyey-Greenstein
- Chapter 6: Comparison To Current Implementations
- Chapter 7: Future Suggestions
- Chapter 7.1: Interaction With The Environment
- Chapter 7.2: Weather Map
- Chapter 7.3: More Detail
- Chapter 8: Conclusion
- Sources
Chapter 1: What are volumetric clouds?
Traditionally, clouds in video games are represented by a library of 2D cloud images. Whilst this solution can be visually appealing when done correctly, the illusion that the sky contains “real” clouds can fall apart when not enough images are used or the user constantly recognizes the same image (Häggström, 2018, p.1).
This problem can be mitigated by storing multiple images of the same cloud from different angles and presenting those as the player moves around. This isn’t very data efficient, however, seeing as multiple angles need to be photographed or created for a multitude of clouds. Clouds also vary widely and look different depending on their altitude and climate (World Meteorological Organization). All these factors can cause clouds alone to take up a large chunk of a project’s file size.

We can solve most of these problems by generating the clouds volumetrically, meaning we render individual points inside a volume instead of a 2D image or a 3D mesh. This technique is also used to create volumetric lights, which can add more depth to scenes; the effect is comparable to light shining through smoke, where the light’s path through the volume becomes visible.

Real-life clouds essentially work the same way: light scatters between the water particles in the air, which ultimately reveals the cloud’s shape. In a sense, creating volumetric clouds is the same as creating lifelike clouds.

Chapter 2: Requirements & Methods
Before starting the research and development of these volumetric clouds, there are a couple of requirements I want the end result to meet:
- Cloud generation is based on a set of user-defined variables (e.g. density, color, size). This gives the end-user as much customization as possible to create the cloud formations they want.
- Clouds show cloud-like behavior, in the sense that they change shape over time and move as well.
- Clouds should interact with the environment, for example moving around mountain formations as real clouds would.
- The end result should be a product that could be released on the asset store.
- The end result should be comparable to currently available implementations, such as Unity’s own HDRP version, in both looks and performance.
I started my research and development process by studying current implementations of volumetric clouds and their corresponding papers. I decided to split the project up into multiple problems that needed to be solved, these being:
- Generating noise – This random noise is what we’ll use to generate the shapes of the clouds.
- Visualizing clouds – Using the generated noise to create the clouds visually in a shader.
- Lighting the clouds – Adding lighting calculations to give the clouds a sense of volume and representing that in their color.
I arrived at these problems by looking at the development process behind some papers, specifically: “The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn – Andrew Schneider”, “Real-Time Rendering of Volumetric Clouds – Rikard Olajos” and “Real-time rendering of volumetric clouds – Juraj Pálenik”.
I solved these problems by doing research on each individual topic and adapting existing techniques to my needs. On a weekly basis I received feedback on my progress during guild meetings; this feedback concerned both results and techniques and helped me further refine my work.
Chapter 3: Noise Generation
Chapter 3.1: Types of Noise
The typical way of rendering anything procedural is to use noise functions, and this also applies to clouds (Olajos, 2018, p.15). Not every noise function, however, is applicable to cloud generation.
Simplex noise:
Simplex noise generates a pattern comparable to TV static. This type of noise could be used to split up a map in a multitude of ways, such as islands in an ocean, but the static-like texture isn’t very applicable to clouds: clouds tend to clump together and form groups.

Perlin noise:
The best-known form of noise is Perlin noise. The resulting texture looks somewhat like drifting smoke, with values that transition gradually between white and black instead of jumping (almost) instantly. This gradual slope is handy for cloud generation because it can encode the density of the clouds. The random positioning, however, still isn’t fully cloud-like, since grouping is still missing.

Worley / Voronoi noise:
Worley / Voronoi noise generates cells, comparable to those in biology. Each cell is almost entirely one color that gradually darkens towards the neighboring cell walls. This type of generation is ideal for clouds, seeing as the cells clump together in groups and their gradual color falloff helps with visualizing density. This is the main type of noise I’ll be using for the generation of clouds.

Chapter 3.2: Noise Texture Generation
Since the clouds I ultimately want to generate are 3D, I’ll also need to implement a 3D variant of Worley noise instead of the regular 2D noise generation. The end result should be a 3D texture containing cell-shaped noise.
My implementation of this 3D Worley noise is based on “Cellular Noise – The Book of Shaders” by P. Vivo and J. Lowe and “Coding Adventure: Clouds” by Sebastian Lague.
Here I use a for-loop to generate cells in 3D space, setting each sample’s color value based on its minimum distance to a feature point.
float minDistance = 1.0;
// Check the 3 x 3 x 3 block of cells surrounding the sample's own cell
for (int offsetIndex = 0; offsetIndex < 27; offsetIndex++)
{
    int3 cellCoordinate = samplePositionCellCoordinate + CellOffsets[offsetIndex];
    int x = cellCoordinate.x;
    int y = cellCoordinate.y;
    int z = cellCoordinate.z;

    if (x == -1 || x == _AxisCellCount || y == -1 || y == _AxisCellCount || z == -1 || z == _AxisCellCount)
    {
        // Neighboring cell lies outside the texture: wrap it around so the noise tiles seamlessly
        int3 wrappedCellCoordinate = fmod(cellCoordinate + _AxisCellCount, (int3)_AxisCellCount);
        int wrappedCellIndex = wrappedCellCoordinate.x + _AxisCellCount * (wrappedCellCoordinate.y + wrappedCellCoordinate.z * _AxisCellCount);
        float3 featurePointOffset = cellCoordinate + _FeaturePoints[wrappedCellIndex];
        minDistance = min(minDistance, distance(samplePositionCellCoordinate + localizedSamplePosition, featurePointOffset));
    }
    else
    {
        // Neighboring cell lies inside the texture: use its feature point directly
        int cellIndex = cellCoordinate.x + _AxisCellCount * (cellCoordinate.y + cellCoordinate.z * _AxisCellCount);
        float3 featurePointOffset = cellCoordinate + _FeaturePoints[cellIndex];
        minDistance = min(minDistance, distance(samplePositionCellCoordinate + localizedSamplePosition, featurePointOffset));
    }
}
The code for this Worley generation is written in HLSL as a compute shader for efficiency. A large number of points needs to be calculated, and doing this in a compute shader also allows for fast recalculation of the noise whilst the application is running.
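As a rough sketch of the compute shader entry point (the thread-group size and names are illustrative, not necessarily those from my project), each thread computes the Worley value for one voxel of the 3D texture:

RWTexture3D<float4> Result; // the 3D noise texture being written
uint _Resolution; // texture resolution per axis

[numthreads(8, 8, 8)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Normalized position of this voxel inside the texture
    float3 samplePosition = id / (float)_Resolution;
    // worley() is a hypothetical wrapper around the cell loop shown above
    float noise = worley(samplePosition, _AxisCellCount);
    Result[id] = float4(noise, 0, 0, 0); // write into one channel (red here)
}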
The resulting image of a single Worley function like this looks as follows. This shows the Worley function works correctly, but it isn’t very detailed: generating clouds based on this image would make them look like floating orbs, which don’t look very cloud-like.
Figure 9 – Resulting noise texture
Figure 10 – Orb-shaped clouds based off of this noise
This can be fixed quite easily, however. By layering multiple Worley functions on top of each other and returning that as a texture, we keep the same base shapes but add detail that deforms them into something resembling clouds a bit more. The implementation is also quite simple: the multiple layers of noise are literally added on top of each other.
To have more control over the detail contributed by the extra noise layers, I multiply each extra layer with a variable that dictates its weight.
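As a rough sketch of this weighted layering (names are illustrative; worley() is assumed to run the cell loop above at the given cell count, so each layer has a higher frequency than the last):

float layeredWorley(float3 samplePosition)
{
    // Base layer plus two higher-frequency layers scaled by user-defined weights
    float noise = worley(samplePosition, _AxisCellCount)
                + worley(samplePosition, _AxisCellCount * 2) * _LayerWeightA
                + worley(samplePosition, _AxisCellCount * 4) * _LayerWeightB;

    // Normalize by the total weight so the result stays in the 0-1 range
    return noise / (1 + _LayerWeightA + _LayerWeightB);
}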
Following that I simply invert the texture so the cells are white instead of black; this is not strictly needed for the calculations, but I prefer it this way. At the end I return this result in only one of the texture’s four channels (RGBA). By only filling one of the 4 channels, I can repeat the exact same process 3 more times for the other channels, but with different values. This effectively gives us more detail by using 12 different Worley layers. The final result of one of these channels now looks as follows:

Chapter 4: Creating Cloud Shapes
In order to create actual clouds, I need to process the newly created 3D texture. I do this in an unlit shader, since that lets me handle the texture sampling and lighting myself. Because the clouds aren’t mesh-based at all, I almost completely refrain from touching the vertex shader; all the work is done in the fragment/pixel shader.
In order to visualize the clouds, a couple of steps are needed:
- Create a “physical” place in the scene where the clouds will be; in this case a GameObject whose bounds are read in the shader.
- Sample the texture so that we can read and manipulate its data.
- Calculate a maximum height gradient so we don’t directly render the full 3D cell structure.
float sampleDensity(float3 position)
{
    // Size of the cloud container, read from the GameObject's bounds
    float3 size = boundsMax - boundsMin;

    // Height gradient: fade clouds in near the bottom of the container and out near the top
    float heightAmount = (position.y - boundsMin.y) / size.y;
    float heightGradient = saturate(remap(heightAmount, 0.0, 0.2, 0, 1)) * saturate(remap(heightAmount, 1, 0.7, 0, 1));

    // Sample the packed noise texture; a higher scale means more zoomed-out clouds
    float4 noise = NoiseTex.SampleLevel(samplerNoiseTex, ((size * .5 + position) * 0.001f * scale), 0);
    float4 normalizedWeights = noiseWeights / dot(noiseWeights, 1);

    // The height gradient is multiplied in to remove the full cell-like structure
    float shape = dot(noise, normalizedWeights) * heightGradient;

    // Scale by the user-defined density (cloud coverage)
    float baseShapeDensity = shape * density;
    if (baseShapeDensity > 0) // Check whether there actually are clouds at this position
    {
        return baseShapeDensity * cloudThickness * 0.1; // Multiply by 0.1 to reduce the gel-like look
    }
    return 0;
}
I start by setting the ray’s position from the fragment shader, that being the camera position in world space; I use this ray to create visual thickness in the clouds. Afterwards I calculate the cube-shaped space in which the clouds reside by checking the bounds of the GameObject and specifying at what scale the clouds live in the box, with a higher scale meaning more zoomed-out clouds.
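A minimal sketch of that ray setup, assuming illustrative variable names (rayBoxDst is the ray/box intersection helper sketched in chapter 5.1):

// The ray starts at the camera and points towards the current pixel
float3 rayPos = _WorldSpaceCameraPos;
float3 rayDir = normalize(input.worldPosition - rayPos);

// Distance to the container and distance travelled inside it
float2 rayBoxInfo = rayBoxDst(boundsMin, boundsMax, rayPos, 1 / rayDir);
float dstToBox = rayBoxInfo.x;
float dstInsideBox = rayBoxInfo.y;

// The march starts where the ray enters the box and stops where it leaves
float3 entryPoint = rayPos + rayDir * dstToBox;
float limit = dstInsideBox;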
Afterwards I calculate a maximum height in the form of a gradient. This simply states that new cloud formations are not allowed past a certain percentage of the container’s height. Without this, clouds would extend continually upwards and downwards, which creates unrealistic cloud shapes.
After this I sample the 3D texture within the container GameObject and normalize the user-specified weight for each texture channel; these weights dictate how much each of the RGBA channels affects the clouds. Taking the dot product of the sampled texture with these weights gives the cloud shape at that point in 3D space. I then multiply this result with the density (the coverage of clouds in the sky) to dictate how much space clouds can take up in the sky.
At the end, the only thing left is to return a thickness for the clouds (i.e. how thick they look). By checking the density at a given place I can return 0 where there are no clouds; otherwise a value needs to be returned. Whilst returning 1 is possible, that makes the clouds look extremely gel-like, so by multiplying the density by a user-defined thickness value the user can specify the thickness of their clouds. The resulting clouds look as follows:

The clouds are currently black because no lighting calculations have taken place yet. Implementing multiple lighting functions will make the clouds show more volumetric properties.
Chapter 5: Creating Detailed Clouds Through Lighting
With the clouds done shape-wise, they’ll now need to be lit properly in order to create a sense of realism. In this chapter I’ll discuss the 3 different lighting techniques I implemented to achieve that.
Chapter 5.1: Raymarching & Beer’s Law
Raymarching is one of the fundamental parts of volumetrics. In short, raymarching shoots a ray from the camera and steps it forward a multitude of times, “capturing” the volume at each of the spots it passes through. In the case of denser volumes, this visually creates objects, i.e. clouds (Schneider, 2015).

In order to calculate lighting for the clouds I use 2 variables: the energy of the light, which dictates the brightness, and the transmittance, which will be implemented using Beer’s law. More on that later.
I wrote the raymarching algorithm by combining 2 different raymarching implementations to fit my needs, these being “Creating a Volumetric Ray Marcher – Brucks” and “Ray marching – Walczyk”, combined with the lightEnergy and transmittance techniques from “The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn – Andrew Schneider”.
float4 lighting()
{
    const float stepSize = 11;

    // Raymarching state
    float transmittance = 1;
    float3 lightEnergy = 0;
    float travelled = 0;

    // March from the container's entry point until the ray exits the volume
    while (travelled < limit)
    {
        float3 rayPos = entryPoint + rayDir * travelled;
        float density = sampleDensity(rayPos);
        if (density > 0)
        {
            // Accumulate in-scattered light, then attenuate the transmittance (Beer's law)
            lightEnergy += density * stepSize * transmittance * lightmarch(rayPos);
            transmittance *= exp(-density * stepSize * lightAbsorptionThroughCloud);
        }
        travelled += stepSize;
    }
    return float4(lightEnergy, transmittance);
}
A ray is shot towards the clouds and marches for a set distance, or until there is no cloud left to march through. During each of these steps, the original direction and the distance travelled so far are used for further calculations. Once the density at the ray’s current position is calculated, an actual light ray is shot through it using the lightmarch function. This gets added to the lightEnergy variable, which dictates the brightness of the clouds.
float lightmarch(float3 position)
{
    // March from the sample position towards the light source (the sun)
    float3 lightDir = _WorldSpaceLightPos0.xyz;
    float dstInside = rayBoxDst(boundsMin, boundsMax, position, 1 / lightDir).y;
    float stepSize = dstInside / lightSteps;

    // Accumulate the density between this point and the edge of the container
    float density = 0;
    for (int i = 0; i < lightSteps; i++)
    {
        position += lightDir * stepSize;
        density += max(0, sampleDensity(position) * stepSize);
    }

    // Beer's law: the denser the cloud towards the sun, the less light gets through
    float transmittance = exp(-density * sunLightAbsorption);
    return darkness + transmittance * (1 - darkness);
}
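The rayBoxDst helper used here isn’t shown in this post; a sketch of it based on the standard slab method (as used in Lague’s implementation) could look like this:

float2 rayBoxDst(float3 boundsMin, float3 boundsMax, float3 rayOrigin, float3 invRayDir)
{
    // Slab method: intersect the ray with the box's three pairs of planes
    float3 t0 = (boundsMin - rayOrigin) * invRayDir;
    float3 t1 = (boundsMax - rayOrigin) * invRayDir;
    float3 tmin = min(t0, t1);
    float3 tmax = max(t0, t1);

    float dstA = max(max(tmin.x, tmin.y), tmin.z); // distance to the nearest intersection
    float dstB = min(min(tmax.x, tmax.y), tmax.z); // distance to the farthest intersection

    // x: distance to the box (0 when the origin is inside), y: distance through the box
    float dstToBox = max(0, dstA);
    float dstInsideBox = max(0, dstB - dstToBox);
    return float2(dstToBox, dstInsideBox);
}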
Using the light direction and the distance remaining inside the box, the user can define the number of steps the light takes towards that position to calculate colors, with more steps translating to more sample positions and therefore more detail. For each of these steps a new position is calculated, and the density at that point is taken into account when calculating how much light it should let through.
Transmittance is also calculated in the main lighting function and is implemented using Beer’s law (Schneider, 2015). This transmittance value dictates the translucency of the clouds from any given angle, depending on their density. It is then multiplied by a user-given variable that dictates how much light the cloud lets through, both from the top and the bottom. This gives the clouds a denser feeling by making the thinner edges of the clouds appear brighter. Adding these lighting functions to the shader gives the clouds the appearance of real-life clouds color-wise.
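Written out, the transmittance accumulated in the raymarching loop follows Beer’s law, with σ the user-defined absorption factor, ρ_i the density sampled at step i, and Δs the step size:

T = \exp\left(-\sigma \sum_{i} \rho_i \, \Delta s\right)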
Figure 14 – The view of the clouds from above after implementing ray marching and Beer’s law
Figure 15 – The view of the clouds from below after implementing ray marching and Beer’s law
Chapter 5.2: Henyey-Greenstein
Whilst the clouds are looking more realistic lighting-wise, they can still be improved with regard to sun interaction. When looking at the sun from under the clouds, there currently is no interaction aside from the sun being slightly visible through them.

When looking at the sun through clouds in real life, the sun brightens the clouds around it and almost creates godray effects. This is caused by forward and backward scattering of the light around the sun. To implement this, I multiply the lightEnergy by the “Henyey-Greenstein phase function” (Schneider, 2015).
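The phase function itself is defined as follows, with θ the angle between the view and light directions and g the eccentricity that controls how strongly light is scattered forwards (g > 0) or backwards (g < 0):

p(\theta) = \frac{1 - g^2}{4\pi \left(1 + g^2 - 2g\cos\theta\right)^{3/2}}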

Implementing this function isn’t that difficult, because no refactoring is needed. I implemented the Henyey-Greenstein function following “Phase Functions – PBR Book”.
float HenyeyGreenstein(float a, float g)
{
    // a is the cosine of the angle between view and light, g the scattering eccentricity
    float g2 = g * g;
    float pi = 3.1415;
    return (1 - g2) / (4 * pi * pow(1 + g2 - 2 * g * a, 1.5));
}

float phase(float a)
{
    // Combine a forward and a backward scattering lobe
    return HenyeyGreenstein(a, forwardScattering) + HenyeyGreenstein(a, -backScattering);
}
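In the raymarching loop from chapter 5.1, this multiplication amounts to something like the following sketch: the phase value is computed once per ray from the angle between the view ray and the sun, and then scales the in-scattered light.

// Computed once, before the raymarching loop
float cosAngle = dot(rayDir, _WorldSpaceLightPos0.xyz);
float phaseVal = phase(cosAngle);

// Inside the loop, the in-scattered light is scaled by the phase value
lightEnergy += density * stepSize * transmittance * lightmarch(rayPos) * phaseVal;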
Multiplying the lightEnergy with the phase function causes the same scene as in figure 16 to look as follows:

Chapter 6: Comparison To Current Implementations
As stated in the requirements, I’d like my implementation of clouds to be somewhat comparable to Unity’s own implementation, which launched in November 2021 for the HDRP pipeline. For this comparison I have 2 criteria, those being:
- General looks
- Performance
In terms of general looks, I’d say my implementation of volumetric clouds is on par with Unity’s. They naturally aren’t identical, but they do share similarities in general shape and color. Unity’s implementation features clouds spanning the entire skybox, whilst mine are confined to a given container. Unity’s clouds are, however, generated the same way and presumably use comparable lighting calculations to mine, judging by their transmittance color. Unity’s implementation can also achieve a more detailed look thanks to more variables and more precise functions, but I’m happy with coming near their implementation.
Figure 19 – My cloud implementation
Figure 20 – Unity’s HDRP volumetric cloud integration
In terms of performance I cannot say the same, however. Inside a container of 10000 x 3000 x 10000 units, my implementation runs at 30-60 fps at 1440p resolution. This is largely due to 2 factors: the container size and the number of raymarching steps. Simply lowering the raymarching steps from 17 to 7 increases the framerate to around the 60-100 fps mark, depending on cloud density.
Unity has me bested in this field, with their implementation reaching a higher level of detail whilst maintaining a higher framerate. On the same system and resolution their clouds run at a consistent 120-150 fps.
Figure 21 – Performance statistics of my implementation
Figure 22 – Performance statistics of Unity’s implementation
Whilst I didn’t create clouds that compete with Unity’s, I am ultimately happy with how close my implementation got to theirs. And even though Unity’s implementation wins in visuals-to-performance, I do believe my implementation has some clear benefits, 2 of these being accessibility and customizability. Unity’s volumetric clouds only work on the latest version of HDRP (2021.2+), whilst my clouds work in URP, HDRP and the standard built-in pipeline from at least version 2020.3.26f1 onwards. In terms of current customizability my implementation has fewer variables for the user to play around with, but having direct access to the shader means developers can add their own functionality and modify the shader to their own needs.
Chapter 7: Future Suggestions
Even though the project is “finished” / on hold for now, there are a couple of notable features that could be implemented in the future, especially because the current project supports adding these functionalities without refactoring the entire codebase.
Chapter 7.1: Interaction With The Environment
Originally I wanted to implement interaction with the environment. Ultimately I decided against implementing this functionality due to two factors, those being:
- Creation of the clouds taking up more time than anticipated.
- Calculating the interaction for a meshless object.
The latter problem, whilst undoubtedly solvable, was too big of a challenge to fit into the remaining time frame. If I were to tackle this problem, I’d take the following steps:
- Create a 2nd texture of the world in the form of a depth texture that only shows black or white, depending on whether a spot is higher than a set value.
- Overlap this new texture on top of the already existing noise texture to dictate where clouds can form (empty blocks meaning nothing is able to form due to objects in the way).
- Check whether there is a set amount of cloud density next to the empty cell and either not draw clouds at all or move them away from it (giving the feeling the clouds are actively avoiding it).
This would, however, hurt performance due to the extra textures that need to be generated, especially if objects other than terrain were included in the calculation, as that would require texture generation at runtime.
Chapter 7.2: Weather Map
A nice addition for creating cinematic environments is a weather map. This map dictates where clouds can form and where they cannot. To implement this, you could generate an entirely new map based on noise, or draw one yourself, and process that image in the shader. The process of implementing this is comparable to the one discussed in chapter 7.1.
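A minimal sketch of how such a map could be applied inside sampleDensity (the texture, sampler and variable names here are hypothetical):

// Project the sample position onto the container's top-down footprint
float2 weatherUV = (position.xz - boundsMin.xz) / (boundsMax.xz - boundsMin.xz);
// The red channel of the map dictates where clouds are allowed to form
float coverage = WeatherMap.SampleLevel(samplerWeatherMap, weatherUV, 0).r;
shape *= coverage; // zero coverage suppresses cloud formation entirely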

Chapter 7.3: More Detail
It is possible to implement a second texture for more detail around the clouds. I’d implement this by re-using the noise generation technique for an entire second texture, giving another 4 channels to fill with noise. This noise could then only be applied in areas where cloud density is higher than a specific value, ensuring empty sky is not filled with cloud detail.
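A sketch of how that could look inside sampleDensity (names are hypothetical): the detail noise only erodes the base shape where clouds already exist.

if (baseShapeDensity > detailThreshold)
{
    // Sample the second 4-channel noise texture at a higher frequency
    float4 detail = DetailNoiseTex.SampleLevel(samplerDetailNoiseTex, (size * .5 + position) * 0.001f * detailScale, 0);
    // Subtract weighted detail so the cloud edges erode into finer shapes
    baseShapeDensity -= dot(detail, normalizedDetailWeights) * detailWeight;
}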
Chapter 8: Conclusion
In conclusion, I’m quite happy with the end result I created. Whilst it didn’t completely turn out how I originally envisioned it functionality-wise, I do think it turned out really nice visually.

I discovered a lot about shader and lighting techniques on this journey, especially in the field of volumetrics. Knowing the steps needed in these calculations will help in future work on volumetric shaders. I’m also happy with my research and development process and with the possibilities left for the future of this project. Whilst already functional, this project could also serve as a basis for future cloud-based projects. Were this shader optimized further, I could also see it being used in actual commercial products.
Sources
- Babić, D. (2018, June). Volumetric Atmospheric Effects Rendering. University of Zagreb. http://www.zemris.fer.hr/predmeti/ra/Magisterij/18_Babic/Final_0036470256_56.pdf
- Brucks, R. (2016, November 16). Creating a Volumetric Ray Marcher. Shader Bits. https://shaderbits.com/blog/creating-volumetric-ray-marcher
- Häggström, F. (2018, June). Real-time rendering of volumetric clouds. http://www.diva-portal.org/smash/get/diva2:1223894/FULLTEXT01.pdf
- Lague, S. [Sebastian Lague]. (2019, October 7). Coding Adventure: Clouds [Video]. YouTube. https://www.youtube.com/watch?v=4QOcCGI6xOU
- Olajos, R. (2016). Real-Time Rendering of Volumetric Clouds. Lund University. https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8893256&fileOId=8893258
- Pálenik, J. (2016). Real-time rendering of volumetric clouds. Masaryk University. https://is.muni.cz/th/d099f/thesis.pdf
- Pharr, M., Jakob, W., & Humphreys, G. (n.d.). Phase Functions. PBR Book. https://www.pbr-book.org/3ed-2018/Volume_Scattering/Phase_Functions
- Schneider, A. (2015, August 10). The Real-Time Volumetric Cloudscapes of Horizon Zero Dawn – Guerrilla Games [Presentation slides]. Guerrilla Games. https://www.guerrilla-games.com/read/the-real-time-volumetric-cloudscapes-of-horizon-zero-dawn
- Vivo, P., & Lowe, J. (2015). Cellular Noise. The Book of Shaders. https://thebookofshaders.com/12/
- Walczyk, M. (n.d.). Ray Marching. Michael Walczyk. https://michaelwalczyk.com/blog-ray-marching.html
- World Meteorological Organization. (n.d.). Definitions of Clouds. International Cloud Atlas. https://cloudatlas.wmo.int/en/clouds-definitions.html