I’ve been working on a third-person exploration and adventure game called ‘The Nomad’ for the past six months, and I wanted to share some of the techniques I’ve learned in that time.
My previous post dealt with implementing Distance Fog using a Post-Process material.
This time, we are going to explore how to implement Ground Fog in Unreal Engine 4.
Ground Fog is very important for a variety of reasons.
Here is the same scene as above, without the Ground Fog.

A couple of things you can notice:
- The scene still looks okay, but overall lacks any visual complexity.
- The color of the sand is now too repetitive and dominates the view.
- It is harder to differentiate between the foreground and background, though the distance fog helps somewhat.
So, we can see how Ground Fog can add to the overall aesthetic of a level. Let us now proceed to the implementation itself.

This set of material nodes is responsible for raycasting forward a certain distance (ML_Raycast), finding a world position, and scaling it by NoiseSize.
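As a rough CPU-side sketch of what this step computes (the function name, and the RayDistance parameter, are my own; the actual node setup may differ): step from the camera position along the view direction, then scale the resulting world position by NoiseSize so it can drive the noise lookup.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical sketch of the ML_Raycast step: project forward from the
// camera by a fixed distance, then scale by NoiseSize.
Vec3 RaycastScaledWorldPos(const Vec3& camPos, const Vec3& camDir,
                           float rayDistance, float noiseSize)
{
    // World position a fixed distance in front of the camera.
    Vec3 worldPos {
        camPos.x + camDir.x * rayDistance,
        camPos.y + camDir.y * rayDistance,
        camPos.z + camDir.z * rayDistance
    };
    // Dividing by NoiseSize controls how large the noise pattern appears
    // in the world (larger NoiseSize = bigger fog features).
    return { worldPos.x / noiseSize,
             worldPos.y / noiseSize,
             worldPos.z / noiseSize };
}
```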

This world position is then fed into MF_NormalMaskedVector, which masks the input WorldPosition with the vertex normal to produce a UV value for the moving fog texture.
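The idea behind the masking can be sketched like this (the function name and the exact masking rule are assumptions on my part): the vertex normal selects which axis of the world position to drop, and the remaining two components become the fog UVs. For flat ground the normal points up, so the UVs are simply the world-space XY coordinates.

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };
struct Vec2f { float u, v; };

// Sketch of the MF_NormalMaskedVector idea: mask out the world-position
// component along the dominant normal axis, keep the other two as a UV pair.
Vec2f NormalMaskedUV(const Vec3f& worldPos, const Vec3f& normal)
{
    float ax = std::fabs(normal.x);
    float ay = std::fabs(normal.y);
    float az = std::fabs(normal.z);
    if (az >= ax && az >= ay) return { worldPos.x, worldPos.y }; // up-facing
    if (ax >= ay)             return { worldPos.y, worldPos.z }; // x-facing
    return { worldPos.x, worldPos.z };                           // y-facing
}
```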

The output of the moving fog is then multiplied in (in my case I use Add instead; it works here, but might give odd results in other setups), and then an If node defines a world-Z cutoff for the fog.
If the world Z of the vertex being drawn is less than the cutoff value, we multiply the moving fog color value into the post-process output. You can think of it as a simple conditional check:
if (PixelWorldZ < CutoffValue)
    DisplayFog();
Then, to make the fog fade smoothly as it approaches the cutoff value, we use another If node to check the distance between the cutoff Z value and the current pixel's world Z. If the pixel is within the gradient fade range (defined by GradientRange), we lerp between the fog color output and 1.
if (WorldZCutoff - PixelWorldZ < GradientRange)
    LerpBetweenFogColorAndSceneColor();
Where the lerp output is 1, only the scene color is used.

The final output of all this is multiplied into the PostProcessInput, and then fed into the Emissive Color.
This material uses the Post-Process material domain. Assign it to a Post-Process Volume, and you should be good to go.
Hope this is helpful to someone!