Required (or at least highly recommended) reading:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
https://www.alanzucconi.com/2016/07/01/signed-distance-functions/#introduction
https://9bitscience.blogspot.com/2013/07/raymarching-distance-fields_14.html
http://blog.hvidtfeldts.net/index.php/2011/06/distance-estimated-3d-fractals-part-i/
Preamble:
This article assumes that you are at least somewhat familiar with how the Material Editor in Unreal 4 works and have a working knowledge of HLSL, as I won’t be stopping to define a lot of concepts beforehand.
Nothing except for the math here is really that complicated though, and if you have any experience with a game engine in general, you should be fine. (and the math can be ignored and the formulae used as is)
You should understand how the custom node functions, at least somewhat if not in entirety, and more importantly, how to use it.
This would be a good point to check out Inigo Quilez’s article on signed distance fields or this article on how to raymarch using them.
Also good is this article on distance estimated fractals.
In case you are having trouble understanding how to implement the raymarching, this series of articles by Alan Zucconi in Unity is a great primer and is how I first got to grips with the subject.
This is what your material setup should look like in order to get the raymarching of SDFs to work (set your shading model to ‘Unlit’ when you create the material):
Code in the ‘Raymarch’ custom node is here.
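The raymarch node itself is essentially a sphere-tracing loop. As a rough sketch of what goes inside it (not the exact contents of my node; the CameraPosition, MaxSteps and MinDistance inputs and the SceneSDF placeholder are illustrative names):

    float3 RayOrigin = CameraPosition;
    float3 RayDir = normalize(WorldPosition - CameraPosition);   // march from the camera towards the pixel
    float t = 0.0;
    float3 Color = float3(0.0, 0.0, 0.0);
    for (int i = 0; i < (int)MaxSteps; i++)
    {
        float3 p = RayOrigin + RayDir * t;
        float d = SceneSDF(p);      // placeholder for whatever distance field you wire in or define in code
        if (d < MinDistance)        // close enough to the surface to count as a hit
        {
            Color = float3(1.0, 1.0, 1.0);
            break;
        }
        t += d;                     // the distance field guarantees we can safely step at least this far
    }
    return Color;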
The ‘SphereSDF’ node should have the HLSL version of this function, which means vec3 becomes float3.
The argument ‘p’ is the ‘WorldPosition’ input of the raymarch node, i.e. the AbsoluteWorldPosition you will be passing in from the material graph.
There’s some stuff with local and world space to deal with that’s under ‘Coordinate spaces and conventions’.
Making material SDFs:
What I did at first was to wrap all these SDFs in custom nodes in Unreal, so that I could plug them in and out of custom nodes along with their inputs in order to pass them on.
The way custom nodes work makes it necessary for the smaller SDF functions and their inputs to be plugged into the base raymarcher function, so that the HLSL translator can pick up all of them and put the function calls in the right places.
Let me be clear: this is bad and not a sustainable way to work, and not just because the way the translator names these functions is entirely opaque and liable to change.
It is an awful limitation for what is ostensibly a visual coding language, as it makes the act of plugging in a distance field to use as a shape (the ideal way I envisioned this working) arduous compared to just typing out the code instead.
I’m convinced it won’t scale to more complex models as it currently stands, but as a learning experiment to understand how SDFs and raymarching work it’s still okay.
In case you were looking forward to modelling with SDFs right out of the box using the custom nodes, I figured I’d head your expectations off at the pass, because the material editor makes achieving that type of functionality pretty hard.
I figured out a half-solution later on; it’s located after all the SDF math.
Coordinate spaces and conventions:
For converting Inigo Quilez’s SDF functions into versions usable with transforms and world space objects, keep in mind that all of his functions assume the object is centered at the origin: where he uses length(Position), you would instead use either length(PixelWorldPosition - ActorPosition) or distance(PixelWorldPosition, ActorPosition).
The SDFs seen here use the convention of negative values being within the object and positive values failing the distance test; in other places you might see this convention reversed, so it’s something to keep in mind.
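As a minimal example of that conversion, here is the centered sphere next to a world-space version (‘ActorPosition’ standing in for whatever object/actor position input you pass in from the graph):

    float SphereSDF(float3 p, float Radius)
    {
        return length(p) - Radius;   // assumes the sphere is centered at the origin
    }

    float WorldSphereSDF(float3 PixelWorldPosition, float3 ActorPosition, float Radius)
    {
        // identical to length(PixelWorldPosition - ActorPosition) - Radius
        return distance(PixelWorldPosition, ActorPosition) - Radius;
    }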
Boxes:
A trick to simplify SDFs for objects symmetrical about their center (like the box and rounded box) is to use abs(PixelWorldPosition): positions on either side of the center can then be treated the same, which simplifies things quite a bit.
Subtracting the box dimensions (its half-extents along each axis) from the position being tested determines whether the point is in the box or not: if the result is negative, the point MUST be within the volume, while a positive result means that the point is outside the box.
It’s simple vector math that’s going on here; addition shifts a vector along another vector, and subtraction shifts it in the opposite direction.
The result is then fed into a max-op with a zero vector to eliminate negative values from the length check, as negative and positive values of the same magnitude would give the same length, but if a component is negative we already know that axis is inside and hence don’t care what the actual value is.
The max is just to isolate any positive values, and then a length check is done, returned to the raymarching function, and compared against the minimum distance (Alan’s tutorials define this as distance-aided raymarching); if it’s lesser than that, the point counts as being on (or inside) the box.
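For reference, the HLSL version of the box distance described above looks roughly like this (adapted from Inigo’s reference; b holds the half-extents and the names are my own):

    float BoxSDF(float3 p, float3 b)
    {
        float3 d = abs(p) - b;        // exploit the symmetry, then test against the half-extents
        return length(max(d, 0.0));   // negative components are already inside, so clamp them away
    }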
Torus:
Changing the swizzled components p.xz and p.y here to a different set of axes would cause the torus to be aligned along a different axis.
p.xz determines which axes the torus extends along, and p.y determines the axis of the torus’ thickness.
Subtracting t.x from the length of p.xz gives the external radial boundary, meaning t.x is the parameter that controls the outer radius of the torus. The length of p.xz and the value of p.y need to be such that the resulting length of q is lesser than t.y (the internal/tube radius) in order for the point to be in the SDF.
The order of the elements that form q is irrelevant as we take the length of that vector.
Subtracting t.y from the final result gives the thickness of the torus.
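In HLSL the torus looks roughly like this (again adapted from Inigo’s GLSL, with t.x as the outer radius and t.y as the thickness):

    float TorusSDF(float3 p, float2 t)
    {
        float2 q = float2(length(p.xz) - t.x, p.y);   // distance from the ring in the xz plane, plus the height
        return length(q) - t.y;                       // subtract the tube thickness
    }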
It would seem that by combining any two axes and taking the length of the resultant vector, we bias the SDF towards including points along those axes.
This same type of axial combination is used for cylinders too.
Cylinder:
This formula seems to be in a bit of error, as changing the values of c.xy doesn’t affect the cylinder shape but rather offsets it from the center. They can be eliminated entirely, as a translation can be achieved more easily/transparently by adding an offset to the position input.
As such the only parameter that affects the cylinder shape is c.z, which changes the radius of the infinite cylinder.
Similar to the axial combination seen in the previous SDF, p.xz affects which axes the cylinder is oriented along.
A consistent thing you can notice is that subtracting a single float value from a distance output of a primitive causes a shape of somewhat spherical nature to emerge from the SDF.
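For reference, the infinite cylinder in HLSL (note the single float c.z subtracted at the end):

    float CylinderSDF(float3 p, float3 c)
    {
        // c.xy merely offsets the cylinder from the center; c.z is the radius.
        // The cylinder extends infinitely along the axis left out of the swizzle (y here).
        return length(p.xz - c.xy) - c.z;
    }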
Cone:
Here the two-axis combination p.xy doesn’t determine which way the cone is pointed (as you might expect, or at least as I did), but rather which plane the base of the cone is aligned with.
Instead, p.z determines which axis is used for the tip of the cone to point to.
c.x controls the width of the base, c.y controls the narrowness of the tip.
I think the dot-product is used for component-wise multiplication here as opposed to any particular geometric purpose.
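The cone in HLSL, for reference (c has to be normalized, as in the original):

    float ConeSDF(float3 p, float2 c)
    {
        // c must be normalized; c.x relates to the base width, c.y to the narrowness of the tip
        float q = length(p.xy);          // distance in the plane of the base
        return dot(c, float2(q, p.z));   // p.z is the axis the tip points along
    }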
Plane:
The plane formula is straightforward: it prevents values from one axis from being accepted into the SDF, and the other two axes are what the plane is aligned to.
The plane normal being (0, 0, 1) for instance will only allow values on the xy plane in the SDF (as seen in the example).
The n.w bit is for a plane offset in/opposite to the direction of the normal.
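The plane in HLSL, for reference:

    float PlaneSDF(float3 p, float4 n)
    {
        // n.xyz must be a normalized plane normal; n.w offsets the plane along it.
        // With n = (0, 0, 1, 0), the surface is the xy plane and anything with negative z counts as inside.
        return dot(p, n.xyz) + n.w;
    }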
Hexagonal Prism:
As before with the boxes, the abs(Position) is used to exploit the symmetry of this shape about its center.
h.x changes the radius of the hexagon.
h.y changes the thickness of the prism.
The q.z value is what controls the thickness axis.
The second set of values, q.x * 0.866025 + q.y * 0.5, is what controls the shape of the hexagon, with q.x * 0.866025 changing the width of the hexagon and q.y * 0.5 changing its narrowness.
Manipulating the q.y term can yield shapes like a diamond if you increase the scalar it’s multiplied by.
As far as I can tell the numerical constants are magic numbers that yield the correct shape of the hexagon.
It is beyond me how anyone figured those constants out in the first place (excepting trial and error), but that’s math for you.
The q.y value controls the up vector of the hexagon; multiplying it by a scalar < 1 decreases the height, and vice versa for a scalar > 1.
Changing the axes around will change the orientation of the hexagon.
A bit later on we’ll discuss how maximum and minimum operations can be used to create intersections and unions of shapes respectively, but this is a good opportunity to see their mathematical use in an SDF.
The max here (and also in the box SDFs, but this aspect of them is more transparently visible here) is an intersection that acts as an upper bound to the values that are accepted into the SDF. Take the first max away and the hexagon becomes an infinite hexagon along the thickness axis.
Take the second max away and you’re left with an infinite diamond.
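For reference, the hexagonal prism in HLSL, with both of those max operations visible:

    float HexPrismSDF(float3 p, float2 h)
    {
        float3 q = abs(p);   // symmetry about the center, as with the boxes
        // first max: caps the prism along the thickness axis (h.y);
        // second max: shapes the hexagonal cross-section, with h.x as its radius
        return max(q.z - h.y, max(q.x * 0.866025 + q.y * 0.5, q.y) - h.x);
    }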
Triangular Prism:
This is very similar to the hexagonal prism except that the second max operation uses the original position as opposed to the absolute value of it.
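Side by side with the hexagonal version, the difference is the use of the original p.y rather than q.y in the second max (plus the halved h.x in the reference version):

    float TriPrismSDF(float3 p, float2 h)
    {
        float3 q = abs(p);
        // note the original p.y (not q.y) in the second max, unlike the hexagonal prism
        return max(q.z - h.y, max(q.x * 0.866025 + p.y * 0.5, -p.y) - h.x * 0.5);
    }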
Capsule/Line:
This is a strange one, requiring 4 inputs unlike most of the others which need only 2 or 3.
The input a controls where the capsule starts, and can also serve as an offset for the capsule.
The input b controls where the capsule ends and controls the direction in which the capsule extends.
Controlling the axis along which the capsule is oriented is just a matter of changing the CapsuleStart and CapsuleEnd, as opposed to any change in the axial combinations within the actual shader math.
The primary function of this SDF seems to be to create a line as opposed to a capsule, and the capsule is a neat extra that you gain by subtracting the single float value ‘CapsuleRadius’ at the very end (remember what I said about how subtracting a single float value gives you shapes of a spherical nature?).
To compare this to another geometric operation, this seems similar to sweeping along a line with a certain radius, resulting in a capsule.
pa gives you a vector local to the start of the capsule, and ba gives you the non-normalized direction vector of the capsule, a line along which points would be accepted.
The calculation of h is what seems to control the actual “sweeping”, in creating a ratio between the dot products.
I’m not 100% certain why the ratio of dot products is used here, but I’m under the impression that it constrains values that are out of the range of the capsule SDF; any value that is beyond the SDF results in the clamp giving the extreme values of 0 and 1.
In turn, the h value is used in the next line, and if it’s at (or close to) the extreme values of 0 and 1, it results in the length being a large positive value, which means the tested position is out of range of the SDF.
As may be evident, math is hard, and I’m no mathematician, so I can only guess at the purpose of these operations and hope that my conclusions are accurate. I would welcome any feedback or corrections if I made a mistake, so feel free to provide them.
The clamp is what confines the SDF at the ends to taper to the capsule tips. Without it the result is an infinite cylinder.
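The capsule/line in HLSL, with the four inputs discussed above:

    float CapsuleSDF(float3 p, float3 CapsuleStart, float3 CapsuleEnd, float CapsuleRadius)
    {
        float3 pa = p - CapsuleStart;            // position relative to the start of the capsule
        float3 ba = CapsuleEnd - CapsuleStart;   // non-normalized direction of the capsule
        // h is how far along the segment the closest point lies, clamped to the two ends
        float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
        return length(pa - ba * h) - CapsuleRadius;   // distance to the line segment, minus the radius
    }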
Capped Cylinder:
Similar to the infinite cylinder, we start with the p.xz two-axis combination here, but we also use abs to exploit the shape’s symmetry about its origin.
Changing around the axes in p.xz and p.y changes the orientation of the cylinder; p.xz is the radial plane, and p.y is the up axis.
h.x controls the radius of the cylinder, h.y controls the height.
The vector d is formed from a series of straight-line distance checks.
The xz plane is the plane of the radius: the length of the vector on it is found and compared against the desired radius, and the same happens for p.y on the up axis with h.y. If the result is <= 0, the point is probably within the SDF, but there are more ops to do before we can know for sure.
The min & max ops don’t seem to be necessary here? Removing them didn’t affect the shape of my cylinder at all (the min(max(d.x, d.y), 0.0) term only supplies the correct negative distance for points already inside the cylinder, so dropping it doesn’t change the visible surface).
The max of d with 0 is to remove any negative values, as those would be within the SDF anyway but would return a large positive value when the length is taken, similar to what happened with the box SDFs (thus invalidating a correct point which is actually within the SDF).
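The capped cylinder in HLSL, with all of the ops discussed above left in:

    float CappedCylinderSDF(float3 p, float2 h)
    {
        // h.x = radius, h.y = half-height; p.xz is the radial plane, p.y the up axis
        float2 d = abs(float2(length(p.xz), p.y)) - h;
        // min(max(...), 0) supplies the (negative) interior distance; length(max(d, 0)) the exterior one
        return min(max(d.x, d.y), 0.0) + length(max(d, 0.0));
    }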
Capped Cone, Triangles, Quads:
I couldn’t really understand this one, and I couldn’t get it to work either. Let’s say it’s been left as an exercise for the reader.


Second Iteration:
Making models more complicated than the basic primitives would be very tedious: for every shape you needed, you’d have to plug a bunch of inputs into the main raymarching node in the right order, which wasn’t straightforward to see and was liable to change if you needed to reorder your shapes for some reason. It just wasn’t practical to do.
In this picture, each custom node has been wrapped in a Material Function, which does help in making them more reusable, as those are now accessible to any material, but you still have the issue of plugging inputs in a fragile, compiler-dependent order.
Doing these by code would be far faster, which defeats the purpose of using visual programming.
When I was looking for a solution to these problems, I stumbled across this little gem on Ryan Brucks’ blog, where he said that it was possible to insert HLSL functions that could be called from custom nodes.
I tossed all the SDFs (with HLSL functions wrapped around them) into ‘Common.ush’ and then was able to call these functions from the custom nodes.
This means that the material functions created earlier can serve as in-engine documentation for the SDFs, while the SDFs themselves are called directly with code in the custom nodes, removing one pin per SDF (the pin of the SDF custom node/material function) from the hookup process.
The other issue still exists, though: you have to plug in the inputs for the functions you call in code, as opposed to having all of that bundled into a single input.
However, this method allows a single generic raymarch node to be used, with inputs added as needed, whereas in the previous version I tried to make raymarch nodes generic based on how many inputs a single SDF function needed, which resulted in way too much spaghetti.
It also makes the process of compositing different SDFs into a single, more complex shape more feasible and scalable.
Of course the solution is now a half visual programming and half coding based approach, but there are no free lunches.
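To make the workflow concrete, this is roughly what it ends up looking like (the wrapper names are my own, not engine functions):

    // In 'Common.ush':
    float BoxSDF(float3 p, float3 b)
    {
        float3 d = abs(p) - b;
        return length(max(d, 0.0));
    }

    // In the raymarch custom node, wherever the loop samples the distance at the
    // current position p, shapes can now be composed directly in code, e.g.:
    //     float d = min(BoxSDF(p - ActorPosition, BoxExtents),
    //                   TorusSDF(p - ActorPosition, TorusRadii));
    // with ActorPosition, BoxExtents and TorusRadii exposed as pins on the node.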
Distance Operations:
Now that the more elementary shapes are out of the way, we can start to do more cool stuff with the distance fields, like manipulate them with other operations in order to add, subtract and intersect them.
How these are supposed to be used is fairly self-explanatory: the distance ops take as arguments the results from the distance fields and return the result of the respective operations they perform to the raymarcher.

The math behind them is also fairly straightforward:
Union:
The union acts as a lower bound by using the minimum, and hence biases the output towards the more negative values, meaning that, for example:
In the case of a sphere of radius 6 and a cube with half-extents of 4, both centered at the origin, and the point (0, 5, 0), their SDFs give -1 and 1 respectively for this point. The union uses the minimum value and hence accepts the sphere’s output instead of the cube’s. Hence BOTH the sphere and the cube are rendered where previously only one would be.
Intersection:
The intersection is the opposite of the union: the maximum is used instead, which biases the output towards more positive values. So where the union would accept the volume generated by both shapes, the intersection only accepts the most conservative estimate of the two, i.e. the region that is DEFINITELY within the SDFs of both volumes.
Subtraction:
Subtraction could be said to be a special case of the intersection where, by negating the values of one SDF, you bias the output towards accepting the values of only one volume and completely excluding the other, which gives you subtraction of the volumes (the negated SDF is carved out of the other).
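In HLSL the three operations are one-liners (translated from Inigo’s reference, using the convention from earlier where negative means inside):

    float OpUnion(float d1, float d2)        { return min(d1, d2); }
    float OpIntersection(float d1, float d2) { return max(d1, d2); }
    float OpSubtraction(float d1, float d2)  { return max(-d1, d2); }   // carves d1 out of d2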
Domain operations:
Repetition:
This was an operation I REALLY wanted to learn, because who doesn’t love infinitely repeating M.C. Escheresque stuff like this?:
An issue I ran into though is that the GLSL mod and HLSL fmod aren’t completely equivalent, which is very nicely explained in this article.
I was able to perform the repetition, to a degree, but had weird artifacts like this:
To fix this, I added another function to ‘Common.ush’ as a replacement for the GLSL mod and used that instead, which fixed the positioning errors in the repetition, but there were still issues with the raymarcher for some reason:
The 9bitScience article had this warning about operations that act on the input position of the SDFs:
This meant that by multiplying the result of the distance field by a scalar < 1, increasing the steps the raymarcher takes, and increasing the precision of the accepted distance, I could finally get the repetition operation working correctly:
Some of the boxes at the edges still had raggedy edges though, and that stumped me for a bit until I stumbled across this video:
This video showed me that the repetition function shown in Inigo’s article might be wrong, or might just not work in Unreal’s setup for whatever reason: at about 18-19 minutes in, he shows the implementation they use in their tool, which includes a ‘+ 0.5 * spacing’ compensation to the domain position being supplied.
This finally fixed everything for me.
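Put together, the fixed repetition looks roughly like this in ‘Common.ush’ (my naming; the flooring mod replaces GLSL’s, and the + 0.5 * spacing is the compensation from the video):

    // GLSL-style mod; HLSL's fmod behaves differently for negative inputs
    float3 GlslMod(float3 x, float3 y)
    {
        return x - y * floor(x / y);
    }

    // domain repetition with cell spacing c; the + 0.5 * c recenters each cell on the shape
    float3 OpRepeat(float3 p, float3 c)
    {
        return GlslMod(p + 0.5 * c, c) - 0.5 * c;
    }

    // usage: float d = BoxSDF(OpRepeat(p, Spacing), BoxExtents) * 0.5;
    // (the scalar < 1 and a higher step count/precision being the raymarcher tweaks mentioned above)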
Rotation/Translation:
Translation isn’t a big issue considering that the raymarching is already attached to an object position and hence moves with the actor anyway.
There are two issues with implementing the rotation operation, though.
One is that matrices can’t be provided as an input in the material editor, though I have seen some material functions for transforming vectors that take individual basis vectors, which might work if you construct the matrix in code (the HLSL documentation for which you can find here).
The other issue is that, unlike GLSL, HLSL doesn’t have a built-in function for determining the inverse of a matrix, only a transpose; the transpose would work if the model matrix were orthogonal, but that isn’t the case if the model matrix encodes a translation or scaling, as those do not preserve orthogonality.
This Stack-Exchange answer explains nicely why the transpose wouldn’t work for a model matrix.
However, I managed to hack in a limited amount of rotation functionality using some HLSL shader constants that are set per material/object before the shader is evaluated.
These are called ‘Primitive uniform shader parameters’ in Unreal; for people more familiar with GLSL (as I am), they are more simply called ‘uniforms’.
The documentation for them is decent though, and lists all the ones you can use along with a short comment about each parameter’s purpose.
The one you need here is ‘Primitive.LocalToWorld’, a matrix variable that contains the model transforms we need, in particular the rotation transform.
For more information on matrix transforms, this link has been my go-to reference for years now.
Keeping in mind that scaling can mess up distance fields (as it’s an operation that doesn’t preserve vector length), you also need to set the scale values in the model matrix back to 1 so that they don’t affect the SDFs you use them on.
After that it’s just a matter of multiplying it against the position input for the SDF you want to be rotated, and you have rotation of the SDF when you rotate the object!
The rotation doesn’t seem to play well with the raymarcher at extreme angles (as can be seen in the gif below), and it also doesn’t work well with the domain repetition operation if you rotate things beyond a couple of degrees.
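For what it’s worth, the rotation hack boils down to something like this; treat it as a sketch rather than a drop-in solution, since depending on the row/column convention you may need the transpose or the mul arguments swapped to get from world space into the object’s rotated frame:

    float3 RotateWithPrimitive(float3 p)
    {
        float3x3 m = (float3x3)Primitive.LocalToWorld;   // upper 3x3 of the model transform
        m[0] = normalize(m[0]);   // set the scale back to 1 on each basis vector,
        m[1] = normalize(m[1]);   // since scaling would mess up the distance field
        m[2] = normalize(m[2]);
        return mul(p, m);         // or mul(m, p) / mul(p, transpose(m)), depending on convention
    }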

Using SDFs with non-Euclidean norms:
These can be implemented pretty easily: just make a new length function in ‘Common.ush’ that also takes a ‘norm’ float argument, and you’re good to go; you can call it from the additional SDF functions you add for the shapes that use the new norms.
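Something along these lines, for example (overloads of a length function that take the norm as a parameter; a norm of 2 gives back the ordinary Euclidean length):

    float LengthN(float2 p, float Norm)
    {
        p = abs(p);
        return pow(pow(p.x, Norm) + pow(p.y, Norm), 1.0 / Norm);
    }

    float LengthN(float3 p, float Norm)
    {
        p = abs(p);
        return pow(pow(p.x, Norm) + pow(p.y, Norm) + pow(p.z, Norm), 1.0 / Norm);
    }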

Scale:
The inverse multiplication inside the primitive call confused me, but I read on a forum thread on pouet that it is there to counteract the scaling of the resulting distance, which would otherwise mess with the distance field. The reason is that rotation and translation preserve the length of vectors but scaling does not, and Inigo mentions this in his article as well.
One issue with this operator is that it’s hard to generalize, because it needs to be applied both inside and outside of a primitive call, for each distance field operation. Thus it needs to be invoked on an individual basis for each SDF, as opposed to having a single function you can call to do it once.
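As an illustration of why it has to be done per primitive, a scaled torus ends up looking like this (TorusSDF being the example primitive, as sketched earlier):

    float ScaledTorusSDF(float3 p, float s, float2 t)
    {
        // divide the position going in and multiply the distance coming out,
        // so that the returned value stays a true distance
        return TorusSDF(p / s, t) * s;
    }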
Deformation operations:
Displacement should be obvious: it serves to distort the primitive SDF in some manner, so I think it’s just a matter of trying out different functions and seeing what works for your intended look.
In the image above I’m using the displacement function Inigo mentions in his article.
The Blend operation’s purpose seems to be to eliminate the discontinuities from the union operation, but more simply it’s a function that allows for smooth interpolation between SDFs where the union would do a hard joining operation.
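For reference, the sin-based displacement Inigo gives as an example and the polynomial smooth minimum behind the blend look like this in HLSL (GLSL’s mix becomes lerp):

    float OpDisplace(float d, float3 p)
    {
        // the 20.0 frequency is just the example value from the article
        float displacement = sin(20.0 * p.x) * sin(20.0 * p.y) * sin(20.0 * p.z);
        return d + displacement;
    }

    // polynomial smooth minimum; k controls how wide the blend region is
    float SmoothMin(float a, float b, float k)
    {
        float h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
        return lerp(b, a, h) - k * h * (1.0 - h);
    }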
Twist and Bend:
The problem with these last two operators is that they aren’t giving the expected results, even with changes to the constants being used and changes to the axes to account for the different coordinate systems.
What I get are odd smeary results from the twist operation:
And results that kind of look like they’re working at some viewing angles but fall apart at others from the Bend operation.
Will keep tweaking and update this post if I get them to work right, but for now that more or less does it.
Feel free to reach out to me @nightmask3 on Twitter!
Thanks for reading!