Technical Art Demo Reel

 

A compilation of work I’ve done using Blender, Houdini, Adobe Creative Cloud, Quixel Suite, xNormal and Unreal 4.

What follows is a breakdown of what’s happening in each shot:

1) Interactive Electricity/Lightning effect:

After watching the Unreal Dev Days talk given by Alan Willard, a Senior Developer Relations Technical Artist at Epic, and his subsequent demonstration of the same effect on the UE4 livestream, I wanted to try to replicate it. The effect depends on Blueprints, Materials, lights, particles and sounds all interacting to achieve the final result, which is something I'd never done before, and it seemed like a good challenge to improve my knowledge of Unreal.

For my implementation I left out the sound but replicated everything else, as far as I can make out. I was particularly impressed by the range of options that the system provided to designers in order to tweak the effect however they want, and especially how the same asset could safely and effectively be used in multiple scenarios to achieve a variety of different visual compositions.

The system itself isn't really that complicated once you break it down, which is all the more impressive to my mind. It just involves raycasting in a random direction, spawning a new electric arc and particles if that cast hits something, and repeating ad nauseam.

Most of the magic happens inside the Material, which doesn't use any textures at all to produce the electrical noise. Instead it layers Fast Gradient Noise from the built-in Noise node at different scales and tiling levels to produce the distortion.

The electric arc itself is a spline with a simple cylinder mesh chosen to be stretched along that spline, and the electricity Material is applied to that cylinder. This bit is accomplished using Unreal’s ‘SplineMeshComponent’ functionality.

When an arc is spawned, the spline's origin is used as the starting location; if the raycast hits something, the impact location is set as the end of the spline, and the spline and its mesh are updated to match.

2) Interactive fluid simulation using Materials, Blueprints and Render Targets:

This effect relies on the interaction of several systems:

  • Material system for fluid simulation – The fluid simulation code itself lives in Unreal's node-based shader system. It uses the Shallow Water Equations approach published by Jos Stam to minimize the complexity of the fluid simulation code and essentially reduce the problem from 3 dimensions to 2.5 dimensions. The basic idea is that the water is treated as a 2D grid where each grid point has a height value, and any change in height at a point is distributed to the neighbouring grid points (a rough sketch of this kind of height-field update follows this list).
  • Blueprints system for interaction –  The system can respond to physics objects that collide with the water plane, and this is enabled by having the Blueprint actor communicate a change in water height to the Material system whenever there is a collision with the water plane.
  • Render targets – The output of the Material system is fed to a render target which is basically a dynamic texture that updates every frame. Render targets are also used to feed the output of a frame into the Material system so that the next frame can be calculated on the basis of the previous frame.
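As a rough illustration (not the exact material), the per-texel height update in a setup like this boils down to something like the following custom node HLSL. HeightTex and PrevHeightTex are assumed render target inputs holding the current and previous frame's heights, and GridSize and Damping are assumed parameters:

float texel = 1.0 / GridSize;
float hL = Texture2DSample(HeightTex, HeightTexSampler, UV - float2(texel, 0)).r;
float hR = Texture2DSample(HeightTex, HeightTexSampler, UV + float2(texel, 0)).r;
float hU = Texture2DSample(HeightTex, HeightTexSampler, UV - float2(0, texel)).r;
float hD = Texture2DSample(HeightTex, HeightTexSampler, UV + float2(0, texel)).r;
float hPrev = Texture2DSample(PrevHeightTex, PrevHeightTexSampler, UV).r;
// Verlet-style wave propagation: the neighbours push this column, the previous frame supplies its velocity
float hNew = (hL + hR + hU + hD) * 0.5 - hPrev;
return hNew * Damping; // Damping slightly below 1 so ripples die out over time

The output goes to one render target while the previous frame's targets are read, and the Blueprint adds height at the collision point before the next update.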

3) Python tool for animating cinematics:

A tool that allowed the user to create manipulators bound to different properties, which they could then animate in the Sequencer tool. This gave access to lower-level C++ properties and to animating properties inside structs, something that base Unreal does not provide.

It was also built into an editor interaction mode of its own and allowed the user to pick the actor to animate using an eyedropper tool.

The tool also provided a number of convenience features and modes that changed how the manipulators moved, interacted and controlled the property they were bound to, and it let the user quickly select the different components of a manipulator.

4) Caustics Generator:

I was working on a light study of Blade Runner 2049, of the scenes inside Wallace Corporation, a screenshot of which I’ll include:

(Screenshot: Luv's office inside the Wallace Corporation building.)

In order to achieve a setup similar to this, I figured I’d need to learn how to generate caustics.

There are a couple of ways to achieve this that I read about:

1) Jos Stam’s method to generate periodic caustic maps: https://www.opengl.org/archives/resources/code/samples/mjktips/caustics/

This method looked interesting and, like most of Jos Stam's work, is seminal, but it's a bit outdated for current quality requirements.

2) Realtime caustics:

This method seemed more feasible and delivered higher quality, but it still didn’t meet the bar for what I wanted in this scene for two reasons:

1) The final quality seems very dependent on the resolution of the wavefront mesh, the surface mesh and the grid plane you project onto. That's not ideal for a realtime environment where you might want results that hold up at 4K and in cinematics, and I don't think this method could scale that far, even with GPU computation.

2) The method doesn’t seem to allow for much tweaking or changes to be achieved in the final look, limiting its potential as an actual technical art tool.

I kept searching until I stumbled across the tech demo presented by Ryan Brucks, principal technical artist at Epic, during GDC 2017, where he demonstrated a method in which he baked the results of the caustic simulation into a flipbook which could then be used at runtime with a much lower cost.

I had no idea how to achieve results like that in Unreal, but I had also seen a video where a Houdini user demonstrated something remarkably similar.

I put two and two together, and figured that I could probably generate the simulations and flipbooks in Houdini, import them into Unreal, and then play those flipbooks back in a Material and I’d have the caustics as I needed them.

I reached out to the Houdini user on a forum post and asked them about their method and they were very helpful in describing which nodes to use and what the principle behind the method was.

After a week or two of hacking away at Houdini I had what I wanted, a tool that let you generate caustics and then bake out the flipbooks.

Doing this in Houdini has three advantages, as I see it:

1) You can bake out the flipbooks at any resolution, even very large ones that Unreal usually can't support as Render Targets without massive slowdowns and sometimes crashes.

2) You can use different types of input noise to generate the caustics, and change a bunch of parameters and options to tweak their final look before baking.

3) The method is exactly the same as the one in the Realtime caustics blogpost, but because it's carried out offline, I could increase the mesh resolutions as much as I wanted until the quality met the bar I had in mind.

I also did a bit of work after this in order to make the flipbook textures tileable using information from this blogpost, also by Ryan Brucks:

https://www.shaderbits.com/blog/tiling-within-subuv-or-volume-textures

This gave me tileable caustic flipbooks that could be animated within Materials using a ‘Flipbook Animation’ or ‘SubUV_Function’ node and then used as a Light Function Material or a simple Surface Material or even as a Decal Material if you wanted.
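If you ever need the same playback inside a custom node rather than the stock Flipbook/SubUV nodes, the sub-UV math is small. This is just a sketch; Rows, Cols, Speed, Time and UV are assumed node inputs:

float frames = Rows * Cols;
float frame = floor(frac(Time * Speed) * frames);             // looping frame index
float2 cell = float2(fmod(frame, Cols), floor(frame / Cols)); // which cell of the sheet to read
return (UV + cell) / float2(Cols, Rows);                      // remap the 0-1 UV into that cell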

5) “The Wanderer” character model:

I’m currently working on my own game, the working title being ‘Project Gilgamesh’.

It’s a third person parkour platformer, and I wanted to take the opportunity to learn how to make a next-gen character model that utilized Unreal’s newer cloth/rigid body physics and anim dynamics features that were introduced after 4.16.

The character modelling was done in Blender, and was based in large part on concept art for the Doctor Strange movie.

There's also some influence from the Adeptus Mechanicus of Warhammer 40K.

UV mapping was also done in Blender.

The high poly normals were baked down in xNormal before being imported into Quixel.

Texturing was done using Quixel Suite and Photoshop CC. All the textures were worked on at 4K and then rendered down to 2K at export.

The final skeletal mesh had 4 material slots, meaning 4 draw calls for the player per frame. I might add one more for all of the emissive points on the player like the eyes, lights from the gas mask etc.

I remember reading somewhere that Unreal recommends staying within 3-5 draw calls/materials per object, though I could not find the link again for a citation, so it might be hearsay; take it with a grain of salt.

The character's poly count is a little on the higher side, even with current-gen hardware in mind, but my reasoning is that few, if any, other characters will need anywhere near this level of detail.

Most other models will be environment models, props and maybe a few robotic enemy types.

The base biped model was rigged in Mixamo, brought back into Blender, skinned with clothes, gas mask and belt and then imported into Unreal.

Animations that were applied to the rigged unskinned model in Mixamo could then be imported into Unreal and worked right away on the skinned version as their skeletons were the same.

Cloth was animated using Unreal's new NvCloth realtime solver and cloth toolset, which allows you to paint values for the cloth simulation onto the mesh directly in the editor.

This saves a HUGE amount of time compared to the standalone APEX cloth tool released by Nvidia, which was rather complicated and obscure, without much up-to-date documentation either.

However, there are never any free lunches, and there are still some drawbacks to this cloth method: more complicated setups like thick cloth or multi-layered cloth are not possible with the toolset as it currently stands.


Also there’s the obvious drawback of how real time cloth is never really going to look as good as cloth that is simulated offline. Not for a very long time at least.

On the whole though I think this toolset is worth learning, if you can work around the problems with it and/or they don’t matter to your use case.

The character isn't finished yet, but it's close. I want to add a few more things, like an additional belt strap attached on two sides, which I'll probably do with rigid bodies or anim dynamics; both let you drive additional bone/socket motion on top of the character's skeletal animations.

6) Volumetric fog using custom node and HLSL:

A demonstration of a raymarched volumetric fog effect I’ve been working on for a while.

Based on the ShaderToy shader found here:
https://www.shadertoy.com/view/XtfSWX

The effect raymarches forward a certain distance, samples that point in world space and uses it to generate some noise ('triangle noise' is what the author of the ShaderToy shader calls it). A specified color is added, and the result is blended with the scene texture based on scene depth so that nearby objects are still visible rather than occluded.
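A much-simplified sketch of the core loop follows; it is not the exact shader, and CameraPosition, WorldPosition, SceneColor, SceneDepth, FogColor, Density, StepSize and NumSteps are all assumed custom node inputs:

float3 rayDir = normalize(WorldPosition - CameraPosition);
float accum = 0.0;
float t = 0.0;
for (int i = 0; i < NumSteps; i++)
{
    t += StepSize;
    if (t > SceneDepth)      // stop at opaque geometry so nearby objects stay visible
        break;
    float3 p = CameraPosition + rayDir * t;
    // Stand-in hash noise; the original uses the ShaderToy's "triangle noise"
    float n = frac(sin(dot(floor(p * 0.01), float3(12.9898, 78.233, 37.719))) * 43758.5453);
    accum += n * Density;
}
return lerp(SceneColor, FogColor, saturate(accum));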

The effect does face some issues with Translucency (always troubling in a deferred rendering setup); these can be solved by placing the post-process material before the Translucency step in the pipeline, at the cost of a performance hit.

Uses the Custom HLSL shader node in the Materials system.

I was working on this before Epic shipped their own volumetric fog solution. You could probably achieve the same effect using volumetric particles and their global fog but I haven’t attempted to port it or experiment with that so far.

7) Python tool to render curve data as a spline:

A tool that allowed the user to visualize their FBX animation data in the form of a spline curve. It utilized Unreal’s built in spline component.


Raymarching using Signed Distance Fields in UE4

Required (or at least highly recommended) reading:

http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm

https://www.alanzucconi.com/2016/07/01/signed-distance-functions/#introduction

https://9bitscience.blogspot.com/2013/07/raymarching-distance-fields_14.html

http://blog.hvidtfeldts.net/index.php/2011/06/distance-estimated-3d-fractals-part-i/

http://mercury.sexy/hg_sdf/

Preamble:

This article assumes that you are at least somewhat familiar with how the Material Editor in Unreal 4 works and have a working knowledge of HLSL, in that I won’t be stopping to define a lot of concepts beforehand.

Nothing except for the math here is really that complicated though, and if you have any experience with a game engine in general, you should be fine. (and the math can be ignored and the formulae used as is)

You should understand how the custom node functions, at least somewhat if not in entirety, and more importantly, how to use it.

This would be a good point to check out Inigo Quilez’s article on signed distance fields or this article on how to raymarch using them.

Also good is this article on distance estimated fractals.

In case you are having trouble understanding how to implement the raymarching, this series of articles by Alan Zucconi in Unity is a great primer and is how I first got to grips with the subject.

This is what your material setup should look like in order to get the raymarching of SDFs to work (set your shading model to ‘Unlit’ when you create the material):


Code in the ‘Raymarch’ custom node is here.
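The original post linked the full node code; as a stand-in, a minimal sphere-tracing loop looks roughly like this. MaxSteps, MinDistance, MaxDistance, SphereRadius, CameraPosition, WorldPosition and ObjectPosition are assumed node inputs, and the sphere SDF is assumed to be callable as a function (for example via the Common.ush approach covered later):

float3 rayOrigin = CameraPosition;
float3 rayDir = normalize(WorldPosition - CameraPosition);
float t = 0.0;
for (int i = 0; i < MaxSteps; i++)
{
    float3 p = rayOrigin + rayDir * t;
    float d = SphereSDF(p - ObjectPosition, SphereRadius); // distance to the nearest surface
    if (d < MinDistance)
        return 1.0;   // hit: shade this pixel
    t += d;           // it is safe to step forward by the returned distance
    if (t > MaxDistance)
        break;
}
return 0.0;           // miss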


The ‘SphereSDF’ node should have the HLSL version of this function, which means vec3 becomes float3.
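In HLSL that works out to a one-liner:

float SphereSDF(float3 p, float r)
{
    // Distance from p to the surface of a sphere of radius r centred at the origin
    return length(p) - r;
}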


The argument ‘p’ is the ‘WorldPosition’ argument of the raymarch node or AbsoluteWorldPosition you will be passing in from the Material graph.

There’s some stuff with local and world space to deal with that’s under ‘Coordinate spaces and conventions’.

Making material SDFs:

What I did at first was to wrap all these SDFs in custom nodes in Unreal, so that I could plug them in and out of custom nodes along with their inputs in order to pass them on.

How custom nodes function makes it necessary for the smaller SDF functions and their inputs to be plugged into the base raymarcher function, so that the HLSL translator can pick up all of them and put the function calls in the right places.

Let me be clear, this is bad and not a sustainable way to work, and not just because how the translator names these functions is entirely opaque and liable to change.

It's an awful limitation for what is ostensibly supposed to be a visual coding language, as it makes plugging in a distance field to use as a shape (the way I ideally envisioned this working) arduous compared to just typing out the code instead.

It won't scale to more complex models as it currently stands, of that I am convinced, but as a learning experiment to understand how SDFs and raymarching work it's still okay.

In case you were looking forward to modelling with SDFs right out of the box using custom nodes, I figured I'd head your expectations off at the pass: the Material Editor makes achieving that type of functionality pretty hard.

I figured out a half-solution later on; it's covered after all the SDF math.

Coordinate spaces and conventions:

For converting Inigo Quilez's SDF functions into versions usable with transforms and world space objects, keep in mind that all of his functions assume the object is centered at the origin. Where he uses length(Position), you would instead use length(PixelWorldPosition - ActorPosition) or distance(PixelWorldPosition, ActorPosition).

The SDFs seen here use the convention of negative values being inside the object and positive values failing the distance test; elsewhere you might see this convention reversed, so it's something to keep in mind.

Boxes:

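For reference, the HLSL translation of the (unsigned) box function from Inigo's article, which is what the discussion below refers to:

float BoxSDF(float3 p, float3 b)
{
    // b holds the box's half-extents along each axis
    return length(max(abs(p) - b, 0.0));
}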
A trick to simplify the SDF for objects that are symmetrical about their center (like the box and rounded box) is to use abs(PixelWorldPosition): positions on either side of the center can then be treated identically, which simplifies things quite a bit.

Subtracting the box dimensions from the position being tested determines whether the point is in the box or not: if a component is negative the point is within the volume along that axis, while a positive component means the point is outside the box along it.

It's simple vector math that's going on here: adding a vector shifts a point along that vector, and subtracting shifts it in the opposite direction.

The result is then fed into a max op with a zero vector to eliminate negative values from the length check, since a vector and its negation have the same length; if a component is negative we already know the point is inside along that axis and don't care what the actual value is.

The max just isolates the positive components. The length of the result is then returned to the raymarching function and compared against the minimum distance (Alan's tutorials call this distance-aided raymarching); if it's less than that, the point counts as a hit on the box.

Torus:

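The torus function in HLSL, for reference:

float TorusSDF(float3 p, float2 t)
{
    // t.x is the ring radius, t.y the thickness radius
    float2 q = float2(length(p.xz) - t.x, p.y);
    return length(q) - t.y;
}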

Changing the swizzles p.xz and p.y here to a different set of axes would cause the torus to be aligned along a different axis.

p.xz determines which axes the torus extends along; p.y determines the axis of the torus's thickness.

Subtracting t.x from the length of p.xz gives the external radial boundary, meaning t.x is the parameter that controls the ring radius of the torus. The length of p.xz and the value of p.y need to be such that the resulting length of q is less than t.y (the thickness radius) for the point to be inside the SDF.

The order of the elements that form q is irrelevant as we take the length of that vector.

Subtracting t.y from the final result gives the thickness of the torus.

It would seem that by combining any two axes and taking the length of the resulting vector, we bias the SDF towards including points along those axes.
This same type of axial combination is used for cylinders too.

Cylinder:

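The infinite cylinder in HLSL, for reference:

float CylinderSDF(float3 p, float3 c)
{
    // An infinite cylinder; c.xy offsets it in the radial plane and c.z is its radius
    return length(p.xz - c.xy) - c.z;
}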

This formula seems slightly off in that changing the values of c.xy doesn't affect the cylinder's shape but rather offsets it from the center. They could be eliminated entirely, since a translation is more easily and transparently achieved by adding an offset to the position input.

As such the only parameter that affects the cylinder shape is c.z, which changes the radius of the infinite cylinder.

Similar to the axial combination seen in the previous SDF, p.xz determines which axes the cylinder is oriented along.

A consistent thing you can notice is that subtracting a single float value from the distance output of a primitive causes a somewhat spherical, rounded shape to emerge from the SDF.

Cone:

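The cone function in HLSL, for reference:

float ConeSDF(float3 p, float2 c)
{
    // c must be normalized; it encodes the cone angle (c.x relates to the base width, c.y to the tip)
    float q = length(p.xy);
    return dot(c, float2(q, p.z));
}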

Here the two-axis combination p.xy doesn't determine which way the cone points (as you might expect, or at least as I did), but rather which plane the base of the cone is aligned with.

Instead, p.z determines which axis is used for the tip of the cone to point to.

c.x controls the width of the base, c.y controls the narrowness of the tip.

I think the dot-product is used for component-wise multiplication here as opposed to any particular geometric purpose.

Plane:

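The plane function in HLSL, for reference:

float PlaneSDF(float3 p, float4 n)
{
    // n.xyz is the (normalized) plane normal, n.w an offset along it
    return dot(p, n.xyz) + n.w;
}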

The plane formula is straightforward: it rejects values along one axis, and the other two axes are what the plane is aligned to.
With a plane normal of (0, 0, 1), for instance, only values on the XY plane are accepted into the SDF (as seen in the example).

The n.w bit is for a plane offset in/opposite to the direction of the normal.

Hexagonal Prism:

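The hexagonal prism in HLSL, for reference:

float HexPrismSDF(float3 p, float2 h)
{
    // h.x is the hexagon's radius, h.y the prism's half-thickness
    float3 q = abs(p);
    return max(q.z - h.y, max(q.x * 0.866025 + q.y * 0.5, q.y) - h.x);
}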

As before with the boxes, the abs(Position) is used to exploit the symmetry of this shape about its center.

h.x changes the radius of the hexagon.

h.y changes the thickness of the prism.

The q.z value is what controls the thickness axis.

The second set of values, q.x * 0.866025 + q.y * 0.5, is what controls the shape of the hexagon, with q.x * 0.866025 changing the width of the hexagon and q.y * 0.5 changing its narrowness.
Manipulating q.y can yield shapes like a diamond if you increase the scalar it's multiplied against.

As far as I can tell the numerical constants are magic numbers that yield the correct hexagon shape: 0.866025 is cos(30°), i.e. √3/2, and 0.5 is sin(30°), so together they act as the direction of one of the hexagon's face normals.

That's the kind of constant I'd only ever arrive at by trial and error, but that's math for you.

The q.y value controls the up vector of the hexagon. Multiplying it with a scalar < 1 decreases the height and vice versa for a scalar > 1.

Changing the axes around will change the orientation of the hexagon.

A bit later on we’ll discuss how maximum and minimum operations can be used to create intersections and unions of shapes respectively, but this is a good opportunity to see their mathematical use in an SDF.

The max here (and also in the box SDFs, but this aspect of them is more transparently visible here) is an intersection that acts as an upper bound to the values that are accepted into the SDF. Take the first max away and the hexagon becomes an infinite hexagon along the thickness axis.

Take the second max away and you’re left with an infinite diamond.

Triangular Prism:

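The triangular prism in HLSL, for reference:

float TriPrismSDF(float3 p, float2 h)
{
    float3 q = abs(p);
    return max(q.z - h.y, max(q.x * 0.866025 + p.y * 0.5, -p.y) - h.x * 0.5);
}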

This is very similar to the hexagonal prism except that the second max operation uses the original position as opposed to the absolute value of it.

Capsule/Line:

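The capsule/line function in HLSL, for reference:

float CapsuleSDF(float3 p, float3 a, float3 b, float r)
{
    // a and b are the capsule's end points, r its radius
    float3 pa = p - a;
    float3 ba = b - a;
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0); // how far along the segment the closest point lies
    return length(pa - ba * h) - r;
}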

This is a strange one, requiring 4 inputs unlike most of the others which need only 2 or 3.

The input a controls where the capsule starts, and can also serve as an offset for the capsule.

The input b controls where the capsule ends and controls the direction in which the capsule extends.

Controlling the axis along which the capsule is oriented is just a matter of changing the CapsuleStart and CapsuleEnd, as opposed to any change in the axial combinations within the actual shader math.

The primary purpose of this SDF seems to be to create a line rather than a capsule; the capsule is a neat extra you get by subtracting the single float 'CapsuleRadius' at the very end (remember what I said about subtracting a single float value giving you spherically rounded shapes?).

To compare this to another geometric operation, this seems similar to sweeping along a line with a certain radius, resulting in a capsule.

pa would give you a local vector with respect to the starting of the capsule, and ba would give you the non-normalized direction vector of the capsule, a line along which points would be accepted.

The calculation of h is what controls the actual "sweeping": it forms a ratio between the two dot products.

I wasn't initially sure why the ratio of dot products is used here, but dot(pa, ba) / dot(ba, ba) is the projection of pa onto ba expressed as a fraction of the capsule's length, i.e. how far along the segment the closest point lies. Values outside the capsule's range get clamped to the extremes of 0 and 1.

That h value is then used in the next line: pa - ba * h is the vector from the tested position to the closest point on the segment, so for positions well beyond either end the length comes out as a large positive value, meaning the position is out of range of the SDF.

As may be evident, math is hard, and I'm no mathematician, so I can only guess at the purpose of some of these operations and hope my conclusions are accurate. I'd welcome any feedback or corrections if I've made a mistake.

The clamp is what confines the SDF at the ends to taper to the capsule tips. Without it the result is an infinite cylinder.


Capped Cylinder:

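The capped cylinder in HLSL, for reference:

float CappedCylinderSDF(float3 p, float2 h)
{
    // h.x is the radius, h.y the half-height
    float2 d = abs(float2(length(p.xz), p.y)) - h;
    return min(max(d.x, d.y), 0.0) + length(max(d, 0.0));
}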

Similar to the infinite cylinder case we start with the p.xz two-axial combination here, but then we also use the abs to exploit the symmetry of it about its origin.

Changing the axes in p.xz and p.y changes the orientation of the cylinder: p.xz is the radial plane, and p.y is the up axis.

h.x controls radius of the cylinder, h.y controls the height.

The vector d is formed from a series of straight line distance checks.

The XZ plane is the plane of the radius: the length of the position on it is found and compared against the desired radius, and the same happens for p.y on the up axis against h.y. If both results are <= 0 the point is probably within the SDF, but there are a few more ops to do before we can know for sure.

The min & max ops don't seem to be strictly necessary here; removing them didn't affect the shape of my cylinder at all. (They only supply the correct negative distance for points inside the cylinder, which doesn't change where the surface itself renders.)

The max of d with 0 removes any negative components: those indicate the point is already within the SDF along that axis, but they would otherwise contribute to a large positive value when the length is taken (similar to what happened with the box SDFs), wrongly invalidating a point that is actually within the SDF.

Capped Cone, Triangles, Quads:

I couldn't really understand this one, and I couldn't get it to work either. Let's say it's been left as an exercise for the reader.

Way too much work; use a thin triangular prism instead.

Ditto for the quad; use a thin box instead.

Second Iteration:

Making models more complicated than the basic primitives would be very tedious: for every shape you needed, you'd have to plug a bunch of inputs into the main raymarching node in the right order, an order that wasn't straightforward to see and was liable to change if you ever needed to reorder your shapes. It just wasn't practical.

In this picture, each custom node has been wrapped in a Material Function, which does help in making them more reusable, as those are now accessible to any material, but you still have the issue of plugging inputs in a fragile compiler dependent order.

Doing these by code would be far faster, which defeats the purpose of using visual programming.

When I was looking for a solution to these problems I stumbled across a little gem on Ryan Brucks' blog, where he mentions that it's possible to add HLSL functions that can then be called from custom nodes.

I tossed all the SDFs (with HLSL functions wrapped around them) into ‘Common.ush’ and then was able to call these functions from the custom nodes.

This means the material functions created earlier can serve as in-engine documentation for the SDFs, while the SDFs themselves are called directly in code inside the custom nodes, removing one pin that would otherwise need connecting for every SDF (the pin of the SDF custom node/material function).
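As a rough example of what that looks like (the function and input names here are my own), the sphere from earlier just lives in Common.ush as a global function:

float SphereSDF(float3 p, float r)
{
    return length(p) - r;
}

and a custom node can then call it directly:

return SphereSDF(WorldPosition - ObjectPosition, Radius);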

The other issue still exists, though: you have to plug in the inputs for the functions you call in code, rather than having all of that bundled into a single input.

However, this method allows a single generic raymarch node to be used, with inputs added as needed, as opposed to the previous version where I tried to make raymarch nodes generic based on how many inputs a single SDF function needed, which resulted in way too much spaghetti.

It also makes compositing different SDFs into a single, more complex shape more feasible and scalable.

Of course the solution is now a half visual programming and half coding based approach, but there are no free lunches.

Distance Operations:

Now that the more elementary shapes are out of the way, we can start to do more cool stuff with the distance fields, like manipulate them with other operations in order to add, subtract and intersect them.

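In HLSL the three operations from Inigo's article are one-liners:

float OpUnion(float d1, float d2)     { return min(d1, d2); }
float OpSubtract(float d1, float d2)  { return max(-d1, d2); } // carves the first shape out of the second
float OpIntersect(float d1, float d2) { return max(d1, d2); }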
How these are used is fairly self-explanatory: the distance ops take the results of the distance fields as arguments and return the result of their respective operation to the raymarcher.

At this point I was still using the SDFs as custom nodes, but the distance operations were done using another neat bit of HLSL hackery which will be covered in another article.

The math behind them is also fairly straightforward:

Union:

The union acts as a lower bound by taking the minimum, and hence biases the output towards the more negative values. For example:

Take a sphere of radius 6 and a cube with half-extents of 4, both centered at the origin, and test the point (0, 5, 0): the sphere's SDF returns -1 and the cube's returns 1. The union takes the minimum, so it accepts the sphere's output over the cube's, and both the sphere and the cube end up rendered where previously only one would be.

Intersection:

The intersection is the opposite of the union: the maximum is used instead, which biases the output towards more positive values. Where the union accepts the volume covered by either shape, the intersection only accepts the most conservative estimate, i.e. the region that is definitely within the SDFs of both volumes.

Subtraction:

Subtraction can be seen as a special case of the intersection: by negating the values of one SDF you bias the output towards accepting only the part of the second volume that lies outside the first, which gives you subtraction of the volumes.

Domain operations:

Repetition:

This was an operation I REALLY wanted to learn, because who doesn't love infinitely repeating M.C. Escher-esque stuff like this?


An issue I ran into though is that the GLSL mod and HLSL fmod aren’t completely equivalent, which is very nicely explained in this article.
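The usual fix is a small helper that reproduces GLSL's behaviour (a sketch; the name is mine):

float3 glslMod(float3 x, float3 y)
{
    // GLSL mod: the result takes the sign of y, unlike HLSL's fmod which takes the sign of x
    return x - y * floor(x / y);
}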

I was able to perform the repetition, to a degree, but had weird artifacts like this:

To fix this, I added another function to 'Common.usf' as a replacement for the GLSL mod and used that instead, which fixed the positioning errors in the repetition but still left issues with the raymarcher for some reason.

The 9bitScience article has a warning about operations that act on the input position of an SDF: if the operation distorts distances, the value the field returns can overestimate the true distance to the surface, and the raymarcher will overshoot.

This meant that by multiplying the result of the distance field by a scalar < 1, increasing the number of steps the raymarcher takes, and tightening the precision of the accepted hit distance, I could finally get the repetition operation working correctly.

The edges of some of the boxes at the borders were still ragged though, and that stumped me for a bit until I stumbled across a video of someone demonstrating their own SDF tool.

That video showed me that the repetition function from Inigo's article might be wrong, or might just not work in Unreal's setup for whatever reason: at around the 18-19 minute mark the presenter shows the implementation used in their tool, which adds a '+ 0.5 * spacing' compensation to the domain position being supplied.
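Put together, that version of the repetition looks roughly like this (a sketch using the glslMod helper from earlier; the parameter name is mine):

float3 OpRepeat(float3 p, float3 spacing)
{
    // The + 0.5 * spacing term recentres each repeated cell around its own origin
    return glslMod(p + 0.5 * spacing, spacing) - 0.5 * spacing;
}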

This finally fixed everything for me.


Rotation/Translation:

Translation isn’t a big issue considering that the raymarching is already attached to an object position and hence moves with the actor anyway.

There are two issues with implementing the rotation operation, though.

One is that matrices can't be provided as inputs in the Material Editor, though I have seen material functions for transforming vectors that take individual basis vectors, which might work if you construct the matrix in code (the HLSL documentation covers how to do that).

The other issue is that, unlike GLSL, HLSL doesn't have a built-in function for computing the inverse of a matrix, only a transpose. The transpose equals the inverse only when the matrix is orthogonal, which isn't the case once the model matrix encodes a translation or scaling, as those do not preserve orthogonality.

This Stack-Exchange answer explains nicely why the transpose wouldn’t work for a model matrix.


However, I managed to hack in a limited amount of rotation functionality using some HLSL shader constants that are set per material/object before the shader is evaluated.

These are called ‘Primitive uniform shader parameters’ in Unreal and for people more familiar with GLSL (as I am), these are more simply called ‘uniforms’.

The documentation for them is decent though and lists all the ones you can use and a short comment about the purpose of the parameter.

The one you need here is ‘Primitive.LocalToWorld’, a matrix variable that contains the model transforms we need, in particular the rotation transform.

For more information on matrix transforms, this link has been my go-to reference for years now.

Keeping in mind that scaling can mess up distance fields (it's an operation that doesn't preserve vector length), you also need to set the scale values in the model matrix back to 1 so that they don't affect the SDFs you use the matrix on.

After that it's just a matter of multiplying it against the position input of the SDF you want rotated, and the SDF rotates when you rotate the object!
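A sketch of what that can look like inside a custom node; LocalPos (the AbsoluteWorldPosition minus the object position) is an assumed input, and BoxSDF/BoxExtents stand in for whatever SDF you are rotating:

float3x3 rot = (float3x3)Primitive.LocalToWorld;
// Renormalize the basis vectors so the actor's scale doesn't distort the distance field
rot[0] = normalize(rot[0]);
rot[1] = normalize(rot[1]);
rot[2] = normalize(rot[2]);
// Rotate the sample position into the actor's frame (transpose = inverse for a pure rotation)
float3 p = mul(LocalPos, transpose(rot));
return BoxSDF(p, BoxExtents);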

The rotation doesn't seem to play well with the raymarcher at extreme angles (as can be seen in the gif below), and it also doesn't seem to work well with the domain repetition operation if you rotate things beyond a couple of degrees.

(Gif: the rotation being applied to an individual SDF object, not both of them.)


Using SDFs with non-Euclidean norms:


These can be implemented pretty easily: make a new length function in 'Common.ush' that also takes a 'norm' float argument, and you're good to go. You can then call it from the extra SDF functions you add for the shapes that use the new norms.
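For example, a generalized length plus a torus that uses it might look like this (a sketch; the names are mine):

float LengthN(float2 p, float n)
{
    // p-norm: n = 2 gives the ordinary Euclidean length, higher values square the shape off
    p = abs(p);
    return pow(pow(p.x, n) + pow(p.y, n), 1.0 / n);
}

float TorusSDF_N(float3 p, float2 t, float n)
{
    float2 q = float2(LengthN(p.xz, n) - t.x, p.y);
    return LengthN(q, n) - t.y;
}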


The inverse multiplication inside the primitive call confused me, but I read on a forum thread on pouet that it's there to counteract the scaling of the resulting distance, which would otherwise mess with the distance field. The reason is that rotation and translation preserve the length of vectors but scaling does not, something Inigo also mentions in his article.

One issue with this operator is that it's hard to generalize: it needs to be applied both inside and outside the primitive call, for each distance field it's used on. So it has to be invoked individually for each SDF rather than wrapped up in a single function you can call once.
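For reference, the scale operation reads roughly like this in HLSL, with the sphere standing in for whichever primitive you're wrapping:

float OpScale(float3 p, float s)
{
    // Evaluate the primitive in scaled-down space, then scale the distance back up
    // so the field still returns true distances
    return SphereSDF(p / s, 1.0) * s;
}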



Deformation operations:


Displacement should be fairly obvious: it distorts the primitive SDF in some manner, so I think it's just a matter of trying out different functions and seeing what works for your intended look.

In the image above I’m using the displacement function Inigo mentions in his article.

The Blend operation’s purpose seems to be to eliminate the discontinuities from the union operation, but more simply it’s a function that allows for smooth interpolation between SDFs where the union would do a hard joining operation.
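Sketches of both, with assumed primitive and parameter names; the displacement term is the sin product Inigo uses as his example, and the blend is the usual polynomial smooth minimum:

float OpDisplace(float3 p)
{
    float d1 = SphereSDF(p, Radius);                                 // base primitive (assumed)
    float d2 = sin(20.0 * p.x) * sin(20.0 * p.y) * sin(20.0 * p.z);  // displacement term
    return d1 + d2;
}

float OpBlend(float d1, float d2, float k)
{
    // Polynomial smooth minimum: a union whose seam is rounded off instead of creased
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return lerp(d2, d1, h) - k * h * (1.0 - h);
}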


Twist and Bend:

The problem with these last two operators is that they aren't giving the expected results, even with changes to the constants being used and changes to the axes to account for the different coordinate system.
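For reference, the originals from Inigo's article translate to HLSL roughly as below (BoxSDF/BoxExtents are stand-ins, and the constant controls how aggressive the twist or bend is):

float OpTwist(float3 p)
{
    // Rotate the XZ slice by an angle that depends on height
    float c = cos(10.0 * p.y);
    float s = sin(10.0 * p.y);
    float2x2 m = float2x2(c, -s, s, c);
    float3 q = float3(mul(m, p.xz), p.y);
    return BoxSDF(q, BoxExtents);
}

float OpBend(float3 p)
{
    // Same idea, but the angle depends on X
    float c = cos(10.0 * p.x);
    float s = sin(10.0 * p.x);
    float2x2 m = float2x2(c, -s, s, c);
    float3 q = float3(mul(m, p.xy), p.z);
    return BoxSDF(q, BoxExtents);
}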

What I get are odd smeary results from the twist operation:

The Bend operation gives results that kind of look like they're working at some viewing angles but fall apart at others.

I'll keep tweaking and update this post if I get them working right, but for now that more or less does it.

Feel free to reach out to me @nightmask3 on Twitter!

Thanks for reading!

Unconventional ways to make Emissive Masks

I’m always looking for new ways to add interesting visual complexity to my projects.

Personally, I have a taste for abstract and geometric art that gives off an air of mysticism and mystery; fractals, alchemical symbols, runes and glyphs are things I find very visually stimulating and pay special attention to.

Some examples I found noteworthy are below:

(Examples: the geometric magic circles from Doctor Strange, and the glyphs from Journey.)

Making these symbols is often going to be a process that is completely dependent on what kind of game you are working on, what kind of world it is set in, what kind of feel you aim these symbols to convey and many other factors like this.

This blog post is not intended to address the semiotics and design side of these symbols; that's a separate topic for another day.

What it IS intended to do, is to provide the reader with some idea of how they might achieve cool looking effects like this in their own projects.

For the purpose of this post I’m going to be using Unreal Engine 4, but in practice I believe all the knowledge would be entirely applicable to Unity or Godot, maybe even your own custom engine, really any system that has a renderer with alpha transparency.

Using photography of real life symbols:



Often the best inspiration is found in the real world. The metal plates I photographed are used in rituals in Hindu culture, and have a very appealing geometric aspect to them in my opinion.

Here’s what the same symbols look like in UE4:



What I did was:

  1. Take some high-contrast photos of these plates, trying to eliminate all shadow and light information from the photo.
  2. Open those photos in Photoshop and edit them into what could be thought of as a mask or heightmap (if you're familiar with terrain generation this will feel familiar).

This is what the output should look like:


Then just import the texture into Unreal/engine of your choice and setup the Material to look something like this:

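In rough HLSL terms the material boils down to the mask driving both the emissive colour and the opacity (GlyphMask, GlowColor and GlowIntensity are assumed parameter names):

float mask = Texture2DSample(GlyphMask, GlyphMaskSampler, UV).r;
float3 Emissive = mask * GlowColor * GlowIntensity;
float Opacity = mask;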
Choosing to use this as a decal or just as a normal Surface material is up to you, but I personally find decals to work great for the use case of applying these glyphs/runes to the world.

Something worth taking advantage of, since Unreal supports decal normals, is generating a normal map from the mask output of Photoshop (or your editing software of choice); this adds a little more lighting information and response to the decals.

I used an online normal map generator, but I've since found out that solutions like this (which includes the likes of CrazyBump) are not advisable for production textures, as the tangent space they operate in may or may not have any correlation with Unreal's own MikkTSpace implementation, and the results are generally just bad anyway.

This results in textures that will produce incorrect/inaccurate lighting responses, especially in a PBR setup.

(And if you aren’t using a PBR setup while working on Unreal or Unity right now, you should really be using a PBR setup. Even stylized aesthetics still gain a lot from respecting PBR rules)

That about covers this method though.

Use WeaveSilk to generate masks:

This next bit is pretty straightforward, but is also fun to do!

There’s an online interactive art generator called ‘WeaveSilk’ that allows users to generate complicated geometric art with just a few drags and clicks of the mouse using algorithmic generation from user input.


This lets you generate some very interesting looking patterns and symbols with minimal effort and a high iteration speed.

Saving these symbols out is also just a right-click operation away.

Once saved, you can edit the image to remove the color information and convert it into a proper mask. The mask is then used in a similar way to the material above to create results like this in-engine:



And that’s about it for this post.

I hope that by sharing some unconventional ways to generate masks, I can give other people some ideas to help them start working on their own ways to create these cool geometric symbols and patterns we all love so much.

Hope this is helpful!

I’m on Twitter and you can reach me @nightmask3 if you need any help or have a clarification.

Vertex-based Animations using Morph Targets in UE4

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 8 months, and I wanted to share some of the techniques I had learned in this time.

The topic of discussion today is Vertex based animations.

Before we get to that, let us digress for a wider examination of animation pipelines in general, to gain a little perspective.

Animation is probably one of the most important factors in making a game feel responsive and lifelike (not to be confused with realistic). Animation is, in essence, the art of making something static feel like it is in motion.

Human beings are great at perceiving motion and as such it’s one of the more prominent ways with which games engage players, both mechanically and aesthetically.

In 3D animation, there are a few broad categories with which animation is implemented, each with their own pipelines and quirks:

  1. Vertex animation: This used to be the most widespread way to animate things back when the first wave of 3D games came out and the hardware wasn’t usually capable of handling the complicated (and expensive) concatenated transformations of skeletal animations. It’s easy to do but hard to reach good results with. It also happens to be what we’re talking about today.
  2. Skeletal animation: This method is the most commonly used for 3D animation nowadays, and when supplemented with the powerful tool of Inverse Kinematics it can achieve really smooth and organic results. It’s probably the most flexible out of all of these methods, and the sheer number of tools for this pipeline make it a very attractive option.
  3. Procedural animation: This method is a little less commonly seen, and I can guess that’s because it is probably non-trivial to implement and has very specific use cases as opposed to the other two methods. Some games that use it to good effect include Grow Home, QWOP, Overgrowth and so forth.
  4. Physically-based Simulation: Similar to the procedural animation though slightly different, this is animation that utilizes simulation of some natural substance like fluid, cloth in order to create the animation of the mesh. A popular game that uses this method is Journey with its cloth creatures and main character created almost entirely out of cloth.

With that out of the way, let us explore how to get vertex animations in UE4.

The method that is described here involves morph targets.

In a morph target animation, a “deformed” version of a mesh is stored as a series of vertex positions. In each key frame of an animation, the vertices are then interpolated between these stored positions.

That’s fairly simple to understand. And you can see why it falls under the blanket of a vertex animation.
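In shader terms the per-vertex blend is just a lerp (the names here are assumptions, not engine API):

// Each morph target stores per-vertex positions; the animation interpolates between them
float3 AnimatedPosition = lerp(BasePosition, TargetPosition, MorphWeight);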

The kind folks at Epic have written a MaxScript that enables any Editable Poly mesh with keyframed animations to have its animations written out into a texture. The only downside to this method is that the mesh can have at max 8192 vertices.

In your Engine installation folder look for “Epic Games\EngineVersion\Engine\Extras\3dsMaxScripts\VertexAnimationTools”

It’s straightforward to use and this tutorial will explain the details behind it and the material you will need to create in order to view this animation in the engine:

But just in case, here’s a screenshot of that material too:


Hope this is helpful to someone!

Cloth in UE4

 

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 6 – 8 months, and I wanted to share some of the techniques I had learned in this time.

The topic we are going to explore today is Cloth, and how to implement it in Unreal 4. Before we do that, let us examine some of the purposes that cloth is applied to within games.

Cloth is an extremely important part of a game developer's toolbox. It is a material that exists in abundance in reality, and regardless of scenario and setting, every game can probably make some use of cloth, or use cloth physics to achieve a convincing visual effect (I've seen it used for hair as well).

The screenshot below is from the game 'Journey', a personal favorite of mine and a wonderful game that takes cloth simulation and applies it to creatures, mechanics, architectural flourishes and even the main character's entire body.

(Screenshot: Journey.)

 

Unreal 4 implements cloth using the APEX Physics SDK. The technical details of this implementation are beyond the scope of this post, but suffice to say that in order to bring cloth into Unreal 4, you will need either the standalone APEX cloth tool or the APEX cloth plugin built into Max or Maya.

https://developer.nvidia.com/gameworksdownload#?dn=physx-apex-sdk-1-3-0

You will need an NVIDIA developer account to download it. It’s a quick and free registration thankfully.

This article isn’t a tutorial on how to implement cloth, only a demonstration of it, but here’s a link to the tutorial I used:

https://www.youtube.com/watch?v=uTOELBNBt04&t=1003s

He makes use of Blender to attach, rig and skin the cloth mesh, before exporting it into the APEX cloth tool.

In the cloth tool, the cloth simulation is defined by painting the max displacement values directly onto the cloth mesh. The character model must also have collisions generated for it so that the cloth can collide with the character model.

The simulation can be previewed and the environment settings tweaked to observe how it reacts under different parameters. The simulation asset can then be exported and saved as a separate file.

Import this asset into Unreal and then apply it at the Material level. The cloth should be working now.

Some important tips that hopefully will save you the time I spent tearing my hair out when I couldn’t figure out why the cloth mesh disappeared as soon as I applied the physics asset:

  1. Skin the cloth mesh. If you don’t, bad things will happen.
  2. Make sure the cloth mesh is rigged to the appropriate bone. If you don’t, bad things will happen.
  3. Make sure to disable “Add Leaf Nodes” in the Armature tab of the .FBX export settings in Blender. If you don’t, bad things will happen.

Hope this is helpful to someone!

Ground Fog in Unreal 4

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 6 months, and I wanted to share some of the techniques I had learned in this time.

My previous post dealt with implementing Distance Fog using a Post-Process material.

This time, we are going to explore how to implement a Ground Fog in Unreal 4.

Ground Fog is very important for a variety of reasons.

Here is the same scene as above, without the Ground Fog.

(Screenshot: the scene without Ground Fog.)

A couple of things you can notice:

  1. The scene still looks okay, but overall lacks any visual complexity.
  2. The color of the sand is now too repetitive and dominates the view.
  3. It's less easy to differentiate between the foreground and background; the distance fog helps somewhat, though.

So, we can see how Ground Fog can add to the overall aesthetic of a level. Let us now proceed to the implementation itself.

(Material screenshot: getting UV values for the fog texture.)

This set of material nodes is responsible for raycasting forward a certain distance (ML_Raycast), finding a world position and scaling that by NoiseSize.

(Material Function screenshot: ML_Raycast.)

This world position is then fed into MF_NormalMaskedVector, which masks the input WorldPosition with the vertex normal to produce a UV value for the moving fog texture.

(Material Function screenshot: Normal Masked Vector.)

The output of the moving fog is then multiplied in (in my case I use Add; it works here but might give weird results in other setups), and an if statement defines a world-Z cutoff for the fog.

If the worldZ of the vertex being drawn is lesser than the cutoff value, we multiply the moving fog color value into the post-process output. You can think of this as a simple if-conditional check.

if(PixelWorldZ < CutoffValue)
DisplayFog();

Then, in order to make the fog fade smoothly until the cutoff value is reached, we use another if statement to check the distance between the cutoff Z value and the current pixel world Z value. If the pixel is within the gradient fade range (as defined by GradientRange), then we lerp between the color output of the fog and 1.

if(WorldZCutoff - PixelWorldZ < GradientRange)
LerpBetweenFogColorAndSceneColor();

If the output is 1, we use only the scene color.

(Screenshot: the gradient and final material output.)

The final output of all this is multiplied into the PostProcessInput, and then fed into the Emissive Color.

This material uses the Post-Process material domain. Assign it to a Post-Process Volume, and you should be good to go.

Hope this is helpful to someone!