Technical Art Demo Reel

 

A compilation of work I’ve done using Blender, Houdini, Adobe Creative Cloud, Quixel Suite, xNormal and Unreal 4.

What follows is a breakdown of what’s happening in each shot:

1) Interactive Electricity/Lightning effect:

After watching the Unreal Dev Days talk given by Alan Willard, Senior Developer Relations Technical Artist at Epic, I wanted to try to replicate the effect he demonstrated. It depends on the interplay of Blueprints, Materials, lights, particles and sounds to achieve the final result, something I’d never coordinated before, and it seemed like a good challenge to improve my knowledge of Unreal.

For my implementation I left out the sound but, as far as I can make out, replicated everything else. I was particularly impressed by the range of options the system gives designers to tweak the effect however they want, and especially by how the same asset can safely and effectively be used in multiple scenarios to achieve a variety of different visual compositions.

The system itself isn’t really that complicated once you break it down, which to my mind makes it all the more impressive. It just involves raycasting in a random direction, spawning a new electric arc and particles if that cast hits something, and repeating ad nauseam.

Most of the magic really happens inside the Material, which doesn’t use any textures at all to produce the noise in the electricity. Instead, it overlaps Fast Gradient Noise from the built-in Noise node at different levels of scale and tiling to produce the distortion.

The electric arc itself is a spline with a simple cylinder mesh stretched along it, and the electricity Material is applied to that cylinder. This bit is accomplished using Unreal’s ‘SplineMeshComponent’ functionality.

When an arc is spawned, the spline’s start is set to its origin; if the raycast hits something, the impact location is set as the spline’s end, and the spline and its mesh are updated accordingly.
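
Put together, one iteration of that loop might look something like the following C++ sketch. This is my reconstruction rather than Willard’s actual code; the actor class, the mesh/material/particle members and MaxArcLength are hypothetical names assumed to be declared in the actor’s header:

```cpp
#include "Components/SplineMeshComponent.h"
#include "Kismet/GameplayStatics.h"

// Hedged sketch: one iteration of the arc-spawning loop.
// AElectricityEmitter, CylinderMesh, ArcMaterial, ImpactParticles and
// MaxArcLength are placeholder names, not from the original talk.
void AElectricityEmitter::TrySpawnArc()
{
    const FVector Start = GetActorLocation();
    const FVector Dir = FMath::VRand();               // random unit direction
    const FVector End = Start + Dir * MaxArcLength;   // fixed trace distance

    FHitResult Hit;
    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
    {
        // Stretch the cylinder mesh along a spline from the emitter
        // to the impact point, then apply the electricity Material.
        USplineMeshComponent* Arc = NewObject<USplineMeshComponent>(this);
        Arc->SetMobility(EComponentMobility::Movable);
        Arc->SetStaticMesh(CylinderMesh);
        Arc->SetMaterial(0, ArcMaterial);
        Arc->SetStartAndEnd(Start, Dir, Hit.ImpactPoint, Dir);
        Arc->RegisterComponent();

        // Spawn impact particles where the arc terminates.
        UGameplayStatics::SpawnEmitterAtLocation(GetWorld(), ImpactParticles, Hit.ImpactPoint);
    }
}
```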

2) Interactive fluid simulation using Materials, Blueprints and Render Targets:

This effect relies on the interaction of several systems:

  • Material system for fluid simulation – The code for the fluid simulation itself lives within Unreal’s node-based shader system. It uses the shallow water equations approach published by Jos Stam to minimize the complexity of the fluid simulation code, essentially reducing the problem from 3 dimensions to 2.5 dimensions. The basic idea is that the water is treated as a 2D grid where each grid point has a height value, and any change in height at a point is distributed to the nearby grid points (see the sketch after this list).
  • Blueprints system for interaction – The system can respond to physics objects that collide with the water plane. This is enabled by having the Blueprint actor communicate a change in water height to the Material system whenever there is a collision with the water plane.
  • Render targets – The output of the Material system is fed into a render target, which is essentially a dynamic texture that updates every frame. Render targets are also used to feed the output of one frame back into the Material system, so that the next frame can be calculated on the basis of the previous one.
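
To make the height-distribution idea concrete, here is a minimal, engine-agnostic C++ sketch of that kind of height-field update, with plain arrays standing in for the render targets. The names and constants are mine, and the real version runs in the Material graph:

```cpp
#include <array>

// Hedged sketch of a 2D height-field water step; in the actual effect this
// logic lives in the Material and the grids are render targets.
constexpr int N = 64;   // grid resolution
using Grid = std::array<std::array<float, N>, N>;

void StepWater(Grid& Height, Grid& Velocity, float Damping = 0.99f)
{
    for (int y = 1; y < N - 1; ++y)
    {
        for (int x = 1; x < N - 1; ++x)
        {
            // Pull each point toward the average of its four neighbours:
            // this is how a height change spreads to nearby grid points.
            const float Neighbours = 0.25f * (Height[y][x - 1] + Height[y][x + 1] +
                                              Height[y - 1][x] + Height[y + 1][x]);
            Velocity[y][x] = (Velocity[y][x] + (Neighbours - Height[y][x])) * Damping;
        }
    }
    for (int y = 1; y < N - 1; ++y)
        for (int x = 1; x < N - 1; ++x)
            Height[y][x] += Velocity[y][x];
}

int main()
{
    Grid Height{}, Velocity{};
    Height[N / 2][N / 2] = 1.0f;   // a "collision" raises the height at one point
    for (int Step = 0; Step < 100; ++Step)
        StepWater(Height, Velocity);   // the ripple spreads outward each step
}
```

Each step pulls every point toward the average of its neighbours, which is what makes a disturbance at one point ripple outward over the following frames.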

3) Python tool for animating cinematics:

A tool that allowed the user to create manipulators bound to different properties, which they could then animate in the Sequencer tool. This gave access to lower-level C++ properties and to properties nested inside structs, something that base Unreal does not provide.
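
Under the hood, driving an arbitrary property like that comes down to Unreal’s reflection system. The actual tool worked through the Python API, but the idea is easier to show in C++. This is a minimal sketch with placeholder names, using the newer FProperty spelling (older engine versions call it UProperty):

```cpp
#include "UObject/UnrealType.h"

// Hedged sketch of writing to a reflected property by name; the real tool
// did this from Python, and the function name here is a placeholder.
void SetFloatPropertyByName(UObject* Target, FName PropertyName, float NewValue)
{
    if (FProperty* Prop = Target->GetClass()->FindPropertyByName(PropertyName))
    {
        if (FFloatProperty* FloatProp = CastField<FFloatProperty>(Prop))
        {
            // Resolve the property's address inside the object and write to it.
            float* ValuePtr = FloatProp->ContainerPtrToValuePtr<float>(Target);
            *ValuePtr = NewValue;
        }
    }
}
```

Properties nested inside structs work the same way: resolve the FStructProperty first, then the inner property within its value.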

The tool was also built into an editor interaction mode of its own and let the user pick the actor to animate using an eyedropper tool.

It also provided a number of convenience features and modes that changed the way the manipulators moved, interacted and controlled the property they were bound to, and it let the user quickly select the different components of a manipulator.

4) Caustics Generator:

I was working on a light study of Blade Runner 2049, specifically the scenes inside the Wallace Corporation building.

In order to achieve a similar setup, I figured I’d need to learn how to generate caustics.

There are a couple of ways to achieve this that I read about:

1) Jos Stam’s method to generate periodic caustic maps: https://www.opengl.org/archives/resources/code/samples/mjktips/caustics/

This method looked interesting and, like most of Jos Stam’s work, is seminal, but it’s a bit outdated for current quality requirements.

2) Realtime caustics, described in a blog post on Medium.

This method seemed more feasible and delivered higher quality, but it still didn’t meet the bar for what I wanted in this scene, for two reasons:

1) The final quality seemed very dependent on the resolution of the wavefront mesh, the surface mesh, and the grid plane you project onto. That is not ideal for a realtime environment where you might want results that hold up in 4K, in cinematics and so forth, and I don’t think this method could scale that far, even with GPU computation.

2) The method doesn’t seem to allow much tweaking of the final look, which limits its potential as an actual technical art tool.

I kept searching until I stumbled across a tech demo presented by Ryan Brucks, Principal Technical Artist at Epic, during GDC 2017, in which he demonstrated a method of baking the results of the caustic simulation into a flipbook that could then be used at runtime at a much lower cost.

I had no idea how to achieve results like that in Unreal, but I had also seen another video in which a Houdini user demonstrated something remarkably similar.

I put two and two together, and figured that I could probably generate the simulations and flipbooks in Houdini, import them into Unreal, and then play those flipbooks back in a Material and I’d have the caustics as I needed them.

I reached out to the Houdini user in a forum post to ask about their method, and they were very helpful in describing which nodes to use and the principle behind the technique.

After a week or two of hacking away at Houdini, I had what I wanted: a tool that let you generate caustics and then bake out the flipbooks.

Doing this in Houdini has three advantages, as I see it:

1) You can bake out the flipbooks at any resolution, even very large ones that Unreal usually cannot support as Render Targets without massive slowdowns and sometimes crashes.

2) You can use different types of input noise to generate the caustics, and change a number of parameters and options to tweak their final look before baking.

3) The method is exactly the same as the one in the realtime caustics blog post above, but because it’s carried out offline, I could increase the mesh resolutions as much as I wanted until the quality met the bar I had in mind.

I also did a bit of work after this to make the flipbook textures tileable, using information from this blog post, also by Ryan Brucks:

https://www.shaderbits.com/blog/tiling-within-subuv-or-volume-textures

This gave me tileable caustic flipbooks that could be animated within Materials using a ‘Flipbook Animation’ or ‘SubUV_Function’ node, and then used as a Light Function Material, a simple Surface Material, or even a Decal Material.
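
For reference, the sub-UV lookup that such a node performs boils down to some simple math. Here is a minimal C++ sketch of it, assuming a row-major flipbook of Rows x Cols sub-images; the names are mine:

```cpp
struct UV { float U, V; };

// Hedged sketch: map a base UV plus a frame time onto one cell of a
// flipbook texture laid out as Rows x Cols sub-images.
UV FlipbookUV(UV BaseUV, float Time, float FramesPerSecond, int Rows, int Cols)
{
    const int FrameCount = Rows * Cols;
    const int Frame = static_cast<int>(Time * FramesPerSecond) % FrameCount;

    const int Col = Frame % Cols;   // cell index, left to right
    const int Row = Frame / Cols;   // cell index, top to bottom

    // Scale the base UV down into a single cell, then offset to the chosen cell.
    return { (BaseUV.U + Col) / Cols, (BaseUV.V + Row) / Rows };
}
```

Unreal’s version also blends between adjacent frames for smoother playback, which this sketch skips.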

5) “The Wanderer” character model:

I’m currently working on my own game, the working title being ‘Project Gilgamesh’.

It’s a third-person parkour platformer, and I wanted to take the opportunity to learn how to make a next-gen character model that utilizes Unreal’s newer cloth/rigid-body physics and anim dynamics features, introduced after 4.16.

The character modelling was done in Blender, based in large part on concept art for the Doctor Strange movie, with some additional influence from the Adeptus Mechanicus of Warhammer 40K.

UV mapping was also done in Blender.

The high poly normals were baked down in xNormal before being imported into Quixel.

Texturing was done using Quixel Suite and Photoshop CC. All the textures were worked on at 4K and then rendered down to 2K at export.

The final skeletal mesh had 4 material slots, meaning 4 draw calls for the player per frame. I might add one more for the emissive points on the player, like the eyes and the lights on the gas mask.

I remember reading somewhere that Unreal recommends staying within 3-5 draw calls/materials per object, though I could not find that link again for a citation, so it might be hearsay; take it with a grain of salt.

The character’s poly count is a little on the high side, even with current-gen hardware in mind, but my reasoning is that few (if any) other characters will need anywhere near this level of detail.

Most other models will be environment pieces, props, and maybe a few robotic enemy types.

The base biped model was rigged in Mixamo, brought back into Blender, skinned with clothes, gas mask and belt and then imported into Unreal.

Animations applied to the rigged, unskinned model in Mixamo could then be imported into Unreal and worked right away on the skinned version, as the two skeletons were the same.

Cloth was animated using Unreal’s new NvCloth realtime solver and cloth toolset, which allows you to paint values for the cloth simulation directly onto the mesh in the editor.

This saves a HUGE amount of time compared to Nvidia’s older APEX cloth program, which was also rather complicated and obscure, without much up-to-date documentation.

However, there are never any free lunches, and there are still some drawbacks to this cloth method: more complicated setups, such as thick cloth or multi-layered cloth, are not currently possible with the toolset as it stands.

There’s also the obvious drawback that realtime cloth is never really going to look as good as cloth simulated offline, not for a very long time at least.

On the whole, though, I think this toolset is worth learning, provided you can work around its problems or they don’t matter to your use case.

The character isn’t finished yet, but it’s close. I still want to add a few things, like an additional belt strap attached on two sides, which I’ll probably try to do with rigid bodies or anim dynamics; both let you drive additional bone/socket motion on top of the character’s skeletal animations.

6) Volumetric fog using custom node and HLSL:

A demonstration of a raymarched volumetric fog effect I’ve been working on for a while.

Based on the ShaderToy shader found here:
https://www.shadertoy.com/view/XtfSWX

The effect raymarches forward a certain distance, samples that point in world space, and uses it to generate some noise (‘triangle noise’ is what the author of the ShaderToy shader called it), then adds a specified color. The result is blended with the scene texture based on scene depth, so that nearby objects remain visible rather than occluded.
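
In outline, the marching loop looks something like the sketch below. I’ve written it as plain C++ for readability rather than pasting the Custom node’s HLSL, and the Noise3D here is a stand-in hash rather than the ShaderToy author’s triangle noise; all parameter names are mine:

```cpp
#include <cmath>

struct Vec3 { float X, Y, Z; };

// Placeholder noise so the sketch is self-contained; the real shader
// uses the ShaderToy author's "triangle noise".
float Noise3D(Vec3 P)
{
    const float n = std::sin(P.X * 12.9898f + P.Y * 78.233f + P.Z * 37.719f) * 43758.5453f;
    return n - std::floor(n);   // pseudo-random value in [0, 1)
}

// Hedged sketch of the raymarch: accumulate noise along the view ray,
// stopping at scene geometry so nearby objects stay visible.
float FogDensityAlongRay(Vec3 CameraPos, Vec3 RayDir, float SceneDepth,
                         int Steps, float StepSize)
{
    float Density = 0.0f;
    float Dist = 0.0f;
    for (int i = 0; i < Steps; ++i)
    {
        Dist += StepSize;
        if (Dist > SceneDepth)
            break;   // don't march past opaque geometry

        // Sample the noise at this world-space point along the ray.
        const Vec3 P = { CameraPos.X + RayDir.X * Dist,
                         CameraPos.Y + RayDir.Y * Dist,
                         CameraPos.Z + RayDir.Z * Dist };
        Density += Noise3D(P) * StepSize;
    }
    return Density;   // scales the fog color before blending with the scene texture
}
```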

It does face some issues with Translucency (always troubling in a deferred rendering setup), which can be solved by placing the post-process effect before the Translucency step in the pipeline, at the cost of a performance hit.

Uses the Custom HLSL shader node in the Materials system.

I was working on this before Epic shipped their own volumetric fog solution. You could probably achieve the same effect using volumetric particles and their global fog but I haven’t attempted to port it or experiment with that so far.

7) Python tool to render curve data as a spline:

A tool that allowed the user to visualize their FBX animation data in the form of a spline curve, using Unreal’s built-in spline component.
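
Feeding sampled curve positions into that component boils down to something like the following C++ sketch. The tool itself did this through Unreal’s Python API, and the function and argument names are mine:

```cpp
#include "Components/SplineComponent.h"

// Hedged sketch: rebuild a spline from positions sampled off an imported
// FBX animation curve. Samples would come from the curve data.
void RebuildCurveSpline(USplineComponent* Spline, const TArray<FVector>& Samples)
{
    Spline->ClearSplinePoints(/*bUpdateSpline=*/false);
    for (const FVector& Position : Samples)
    {
        Spline->AddSplinePoint(Position, ESplineCoordinateSpace::World, /*bUpdateSpline=*/false);
    }
    Spline->UpdateSpline();   // recompute tangents once, after all points are added
}
```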
