Raymarching using Signed Distance Fields in UE4

ezgif.com-optimize.gif
Required (or at least highly recommended) reading:

http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm

https://www.alanzucconi.com/2016/07/01/signed-distance-functions/#introduction

https://9bitscience.blogspot.com/2013/07/raymarching-distance-fields_14.html

http://blog.hvidtfeldts.net/index.php/2011/06/distance-estimated-3d-fractals-part-i/

http://mercury.sexy/hg_sdf/

Preamble:

This article assumes that you are at least somewhat familiar with how the Material Editor in Unreal 4 works and have a working knowledge of HLSL, in that I won’t be stopping to define a lot of concepts beforehand.

Nothing except for the math here is really that complicated though, and if you have any experience with a game engine in general, you should be fine. (and the math can be ignored and the formulae used as is)

You should understand how the custom node functions, at least somewhat if not in entirety, and more importantly, how to use it.

This would be a good point to check out Inigo Quilez’s article on signed distance fields or this article on how to raymarch using them.

Also good is this article on distance estimated fractals.

In case you are having trouble understanding how to implement the raymarching, this series of articles by Alan Zucconi in Unity is a great primer and is how I first got to grips with the subject.

This is what your material setup should look like in order to get the raymarching of SDFs to work (set your shading model to ‘Unlit’ when you create the material):

12.PNG

Code in the ‘Raymarch’ custom node is here.

123.PNG
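In case the linked code isn't handy, here is a minimal sketch of the kind of sphere-tracing loop that lives in the 'Raymarch' custom node. This is illustrative rather than the exact node code; MaxSteps, MinDistance and the input names (WorldPosition, CameraPosition, ActorPosition, SphereRadius) are stand-ins for whatever pins you wire up.

// Illustrative sphere-tracing loop for the 'Raymarch' custom node (not the exact code).
// WorldPosition/CameraPosition/ActorPosition/SphereRadius are assumed node inputs.
float3 rayDir = normalize(WorldPosition - CameraPosition);
float3 p = WorldPosition;
float travelled = 0.0;
float result = 0.0;                 // 1 = hit, 0 = miss
const int MaxSteps = 64;
const float MinDistance = 0.01;
const float MaxDistance = 1000.0;

for (int i = 0; i < MaxSteps; i++)
{
    float d = SphereSDF(p - ActorPosition, SphereRadius);  // distance to the nearest surface
    if (d < MinDistance) { result = 1.0; break; }          // close enough: count it as a hit
    p += rayDir * d;                                       // safe to march forward by d
    travelled += d;
    if (travelled > MaxDistance) break;                    // ray left the scene
}
return result;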

The ‘SphereSDF’ node should have the HLSL version of this function, which means vec3 becomes float3.

41.PNG
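In HLSL that translates to something like this (SphereSDF is just what I've called it here):

// Signed distance to a sphere of radius r centred at the origin.
float SphereSDF(float3 p, float r)
{
    return length(p) - r;
}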

The argument ‘p’ is the ‘WorldPosition’ argument of the raymarch node or AbsoluteWorldPosition you will be passing in from the Material graph.

There’s some stuff with local and world space to deal with that’s under ‘Coordinate spaces and conventions’.

Making material SDFs:

What I did at first was to wrap all these SDFs in custom nodes in Unreal, so that I could plug them in and out of custom nodes along with their inputs in order to pass them on.

How custom nodes function makes it necessary for the smaller SDF functions and their inputs to be plugged into the base raymarcher function, so that the HLSL translator can pick up all of them and put the function calls in the right places.

Let me be clear: this is bad and not a sustainable way to work, and not just because the way the translator names these functions is entirely opaque and liable to change.

It is an awful limitation on what is ostensibly supposed to be a visual scripting language, as it makes plugging in a distance field to use as a shape (the ideal way I envisioned this working) arduous compared to just typing out the code instead.

It won’t scale to more complex models as it currently stands, of that I am convinced, but as a learning experiment to understand how SDFs and raymarching work it’s still okay.

I figured that in case you were looking forward to modelling with SDFs right out of the box using the custom nodes I’d head your expectations off at the pass, because the material editor makes achieving that type of functionality pretty hard.

I figured out a half-solution later on, it’s located after all the SDF math.

Coordinate spaces and conventions:

For converting Inigo Quilez’s SDF functions into versions usable with transforms and world-space objects, keep in mind that all of his functions assume the object is centered at the origin; where he uses length(Position), you would instead use length(PixelWorldPosition - ActorPosition) or distance(PixelWorldPosition, ActorPosition).

The SDFs seen here use the convention of negative values being within the object and positive values failing the distance test; in other places you might see this convention reversed, so it’s something to keep in mind.

Boxes:

1.PNG
A trick that simplifies the SDF for objects symmetrical about their center (like the box and rounded box) is the use of abs(PixelWorldPosition): positions on the negative and positive sides of the center can then be treated the same, which simplifies things quite a bit.

Subtracting the box’s half-dimensions from the position being tested determines whether the point is in the box or not: if every component of the result is negative, the point MUST be within the volume, while a positive component means the point lies outside the box along that axis.

It’s simple vector math that’s going on here; addition shifts a vector along another vector, and subtraction shifts it in the opposite direction.

The result is then fed into a max-op with a zero vector to eliminate negative components before the length check, because negative components would contribute to the length just as positive ones do, and we already know a negative component means we’re inside along that axis, so we don’t care about its actual value.

The max isolates the positive components; a length is then taken, returned to the raymarching function, and compared against the minimum distance (Alan’s tutorials call this distance-aided raymarching). If it is smaller than that, the point is on (or in) the box.
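Put together, the HLSL version of the (outside-only) box distance looks roughly like this:

// Distance from p to an axis-aligned box with half-extents b, centred at the origin.
float BoxSDF(float3 p, float3 b)
{
    float3 d = abs(p) - b;        // exploit the symmetry, then compare against the half-size
    return length(max(d, 0.0));   // negative components mean "inside along that axis" and are discarded
}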

Torus:

1.PNG

Changing the swizzled components p.xz and p.y here to a different set of axes would align the torus along a different axis.

p.xz determines the plane the ring of the torus lies in, and p.y is the axis of the torus’ thickness.

Subtracting t.x from length(p.xz) gives the distance to the ring itself, meaning t.x is the parameter that controls the major (outer) radius of the torus. The length of p.xz and the value of p.y need to be such that the resulting length of q is less than t.y (the minor radius) for the point to be inside the SDF.

The order of the elements that form q is irrelevant as we take the length of that vector.

Subtracting t.y from the final result gives the thickness of the torus.

It would seem that by combining any two axes into a vector and taking its length, we bias the SDF towards including points around those axes.
This same type of axial combination is used for cylinders too.
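For reference, the HLSL torus reads like this (swap the swizzles to reorient it, as described above):

// Torus lying in the XZ plane: t.x = major radius (centre to ring), t.y = minor radius (thickness).
float TorusSDF(float3 p, float2 t)
{
    float2 q = float2(length(p.xz) - t.x, p.y);
    return length(q) - t.y;
}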

Cylinder:

Capture.PNG

This formula looks like it is in error at first, because changing the values of c.xy doesn’t affect the cylinder shape but rather offsets it from the center; c.xy is really just a built-in translation, and it can be dropped entirely, since a translation is more easily/transparently achieved by adding an offset to the position input.

As such, the only parameter that affects the cylinder shape is c.z, which changes the radius of the infinite cylinder.

Similar to the axial combination seen in the previous SDF, the p.xz combination means the cylinder extends along the remaining axis (y here); swap the swizzle to orient it differently.

A consistent thing you can notice is that subtracting a single float value from a distance output of a primitive causes a shape of somewhat spherical nature to emerge from the SDF.
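The HLSL version of the infinite cylinder is a one-liner:

// Infinite cylinder along Y: c.xy offsets it in the XZ plane, c.z is the radius.
float CylinderSDF(float3 p, float3 c)
{
    return length(p.xz - c.xy) - c.z;
}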

Cone:

Capture.PNG

Here the two-axis combination p.xy doesn’t determine which way the cone points (as you might expect, or at least as I did), but rather which plane the base of the cone is aligned with.

Instead, p.z determines the axis the tip of the cone points along.

c.x controls the width of the base, c.y controls the narrowness of the tip.

The dot product isn’t just component-wise multiplication here: with c normalized, dot(c, float2(q, p.z)) is the signed distance, in the 2D (q, p.z) plane, from the point to the line through the origin whose normal is c, i.e. to the slanted edge of the cone.
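Here is the HLSL translation of the cone function from Inigo’s article (c has to be normalized):

// Infinite cone opening along Z: c is the normalised 2D normal of the cone's slanted edge.
float ConeSDF(float3 p, float2 c)
{
    float q = length(p.xy);
    return dot(c, float2(q, p.z));
}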

Plane:

Capture.PNG

The plane formula is straightforward: the dot product with the plane normal measures how far the point lies along that normal, so only points on (or behind) the plane pass the distance test, and the two axes perpendicular to the normal are what the plane is aligned to.
The plane normal being (0, 0, 1), for instance, will only admit values on the xy plane into the SDF (as seen in the example).

The n.w bit offsets the plane along (or opposite to) the direction of the normal.
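In HLSL the plane is also a one-liner (n.xyz must be normalized):

// Plane with unit normal n.xyz, offset n.w along that normal.
float PlaneSDF(float3 p, float4 n)
{
    return dot(p, n.xyz) + n.w;
}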

Hexagonal Prism:

Capture.PNG

As before with the boxes, the abs(Position) is used to exploit the symmetry of this shape about its center.

h.x changes the radius of the hexagon.

h.y changes the thickness of the prism.

The q.z value is what controls the thickness axis.

The second set of values, q.x * 0.866025 + q.y * 0.5, is what controls the shape of the hexagon, with q.x * 0.866025 changing the width of the hexagon and q.y * 0.5 changing the narrowness of the hexagon.
Manipulating q.y can yield shapes like a diamond if you increase the scalar it’s multiplied against.
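The full HLSL version, for reference:

// Hexagonal prism: h.x = hexagon radius, h.y = half-thickness along Z.
float HexPrismSDF(float3 p, float2 h)
{
    float3 q = abs(p);
    return max(q.z - h.y, max(q.x * 0.866025 + q.y * 0.5, q.y) - h.x);
}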

1.PNG
As far as I can tell the numerical constants look like magic numbers, but they aren’t arbitrary: 0.866025 is √3/2 = cos(30°) and 0.5 is sin(30°), i.e. the components of the unit normal of one of the hexagon’s faces, so q.x * 0.866025 + q.y * 0.5 measures distance along that face normal.

Seen that way the constants stop being quite so mysterious, but that’s math for you.

The q.y value controls the up vector of the hexagon. Multiplying it with a scalar < 1 decreases the height and vice versa for a scalar > 1.

Changing the axes around will change the orientation of the hexagon.

A bit later on we’ll discuss how maximum and minimum operations can be used to create intersections and unions of shapes respectively, but this is a good opportunity to see their mathematical use in an SDF.

The max here (and also in the box SDFs, but this aspect of them is more transparently visible here) is an intersection that acts as an upper bound to the values that are accepted into the SDF. Take the first max away and the hexagon becomes an infinite hexagon along the thickness axis.

1.PNG
Take the second max away and you’re left with an infinite diamond.

1.PNG
Triangular Prism:

1.PNG

This is very similar to the hexagonal prism except that the second max operation uses the original position as opposed to the absolute value of it.
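Side by side with the hexagonal prism, the HLSL looks like this:

// Triangular prism: h.x = triangle size, h.y = half-thickness along Z.
// Note the second max uses p.y rather than abs(p).y, unlike the hexagonal prism.
float TriPrismSDF(float3 p, float2 h)
{
    float3 q = abs(p);
    return max(q.z - h.y, max(q.x * 0.866025 + p.y * 0.5, -p.y) - h.x * 0.5);
}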

Capsule/Line:

Capture

This is a strange one, requiring 4 inputs unlike most of the others which need only 2 or 3.

The input a controls where the capsule starts, and can also serve as an offset for the capsule.

The input b controls where the capsule ends and controls the direction in which the capsule extends.

Controlling the axis along which the capsule is oriented is just a matter of changing the CapsuleStart and CapsuleEnd, as opposed to any change in the axial combinations within the actual shader math.

The primary function of this SDF seems to be to create a line rather than a capsule; the capsule is a neat extra that you gain by subtracting the single float value ‘CapsuleRadius’ at the very end (remember what I said about how subtracting a single float value gives you shapes of a spherical nature?).

To compare this to another geometric operation, this seems similar to sweeping along a line with a certain radius, resulting in a capsule.

pa gives you the tested point relative to the start of the capsule, and ba gives you the non-normalized direction vector of the capsule, the segment along which points are accepted.

The calculation of h is what controls the actual “sweeping”: dot(pa, ba) projects the point onto the capsule’s axis, and dividing by dot(ba, ba) (the squared length of the axis) turns that projection into a 0-1 parameter along the segment.

The ratio of dot products, then, locates the closest point on the capsule’s axis; the clamp constrains it to the segment itself, so anything beyond either end gets the extreme value of 0 or 1 and is measured against the nearest endpoint.

In turn, h is used in the next line: pa - ba * h is the vector from that closest point to the tested position, and its length (minus the radius) is the distance to the capsule, which at the clamped extremes naturally produces the rounded end caps.

As may be evident, math is hard, and I’m no mathematician, so I can only guess at the purpose of some of these operations and hope that my conclusions are accurate. I would welcome any feedback or corrections if I made a mistake, so feel free to provide them.

The clamp is what confines the SDF at the ends to taper to the capsule tips. Without it the result is an infinite cylinder.

2.PNG
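The whole capsule function in HLSL, with the pieces discussed above:

// Capsule (or line, if r = 0) from a to b with radius r.
float CapsuleSDF(float3 p, float3 a, float3 b, float r)
{
    float3 pa = p - a;                                     // point relative to the capsule start
    float3 ba = b - a;                                     // capsule axis (not normalised)
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);  // parameter of the closest point on the segment
    return length(pa - ba * h) - r;                        // distance to that point, minus the radius
}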

Capped Cylinder:

2

Similar to the infinite cylinder case, we start with the p.xz two-axis combination here, but we also use abs to exploit the shape’s symmetry about its origin.

Changing around the axes in p.xz and p.y changes the orientation of the cylinder: p.xz forms the radial plane, and p.y is the up axis.

h.x controls radius of the cylinder, h.y controls the height.

The vector d is formed from a series of straight line distance checks.

The xz plane is the plane of the radius: the length of the position on it is found and compared against the desired radius h.x, and the same happens for p.y on the up axis against h.y. If both results are <= 0 the point is inside the cylinder, but there are a few more ops to do before the final distance comes out.

The min & max ops (the min(max(d.x, d.y), 0.0) term) don’t seem to be necessary here; removing them didn’t affect the shape of my cylinder at all. That is most likely because the term is zero everywhere outside the surface and only supplies the correct negative distance inside the cylinder, which the raymarcher never relies on.

The max of d with 0 removes any negative components, as those indicate the point is within the SDF along that axis anyway, but they would contribute a large positive value once the length is taken, similar to what happened with the box SDFs (thus invalidating a point which is actually within the SDF).
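The capped cylinder in HLSL:

// Capped cylinder along Y: h.x = radius, h.y = half-height.
float CappedCylinderSDF(float3 p, float2 h)
{
    float2 d = abs(float2(length(p.xz), p.y)) - h;
    return min(max(d.x, d.y), 0.0) + length(max(d, 0.0));
}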

Capped Cone, Triangles, Quads:

2
I couldn’t really understand this one, and I couldn’t get it to work either. Let’s say it’s been left as an exercise for the reader.

2
Way too much work, use a thin triangular prism instead.
Capture
Ditto for the quad, use a thin box instead.

Second Iteration:

Making models more complicated than the basic primitives would be very tedious, considering that for every shape you needed, you’d have to plug a bunch of inputs into the main raymarching node in an order that wasn’t straightforward to see and was liable to change if you ever reordered your shapes. It just wasn’t practical.

1234.PNG
In this picture, each custom node has been wrapped in a Material Function, which does help in making them more reusable, as those are now accessible to any material, but you still have the issue of plugging inputs in a fragile compiler dependent order.

Doing these by code would be far faster, which defeats the purpose of using visual programming.

When I was looking for a solution to these problems I stumbled across this little gem on Ryan Brucks’ blog, where he mentions that it is possible to add HLSL functions that can then be called from custom nodes.

1234.PNG
I tossed all the SDFs (with HLSL functions wrapped around them) into ‘Common.ush’ and then was able to call these functions from the custom nodes.

This means that the material functions created earlier could serve as an in-engine documentation for the SDFs, and you can then call those SDFs directly with code in the custom nodes, removing 1 pin you need to connect for every SDF (the pin of the SDF custom node/material function) from the process.

DQv1AojVoAET0Wc
The other issue, having to plug in the inputs for the functions you call in code rather than having all of that bundled into a single input, still exists.

However, this method allows a single generic raymarch node to be used, with inputs added as needed, as opposed to the previous version where I tried to make raymarch nodes generic based on how many inputs a single SDF function needed, which resulted in way too much spaghetti.

It also makes the process of compositing different SDFs into a single more complex shape, more feasible and scalable.

Of course the solution is now a half visual programming and half coding based approach, but there are no free lunches.

Distance Operations:

Now that the more elementary shapes are out of the way, we can start to do more cool stuff with the distance fields, like manipulate them with other operations in order to add, subtract and intersect them.

2
How these are supposed to be used is fairly self-explanatory: the distance ops take the results of the distance fields as arguments and return the result of their respective operations to the raymarcher.

dRVn9xDtqF-iloveimg-compressed
In this screenshot I am still using the SDFs as custom nodes, but the distance operations are done using another neat bit of HLSL hackery which will be covered in another article.

The math behind them is also fairly straightforward:

Union:

The union acts as a lower bound by taking the minimum, and hence biases the output towards the more negative values. For example:

Take a sphere of radius 6 and a cube extending 4 units from its center along each axis, both centered at the origin, and the point (0, 5, 0). The sphere and cube SDFs give -1 and 1 respectively for this point; the union takes the minimum and hence accepts the sphere’s output instead of the cube’s, so the point counts as inside the combined shape. In this way BOTH the sphere and the cube get rendered where previously only one would be.

Intersection:

The intersection is the opposite of the union: the maximum is used instead, which biases the output towards more positive values. So where the union accepts the volume covered by either shape, the intersection only accepts the most conservative estimate, i.e. the regions that are DEFINITELY within the SDFs of both volumes.

Subtraction:

Subtraction can be seen as a special case of the intersection: by negating the values of one SDF, you bias the output towards accepting only the regions of the second shape that lie outside the first, which gives you subtraction of the volumes.
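As HLSL helpers, the three of them together are tiny (these are the versions I dropped into ‘Common.ush’; the names are mine):

// Distance operations: each takes the outputs of two distance fields.
float OpUnion(float d1, float d2)     { return min(d1, d2); }   // both shapes
float OpIntersect(float d1, float d2) { return max(d1, d2); }   // only the overlap
float OpSubtract(float d1, float d2)  { return max(-d1, d2); }  // carve d1 out of d2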

Domain operations:

Repetition:

12345
This was an operation I REALLY wanted to learn, because who doesn’t love infinitely repeating M.C. Escher-esque stuff like this?:

HighresScreenshot00002.png

An issue I ran into though is that the GLSL mod and HLSL fmod aren’t completely equivalent, which is very nicely explained in this article.

I was able to perform the repetition, to a degree, but had weird artifacts like this:

DQyhai4VoAInCh_
To fix this, I added another function to ‘Common.ush’ as a replacement for the GLSL mod and used that instead, which fixed the positioning errors in the repetition, but I still had issues with the raymarcher for some reason:

DQ1NjdyUIAId2I3
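For reference, the floor-based replacement for GLSL’s mod that I added to ‘Common.ush’ is along these lines (ModGLSL is just my name for it):

// HLSL's fmod truncates towards zero, GLSL's mod uses floor, so they disagree for negative inputs.
// This reproduces GLSL's behaviour: mod(x, y) = x - y * floor(x / y).
float3 ModGLSL(float3 x, float3 y)
{
    return x - y * floor(x / y);
}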
The 9bitScience article had this warning about operations that act on the input position of the SDFs:

we.PNG

This meant that by multiplying the result of the distance field by a scalar < 1, increasing the number of steps the raymarcher takes, and tightening the accepted-distance precision of the raymarcher, I could finally get the repetition operation to come out correctly:

HighresScreenshot00002.png
The edges of some of the boxes near the borders are still raggedy though, and that stumped me for a bit until I stumbled across this video:

The video suggested that the repetition function shown in Inigo’s article might be wrong, or might not work in Unreal’s setup for whatever reason: at about 18-19 minutes in, he shows the implementation they use in their tool, which adds a ‘+ 0.5 * spacing’ compensation to the domain position being supplied.
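With that compensation added, the repetition operator ends up looking something like this (ModGLSL is the floor-based mod from earlier; the returned position is what you feed into the primitive):

// Domain repetition with the half-cell offset, so each copy sits centred in its cell.
float3 OpRepeat(float3 p, float3 spacing)
{
    return ModGLSL(p + 0.5 * spacing, spacing) - 0.5 * spacing;
}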

2345.PNG
This finally fixed everything for me.

HighresScreenshot00003.png

Rotation/Translation:

412
Translation isn’t a big issue considering that the raymarching is already attached to an object position and hence moves with the actor anyway.

There are two issues with implementing the rotation operation though.

One is that matrices can’t be provided as inputs in the material editor, though I have seen some material functions for transforming vectors that take individual basis vectors, which might work if you construct the matrix in code (the HLSL documentation for which you can find here).

The other issue is that, unlike GLSL, HLSL doesn’t have a built-in function for inverting matrices, only a transpose. The transpose would work as an inverse if the model matrix were purely a rotation (i.e. orthogonal), but that isn’t the case once the model matrix encodes a translation or scaling, as those do not preserve orthogonality.

This Stack-Exchange answer explains nicely why the transpose wouldn’t work for a model matrix.

Capture

However, I managed to hack in a limited amount of rotation functionality using some HLSL shader constants that are set per material/object before the shader is evaluated.

These are called ‘Primitive uniform shader parameters’ in Unreal and for people more familiar with GLSL (as I am), these are more simply called ‘uniforms’.

The documentation for them is decent though and lists all the ones you can use and a short comment about the purpose of the parameter.

The one you need here is ‘Primitive.LocalToWorld’, a matrix variable that contains the model transforms we need, in particular the rotation transform.

For more information on matrix transforms, this link has been my go-to reference for years now.

12.PNG
Keeping in mind that scaling can mess up distance fields (as it’s an operation that doesn’t preserve vector length), you also need to set the scale values in the model matrix back to 1 so that they don’t affect the SDFs you use them on.

After that it’s just a matter of multiplying it against the position input for the SDF you want to be rotated, and you have rotation of the SDF when you rotate the object!
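As a sketch, the custom node code ended up looking something like this. Treat it as illustrative: BoxSDF, BoxExtents and ActorPosition are assumed inputs, and depending on your engine version and row/column conventions you may need to flip the multiplication order or transpose the matrix.

// Sketch: strip the scale out of the actor's transform, then rotate the SDF sample point with it.
// Assumes Primitive.LocalToWorld is accessible from the custom node, as described above.
float3x3 rot = (float3x3)Primitive.LocalToWorld;
rot[0] = normalize(rot[0]);   // set the scale of each basis vector back to 1
rot[1] = normalize(rot[1]);
rot[2] = normalize(rot[2]);
float3 rotatedP = mul(p - ActorPosition, rot);   // p = world-space sample position; order may need flipping
return BoxSDF(rotatedP, BoxExtents);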

The rotation doesn’t seem to play well with the raymarcher at extreme angles (as can be seen in the gif below), and it also doesn’t work well with the domain repetition operation if you rotate things beyond a couple of degrees.

Rotation.gif
It’s being applied to an individual SDF object here, not both of them


Using SDFs with non-Euclidean norms:

Capture.PNG

These can be implemented pretty easily: add a new length function that also takes a ‘norm’ float argument to ‘Common.ush’ and you’re good to go; you can call it from the extra SDF functions you add for the shapes that use the new norms.
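A sketch of what that length function and, say, a ‘Torus88’ built on top of it could look like:

// Length under an arbitrary norm n (n = 2 gives the ordinary Euclidean length).
float LengthN(float2 v, float n)
{
    v = pow(abs(v), n);
    return pow(v.x + v.y, 1.0 / n);
}

// Example: a torus measured with the 8-norm for both radii.
float Torus88SDF(float3 p, float2 t)
{
    float2 q = float2(LengthN(p.xz, 8.0) - t.x, p.y);
    return LengthN(q, 8.0) - t.y;
}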

41

55.PNG
Torus88

126
The inverse multiplication inside the primitive call confused me, but I read in a forum thread on pouet that it is there to counteract the scaling of the resulting distance, which would otherwise mess up the distance field. The reason is that rotation and translation preserve the length of vectors but scaling does not, as Inigo also mentions in his article.

One issue with this operator is that it’s hard to generalize, because it has to be applied both inside and outside of each primitive call. Thus it needs to be invoked on an individual basis for each SDF, as opposed to having a single function you can call once.
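For example, a scaled sphere ends up being its own little wrapper (SphereSDF is the function from earlier): the division scales the domain going in, and the multiplication un-scales the distance coming out.

// Scaling has to wrap each primitive call individually, otherwise the field is no longer a true distance.
float ScaledSphereSDF(float3 p, float s, float radius)
{
    return SphereSDF(p / s, radius) * s;
}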

To6Hcuj862.gif


Deformation operations:

65.PNG

Displacement should be obvious: it serves to distort the primitive SDF in some manner, so I think it’s just a matter of trying out different functions and seeing what works for your intended look.

125
In the image above I’m using the displacement function Inigo mentions in his article.
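That displacement function is just a product of sines added onto the primitive’s distance:

// Displacement from Inigo's article: add a sine-based wobble to the primitive's distance d.
float DisplaceSDF(float3 p, float d)
{
    float displacement = sin(20.0 * p.x) * sin(20.0 * p.y) * sin(20.0 * p.z);
    return d + displacement;
}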

The Blend operation’s purpose seems to be to eliminate the discontinuities from the union operation, but more simply it’s a function that allows for smooth interpolation between SDFs where the union would do a hard joining operation.
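The usual polynomial smooth-minimum does the job here; a sketch, with k as the blending width:

// Polynomial smooth minimum: blends two distance fields over a region of width k.
float OpBlend(float d1, float d2, float k)
{
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return lerp(d2, d1, h) - k * h * (1.0 - h);
}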

giphy

Twist and Bend:

123
The problem with these last two operators is that they aren’t giving the expected results, even with changes to the constants being used and changes to the axes to account for the different coordinate systems.

What I get are odd smeary results from the twist operation:

155
156.PNG
And results that kind of look like they’re working at some viewing angles but fall apart at others from the Bend operation.

Will keep tweaking and update this post if I get them to work right, but for now that more or less does it.

Feel free to reach out to me @nightmask3 on Twitter!

Thanks for reading!


Interactive Material and Element system from Breath of the Wild

One of the games that came out recently that surprised me in terms of clever design and depth of its gameplay systems was Legend of Zelda: Breath of the Wild.

tumblr_o8ui2sXPJG1vxh6q9o1_500.gif

After watching the GDC 2017 talk given by the three lead developers of BotW, in particular the section given by the lead programmer:

This talk blew my mind when I first saw it.

The simplicity of the system, and the potential it possessed, made this method seem like something every game going forward that wants a fully interactive world should have, in my opinion.

I started thinking about how you might implement something like this in Unreal, and moreover how you might implement it in a way that allows for ease of use and scaling as you added more materials and elements.

This also coincided with the Unreal Summer Game Jam, and I decided to take the opportunity to create a small game/proof of concept for this system:

This is what I came up with:

ezgif.com-video-to-gif (1).gif
The rules of this system are stated quite simply and don’t leave much room for error:

1. Materials can interact with Elements
2. Elements can interact with other Elements
3. Materials cannot interact with other Materials

The system as I implemented it in Unreal has a few basic pieces:

1) Material components – Registered as a C++ base class, this is subclassed into various different Material types like WoodMaterial, MetalMaterial etc in Blueprints.

Material
2) Element components – Registered as a C++ base class, this is subclassed into various different Element types like FireElement, WaterElement etc in Blueprints.

Element

3) Material/Element states – Every component has a state enumeration variable, which by default is “Active” and represents that material/element after being exposed to some element, eg. “Burning”, “Wet”, “Electrocuted” etc. (these are Material state examples, I haven’t quite found a use for Element states yet beyond “Active” and “Disabled” but I’m sure they’ll come in handy at some point)

4) Material and Element types – Another enumeration variable used to identify various types of elements and materials. This variable is chosen at design time.

StateTypeClass
5) Material Class/Element Class – Another variable that is chosen at design time, this is used to know what subclass of material/element to actually use/spawn. This is also why there is a ‘CreatedMaterial’ or ‘CreatedElement’ variable, as the component in the details panel is only a base Material/Element type, and the actual component that you need is created at runtime and stored in ‘CreatedMaterial’ or ‘CreatedElement’ respectively.

I’m fairly certain there’s better ways to do this that aren’t as hacky, but it worked for me.

It’s also a bit unintuitive to have to select both a ‘Wood’ type and a ‘WoodMaterial’ material class; you’d usually expect to do one or the other, not both. However, both values are needed by the system, in different places, for it to function properly.

It’s probably something you can abstract away with some boilerplate if checks, I just didn’t find it necessary to implement for a prototype.

An unexpected benefit of doing it in this somewhat unintuitive way is that you’d never need to change your Material or Element component classes when you add new types of Materials/Elements. If you were to do if-checks based on the ‘Type’ being selected, say, then you’d need to add a new if-check for every type of material/element you add, which can quickly get out of hand. The same would apply vice-versa if you were using the Class instead.

This approach does introduce a greater chance of user error: leaving one of those two properties unselected will cause silent errors that are hard to find, but if that’s a tradeoff you can live with, more power to you.

6) Blueprint subclasses of Material/Element – Having the actual Material/Element behaviors defined in BPs allows designers to utilize some handy flow-control systems that exist in BP and makes for intuitive and scalable scripting of the interactions between different Materials and Elements.

A picture is worth a thousand words, so here’s what the BP of the wood Material component looks like:

BP

What’s happening in the gif is that the Player is tagged as a wood Material, and walks into a volume tagged as a fire Element.

This causes them to catch on fire (as wood in fire typically does),  and when that’s done it sets the player to a ‘burning/on fire’ state.

123.PNG

This allows for different objects to own Material or Element components, have a type (Material example would be wood, Element example would be fire), and then allow for designers to implement different types of response behavior for when an Element and Material interact, in Blueprints.

Having state values associated with both Materials and Elements also means you can script additional or altogether different interactions based on the state of the Material/Element in question, for example:

A wood material in its normal active state on contact with water does nothing, but a wood material in a ‘burning’ state on contact with water, has the fire removed.

d1cd541d82a24cfe97726178b31f57eb.gif

It also uses the neat visual flow control of the ‘Switch-On-Enum’ nodes to keep things simple and compartmentalized for designers.

How the actual functions of the Material/Element components are triggered is done using the BeginOverlap events from colliders, which looks something like this:

1234.PNG

Something I did for ease of use was to create a BP class which has a collider, Static/Skeletal mesh/Particle system and Material/Element component, and then designers can easily create new interactive objects by inheriting from this BP and switching around the mesh/particle system and selecting different Material/Element types.

This works okay for demo purposes but in a production environment you’d probably make those as C++ base classes too, as there’s nothing happening in the interactive item base classes that you couldn’t easily transcribe to code.

So essentially, any class that wants to use the Material/Element components just needs to have a collider, hook up the BeginOverlap events of that collider to the Material/Element ‘OnStateChange’ events, and provide the appropriate arguments from the object that caused the overlap.

Here’s what that looks like for a typical interactive object:

1235.PNG
This is an Element item, which is why it checks for both Element and Material state changes from the other object. A Material item would only check for Element state changes. (remember the three rules!)

 

A final tidbit, though it doesn’t have anything to do with the interactive Material/Element system per se, is how to get particles to spawn all over a skeletal mesh, which is an invaluable part of the Cascade particle system and is necessary to sell the effect of things interacting, in my opinion.

fire
The circled out Cascade emitter is where all the magic is, the details panel in the lower left is where you choose the parameter name that will point to the target actor of this particle effect.

When you want to actually create the effect, it’s just a matter of spawning the appropriate particle system, and setting that parameter using the name you assigned within the particle system earlier, to the actor that you want to be covered in fire/water/etc.

12356.PNG

That’s it for this post; hope this provided some insight into how to make objects feel more interactive and get into your games more of that lovely ‘game juice’ feel that Nintendo manages to deliver time and again.

Hope this is helpful!

I’m on Twitter and you can reach me @nightmask3 if you need any help or have a clarification.

3D Physics Engine using GJK, EPA and Sequential Impulse Solver

ac30ccbfcbacf1cabaf88eb85cf815af.gif
GitHub repo can be found here: https://github.com/Nightmask3/Physics-Framework

I had to make a physics engine for one of my classes at DigiPen (CS550 – Physics Simulations).

The requirements of this physics engine were:

1) Have a broadphase collision detection implementation
2) Have a method of resolving collision
3) Must use three dimensional rigid body physics

Overview

The engine uses an Entity-Component system that communicates using an Observer pattern.

A good primer on the observer pattern can be found here: http://gameprogrammingpatterns.com/observer.html

Everything in the engine that wishes to listen to events, must inherit from the interface/abstract base class Observer.

BaseHierarchy

A code snippet of the Observer:

Observer

Everything that wants to send events must own a Subject for that event:
Subject.PNG
Subjects can have lists of Observers who subscribe and listen for events fired in the future, and can also send one-time “fire and forget” events to Objects.
This allows for event-driven communication between Observers and Objects (which most things in the engine are).

All the managers in the engine derive from Observer, allowing for them to listen to events from GameObjects, the Engine, components etc.

Managers

The component hierarchy is fairly typical:

Component.PNG

Integration:

By default, the engine uses a semi-implicit Euler integration method. RK-4 was implemented, but never required to be used. Same goes for a Verlet integrator that was also implemented.

There are some caveats to integration, such as colliders that are marked ‘Static’ not being integrated. This is an issue of poor component design, in that I require objects which don’t really need physics (namely static objects) to still own a Physics component, as it contains some data that is necessary for collision detection.

An improvement to this would be to decouple the Collider and Physics components dependencies so that a Collider could function without a Physics component also belonging to the owner.

dc8b1c2d163317c581e82101c3e560bf.gif
Collision Detection:

The Gilbert-Johnson-Keerthi (GJK) algorithm is used to provide a Boolean check of whether the two convex shapes that are being tested are intersecting. The strength of GJK is that it can be manipulated completely based on the type of support function (or) support map being used.

Originally, I was using the naïve method of looping through all vertices and getting the dot product of each vertex against the search direction and finding the vertex furthest along that direction. However, for a shape with the spatial symmetry about all three axes (i.e. a cube or a sphere) it is possible to use another method involving an ‘Extents’ approach, which just multiplies the sign of the search direction against the half size of the cube (or the radius of the sphere). In this engine I have only implemented cubes, so this method proved sufficient.

3bf5069dc45722bc50635596d1f7fa04.gif
EPA:

If/when GJK finds a simplex that encloses the origin, it passes the simplex onto EPA which attempts to locate the closest point possible to the origin in the Minkowski space, which should give the contact point, in theory.

In practice, due to the limitations of 32-bit floating point precision and math, EPA is very prone to numerical error, and as Dirk Gregorius has stated on various forums, it is a method he does not personally like. I began to see why when I encountered some situations whose blame falls squarely on the imprecision of EPA and its inability to return a consistent contact point, or one that can be guaranteed to be close to the center of mass.

It’s my belief that persistent manifolds are in fact a way to make up for the imprecision EPA introduces into the resolution process, though the error inherent in impulse solvers exacerbates this as well.

When EPA returns a point that is within a tolerable limit of the origin, the contact point is extrapolated using barycentric projection. There is an iteration limit of 50 to prevent the EPA from cycling infinitely.

Collision Resolution:

a34d3109ce992adf4334798e88107465.gif

The contact that is generated by EPA is registered using a contact/collision constraint.

This constraint is solved using a method given by Erin Catto in his paper “Iterative Dynamics with Temporal Coherence”. It involves a sequential impulse method that comes down to a linear complementarity problem, which must be solved using a numerical solver of some type.

The type of solver used in the paper, and here, is a projected Gauss-Seidel solver, which is shown to converge faster for this use case than a Jacobi solver.

The solver performs velocity correction to ensure the velocity-level constraint is satisfied and objects will no longer move into each other, and the penetration is resolved using a Baumgarte stabilization method, which massages the constraint forces to get them to do “virtual work”.

There is also a slop term that is provided on the Baumgarte stabilization to allow objects to penetrate a bit before we apply the Baumgarte term.

Issues:

Lack of persistent manifolds and numerical issues with GJK and EPA mean that there are still some penetrations that can happen in some situations.

Additionally the issue of jitter can cause penetrations to happen. I plan on continuing with the implementation of this engine at least until I can obtain stable stacking, just to teach myself as well as to have that on my portfolio.

Conclusion:

I am pretty satisfied with the final state of my engine. It’s far from perfect, but considering that last year I spent upwards of three months trying to implement OBBs using SAT and I failed utterly, the fact that I managed to implement a working engine with detection and response is honestly enough for me.

Videos of my gameplay programming work

This was developed during a student project at DigiPen, the game is an exploration-adventure game, and the character is interacting with the Jellyfish in order to solve a puzzle.

I implemented everything seen in this video, including:

  • Scripting in Blueprints for interaction between player input and player animation state machine
  • Scripting for the creature response to the player and movement AI
  • Scripting for the blue energy jump pad
  • All Character/creature/environment art, VFX, animations seen in the video

Part of the same project as the previous video, here the player completes a quest for an NPC by filling a pot with some water, and completes a puzzle by bringing a keystone to the lock-pedestal.

I implemented everything seen in the video including:

  • Scripting for interaction with the chest and objects the player can pick up (pot and the keystone)
  • All items that can be carried inherit from a Parent Blueprint with all the behavior to detect whether or not the player has attempted to pick it up, a Child Blueprint can then define what the object actually does when the player picks it up as well as the static mesh/collider etc.
  • Scripting, animations and animation state machine handling for the pickup and throwing actions
  • Scripting for interaction with NPC and the quest
  • Scripting for the puzzle
  • All art, VFX in the video except for the trail particles the player leaves behind.

An early prototype I made for a character movement ability. The player is creating this ice/crystal path in front of them using ‘mana’ as a resource to build it. The initial idea revolved around this as the short-term gameplay loop.

  • Implemented the scripting for the path creation on player input, it was built for a controller primarily and involved using the analog sticks to direct the path while holding down a button to create it.
  • Used the SplineMeshComponent provided in Unreal to define a spline that is updated dynamically in-game as the player extends the path. The component takes the spline and deforms a mesh along it, which gives the path its smooth and sinuous appearance.
  • The player could fly whenever they were above this path, allowing them to use it like an actual path/shortcut through the world.
  • Also implemented the survival gameplay seen in the HUD bars, which involve an Exposure stat that increased whenever the player was directly exposed to the sun, a Hydration stat that depleted over time and required the player to drink water or otherwise sustain damage to Health.
  • All art and VFX seen are created by me.
  • Did NOT implement the character model or animations though, those are provided by Unreal in the Third Person Template.

 

Fluid Simulation using SPH and OpenCL

Here’s a video of a fluid simulation I made:

This post is going to talk about Fluid Simulation using Smoothed Particle Hydrodynamics(SPH) implemented using OpenCL 1.2.

If you don’t want to read any of this and get right to the code, here it is in “SPH_v1”.

This post is not intended to be a tutorial, but a demonstration of my implementation, though I will include links to the sources I used and hopefully that will prove helpful to someone else.

SPH:

Wikipedia defines SPH as “a computational method used for simulating the dynamics of continuum media”, which is a fancy way of saying that it is an algorithm that can be used to model anything that flows or has fluid-like behavior. (and probably some other stuff too, but that description covers most of it)

The method (and its name) was first introduced in two 1977 papers, one by Gingold and Monaghan and one by Lucy.

The paper that you’ll need to read in order to understand its application to fluid simulation in video games/interactive media is the one by M. Muller, D. Charypar, and M. Gross, which can be found here.

For some background in fluid sim. in general, there are generally two approaches through which a fluid medium is described and hence simulated.

  1. Eulerian approach: This treats the fluid as a grid, with the resolution of the grid defining how many points in the field are sampled and thus the resulting quality of the fluid simulation. This is simple to implement, and methods like the Shallow Water Equations make use of it to great effect and run cheaply while doing so. The limitations, however, are in imposing boundary conditions for grid-based solutions and in the small timestep required to keep the simulation from “exploding”.
  2. Lagrangian: This treats the fluid as a collection of discrete particles where each particle has its own mass, position and velocity. The solver performs an all-pairs interaction force calculation, modeling two forces and using the principle of superposition (read : adding them together) in order to arrive at the final force acting on each particle. These forces are the force due to pressure and the force due to viscosity. Surface tension and external forces like gravity can also be included in this force calculation in order to allow for interaction with the fluid system.

The M. Muller paper describes surface tension, but this implementation does not include it. The SPH explanation requires understanding a few things like smoothing kernels and the Navier-Stokes equation, but if you want a source that skips all that and directly describes the code to implement it, here’s another link that I found extremely helpful.

OpenCL:

OpenCL is a compute API and kernel language that runs code on the GPU in order to implement functionality that benefits from being executed in parallel. OpenCL programs are called “kernels”, and many instances of a kernel run in parallel across the GPU’s processors, in work-groups. The hardware specifics are quite complicated, but suffice it to say that a kernel is kind of like a shader that isn’t meant to render anything, but rather to perform computations that involve a lot of math, which GPUs excel at.

Incidentally, fluid simulations require a lot of math, making them a prime candidate for algorithms that would benefit from parallel execution.

I’ve chosen OpenCL as opposed to the alternative (NVIDIA’s proprietary language CUDA) because I wanted a portable solution that wouldn’t be locked to any single vendor. However that decision also dictated my choice of which version of OpenCL to use (v1.2) as that is the latest version of OpenCL that NVIDIA has provided support for on all their hardware (for reference OpenCL is at v2.2 right now).

The sources I used in order to learn OpenCL are:

  1. https://simpleopencl.blogspot.com/2013/06/tutorial-simple-start-with-opencl-and-c.html
  2. http://enja.org/2010/07/20/adventures-in-opencl-part-1-5-cpp-bindings/
  3. https://github.com/enjalot/EnjaParticles
  4. https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf

It can be a bit of a headache to get OpenCL to work, but the result is worth it: the maximum number of particles I could achieve with all calculations on the CPU (at 30 FPS or above) was around 1K, but once I switched all computations to the GPU I was able to max out at 16K particles or so and maintain an appreciable framerate. (On my GitHub page it says 4K, but that was with an older PC; right now I am running on an i7 with 16GB of RAM and a GTX 970 with 3.5GB of VRAM.)

Areas for improvement:

  1. My implementation still uses a brute force all-pairs interaction force calculation which is a definite place to be optimized by using spatial partitioning of some sort.
  2. I was looking into extending this into 3D and implementing a grid hashing solution.

 

Unlit

ABOUT:

I worked on a team of 4 programmers (including myself) to build a 2.5D Platformer called ‘Unlit’ in a time period of 4 months. The engine we built is called the Whisky Engine and is written using C++ with a few libraries like OpenGL, GLM, SFML etc. The engine is built on a component-based design.

MY ROLE:

  1. Physics programming:
    a. Implemented the physics engine.
    b. Used SAT to implement AABB colliders
    c. Also implemented were colliders for spheres and planes, and raycasting.
  2. Gameplay programming:
    a. Implemented, tested and fine tuned player input.
    b. Unlit is a platformer, so most of the gameplay revolved around input and how the player can traverse the world.
  3. Level design:
    a. Implemented 3 out of the 4 levels that we had in the final release of the game. Did so using our custom built level editor.
  4. Environment Artist:
    a. Modeled and textured all assets except for the main character. Used Blender, 3DS Max and Photoshop.

CREDITS:

1) Egemen Koku: Tools Programmer/Engine Programmer
2) Lisa Sturm: Producer/UI Programmer/Gameplay Programmer
3) Sai Narayan: Physics Programmer/Level Designer/Gameplay Programmer
4) Volkan Ilbeyli: Graphics Programmer/Engine Programmer

SCREENSHOTS:


A* algorithm demonstration

Embedded above is a video demonstrating the features of pathfinding and terrain analysis I implemented. These include:

1) A* pathfinding
2) Dijkstra's algorithm pathfinding
3) Catmull-Rom splines for path smoothing
4) Pathfinding with different heuristics (Euclidean, Chebyshev, Octile, Manhattan)
5) Terrain Analysis (Cover, Visibility, Visible to Player)
6) Pathfinding with Fog of War
7) Pathfinding with influence maps from terrain analysis

The framework for control of the character (State Machine Language) is owned by Steve Rabin, the character in the video (Tiny) and all other assets belong to Microsoft.

Additionally the project itself is property of the Digipen® corporation.
DigiPen® is a registered trademark of DigiPen (USA) Corp.
Copyright © 2016 DigiPen (USA) Corp. and its owners. All Rights Reserved.

Cover image for this post taken from here:
http://mnemstudio.org/ai/path/images/a-star1b.gif

 

Building SFGUI into a static library

Recently I had to integrate SFGUI into one of my projects, and I discovered that this particular library does not ship prebuilt: there are no precompiled static or dynamic libraries to link against.

This was a new challenge for me, and it took me a bit to wrap my head around how to get it done. Up until now I had never needed to build a library from source, but the process itself, when examined, is relatively simple.

I decided that I would build the source into a static library for ease of use. Dynamic linking is nice, but an unnecessary complication for my purpose.

Building a static library requires two things:

  1. Using CMake to generate the project files.
  2. Building the project files themselves in order to obtain the static library.

CMake should require no introduction; it’s a popular meta-build tool. It operates through the use of CMake list files (from which it generates makefiles or IDE project files), the exact specifics of which are still arcane to me, but the process of writing one shall be the subject of a subsequent blog post, so check this space later!

Building the project files can be done in an IDE like Visual Studio (which is what I used) or a compiler suite like GCC.

Download CMake here:
https://cmake.org/

Download SFGUI source here:
http://sfgui.sfml-dev.de/

It goes without saying that SFGUI has a dependency on SFML, so SFML will need to be integrated into your development environment for SFGUI to function, as well as pointed to in the CMake configuration for the build to work.

CMakes’ GUI makes things pretty simple:

Capture

The Source and Destination of the build are specified as the same directory here. I preferred this so as to have my newly built library easily found in the same place as the SFGUI source, but this isn’t necessary.

Once these paths are specified, hit ‘Configure’ in order to get CMake to begin analyzing and listing the options for the build.

There will be some errors, but that’s okay! We need to set up CMake in order to build the project files, so the errors are just informing you of settings that haven’t been set yet.

Make sure ‘Grouped’ and ‘Advanced’ (The checkboxes in the top right) are ticked so as to access a more organized view of the settings as well as ALL the necessary details.

 

In order to add an entry hit the button with the plus sign on it, to the right of the checkboxes.

The CMake settings group remains mostly unchanged except for the addition of:

CMAKE_MODULE_PATH, which is a PATH-type variable that needs to point to the location of the CMake modules in the SFML root directory of your installation. It will follow this format:

“*SFMLDirectory*\CMake\Modules”

The settings under SFML provide the location of the various libraries and include files of SFML and are similar to the integration of a library into a project using an IDE, so the process should be familiar.

In this situation I’ve left the dynamic library locations unfilled as we are not building a dll (dynamic linked library) for SFGUI.

For a static library of SFGUI, we require the static libraries of SFML. Pretty straightforward.

Obviously, replace the directory of SFML to where ever the root directory of SFML is in your own system.

Capture.JPG

This is what your settings under ‘Ungrouped Entries’, ‘OPENGL’ and ‘SFGUI’ should look like:

Capture

After all this is set up, hit Configure once again to check if there are any errors; if not, hit Generate, choose the compiler toolchain you’d prefer to use (SFGUI requires C++11 features, so a version of Visual C++ newer than 2013 seems to be necessary), and the process should be done shortly.

This generates the project files in the SFGUI directory. Open this project in the IDE of your choice and build it. In my build setup there were still some errors, but the process succeeded. I have yet to see if these errors will affect me in some way. Hopefully not.

After this, the library should be found in the directory of SFGUI, under the ‘lib’ subfolder.

Link this into your project environments as you usually would, after specifying the include directories as always.

Hope this helps!

Converting from Decimal To Binary

I was on StackOverflow the other day and saw a question posed about how one might convert from Decimal to Binary, when the initial information is stored in a string. It seemed like a fun little program to take a whack at, so I did. I’ve posted my answer as well as the code solution below:

Image obtained here:

http://pictures.4ever.eu/cartoons/binary-code-161219

The original question and my answer can be found here:

http://stackoverflow.com/questions/34381002/is-there-a-way-to-convert-a-number-represented-as-a-string-to-its-binary-equiv/34381419#34381419

————————————————————-

Okay let’s break down the process you require here. (only one of an infinite number of ways to do this)

1) Conversion of a number represented as a string type into an integer type.

2) Conversion of the intermediary integer type into a binary number which is held in another string type. (judging by the return type of your function, which could just as easily return an integer by the way and save the headache of representing the binary equivalent as a string)

For step 1: Use the standard library function stoi. It does what you might imagine, extracts the numerical data from the string and stores it in an integer.

http://www.cplusplus.com/reference/string/stoi/

std::string numberstr = "123";
int numberint = std::stoi(numberstr);
std::cout << numberint << "\n";

Now you have the number as an integer.

For step 2:

1) This process involves the conversion of a number from base 10 (decimal) to base 2 (binary).

2) Divide the number by 2.

3) Store the remainder and the quotient of this division operation for further use.

4) The remainder becomes part of the binary representation, while the quotient is used as the next dividend.

5) This process repeats until the dividend becomes 1, at which point it too is included in the binary representation.

6) Reverse the string, and voila! You now have the binary representation of a number.

7) If you want to handle negative numbers (which I imagine you might), simply perform a check before the conversion to see if the converted integer is negative, and set a flag to true if it is.

8) Check this flag, and if it is set, add a negative sign to the end of the string before reversing (so it ends up at the front).

The final function looks like this:

#include <algorithm>   // std::reverse
#include <cstdlib>     // abs
#include <iostream>
#include <sstream>
#include <string>

std::string str_to_bin(const std::string& str)
{
    std::string binarystr = ""; // Output string

    int remainder;
    int numberint = std::stoi(str);
    bool flagnegative = false;
    // If the number is negative, remember the sign and work with its absolute value
    if (numberint < 0)
    {
        numberint = abs(numberint);
        flagnegative = true;
    }
    // If number is 0, don't perform conversion, simply return "0"
    if (numberint == 0)
    {
        binarystr = "0";
        return binarystr;
    }
    std::cout << numberint << "\n";

    while (numberint != 1)
    {
        remainder = numberint % 2;
        numberint /= 2;
        std::ostringstream convert; // stream used for the conversion
        convert << remainder;       // insert the textual representation of 'remainder' into the stream
        binarystr += convert.str();
    }
    std::ostringstream final;
    final << numberint;             // To insert the last (or rather first, once reversed) binary digit
    binarystr += final.str();
    if (flagnegative == true)
        binarystr += "-";           // becomes a leading '-' after the reverse below
    std::reverse(binarystr.begin(), binarystr.end());
    return binarystr;
}

Procedural Terrain Generation

Hello there!

It’s been a good long while since I last posted, but I’ve been incredibly busy with college. My finals are going on right now, and I’m almost done with my bachelor’s degree in computer science.

I’m going to start by just putting this incredibly pretty terrain here:

Output

If you’re willing to read it to the end, this post will teach you how to create this sort of terrain using C++.

I’m aware that there are free programs that can achieve the same result, such as L3DT (which is actually used in this project), and that they are much better than this humble program and much more convenient to use. But I’d like to imagine that this simplifies the process of procedural terrain generation a little by reducing the complexity of having to learn a tool like L3DT, and if anyone is a beginner to procedural generation like me, perhaps this will be of help to them as well.

This is a simple console program that allows the user to create heightmaps. A heightmap is a file that can store surface elevation values, which is a complicated way of saying it stores height information. This height information can be used to model terrain in 3D game engines and some 3D modelling software.

What I’ve created was actually done for my final-year project, and to give credit where it’s due, it’s mainly based on a paper written by Jonathan Roberts and Ke Chen, Senior Members, IEEE.

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6853332

The above is a link to the abstract and IEEE page of the paper

My project can be summarized simply as:

An attempt to implement the Perlin noise algorithm and machine learning concepts and use them to develop a tool that allows even a layman to generate procedural terrain that can be exported as a Photoshop RAW file and used as a heightmap in any game engine such as Unreal or Unity.

To break that down:

1) Perlin Noise: A nifty algorithm created by Ken Perlin, this allows for the creation of noise that has a natural, organic and non repetitive appearance, which gives it a resemblance to a variety of natural objects such as clouds, fire, and terrain. This means Perlin noise textures can be used as heightmaps. I have used the excellent open source library Libnoise to implement Perlin noise.

https://mrl.nyu.edu/~perlin/ : Website of Ken Perlin.

2) Machine Learning: A method used to impart pseudo-artificial intelligence to the program by allowing it to take in input (feedback of user) which alters the output. Every iteration of feedback input improves the results of the previous stage. It does this by identifying a favorable content category and constraining future output to that category.

3) Exporting the heightmap as a Photoshop RAW file: Using the amazing tool L3DT (Large 3D Terrain Generator) to view the output of the terrain generation, the user can decide if it matches their needs. The user is presented with the option to finalize the output, modify it in some way (more mountainous, flatter, hillier), or accept the output but request a refinement. When they choose to finalize, the heightmap is converted to Photoshop RAW format (again using a script in L3DT, all credit to Aaron Bundy). This RAW file can then be imported into level design software such as Unreal or Unity.

Apart from the above, I’ve also used the SFML libraries to create the GUI that displays the noise texture for user approval.

So in order to make this work, you’d have to integrate the libnoise and SFML libraries into your project environment to successfully compile the source code given below.

Feel free to clone the repository I made on GitHub using the following link:

https://github.com/Nightmask3/Perlin-Heightmap-Builder

Texture

Above are pictures of a Perlin noise texture generated by the program.

Once the heightmap has reached the satisfaction of the user, it can be exported in Photoshop RAW format, upon which it can then be imported into game engines like Unreal and Unity to be used to create terrain.

This is a student made project and it’s bound to have some bugs here and there, but I believe it is a good first foray into procedural generation, which is a topic I shall definitely be pursuing in the future.