Fluid Simulation using SPH and OpenCL

Here’s a video of a fluid simulation I made:

This post is going to talk about fluid simulation using Smoothed Particle Hydrodynamics (SPH), implemented using OpenCL 1.2.

If you don’t want to read any of this and would rather get right to the code, here it is in “SPH_v1”.

This post is not intended to be a tutorial, but a demonstration of my implementation, though I will include links to the sources I used and hopefully that will prove helpful to someone else.

SPH:

Wikipedia defines SPH as “a computational method used for simulating the dynamics of continuum media”, which is a fancy way of saying that it is an algorithm that can be used to model anything that flows or has fluid-like behavior. (and probably some other stuff too, but that description covers most of it)

The method and its name were first introduced in two papers, one by Gingold and Monaghan and one by Lucy, both published in 1977.

The paper you’ll need to read in order to understand its application to fluid simulation in video games/interactive media is the one by M. Müller, D. Charypar, and M. Gross, which can be found here.

For some background on fluid simulation in general: there are two broad approaches through which a fluid medium is described and hence simulated.

  1. Eulerian approach: This treats the fluid as a grid, with the resolution of the grid defining how many points in the field are sampled and thus the resultant quality of the fluid simulation. This is simple to implement, and techniques like the Shallow Water Equations make use of it to great effect while running cheaply. The limitations, however, are in imposing boundary conditions for grid-based solutions and in the requirement of a small timestep in order for the simulation not to “explode”.
  2. Lagrangian approach: This treats the fluid as a collection of discrete particles, where each particle has its own mass, position and velocity. The solver performs an all-pairs interaction force calculation, modeling two forces and using the principle of superposition (read: adding them together) in order to arrive at the final force acting on each particle. These forces are the force due to pressure and the force due to viscosity. Surface tension and external forces like gravity can also be included in this calculation in order to allow for interaction with the fluid system.

The Müller paper describes surface tension as well, but this implementation does not include it. The SPH explanation requires understanding a few things like smoothing kernels and the Navier-Stokes equations, but if you want a source that skips all that and directly describes the code to implement it, here’s another link that I found extremely helpful.
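
To make the all-pairs calculation concrete, here is a minimal CPU-side sketch of the density/pressure and force passes from the Müller paper. This is illustrative only: the struct and the constants (Particle, h, MASS, REST_DENSITY, GAS_CONSTANT, VISCOSITY) are assumed values, not code from SPH_v1, and the real implementation runs the same loops inside OpenCL kernels with one work-item per particle.

#include <cmath>
#include <vector>

struct Particle {
    float x[3], v[3], f[3];       // position, velocity, accumulated force
    float density, pressure;
};

// Assumed constants, roughly in the range the Muller paper suggests for water
const float h = 0.0457f;          // smoothing radius
const float MASS = 0.02f;         // particle mass
const float REST_DENSITY = 998.f; // rest density rho0
const float GAS_CONSTANT = 3.0f;  // stiffness k in p = k * (rho - rho0)
const float VISCOSITY = 3.5f;     // viscosity coefficient mu
const float PI = 3.14159265f;

// Density via the poly6 kernel, then pressure from the equation of state
void computeDensityPressure(std::vector<Particle>& ps) {
    const float poly6 = 315.f / (64.f * PI * std::pow(h, 9.f));
    for (auto& pi : ps) {
        pi.density = 0.f;
        for (const auto& pj : ps) {
            float r2 = 0.f;
            for (int k = 0; k < 3; ++k) {
                float d = pi.x[k] - pj.x[k];
                r2 += d * d;
            }
            if (r2 < h * h)       // only neighbours inside the smoothing radius contribute
                pi.density += MASS * poly6 * std::pow(h * h - r2, 3.f);
        }
        pi.pressure = GAS_CONSTANT * (pi.density - REST_DENSITY);
    }
}

// Pressure force (spiky kernel gradient) and viscosity force (viscosity kernel Laplacian)
void computeForces(std::vector<Particle>& ps) {
    const float spikyGrad = -45.f / (PI * std::pow(h, 6.f));
    const float viscLap   =  45.f / (PI * std::pow(h, 6.f));
    for (auto& pi : ps) {
        float fpress[3] = {0, 0, 0}, fvisc[3] = {0, 0, 0};
        for (const auto& pj : ps) {
            if (&pi == &pj) continue;
            float d[3], r2 = 0.f;
            for (int k = 0; k < 3; ++k) { d[k] = pi.x[k] - pj.x[k]; r2 += d[k] * d[k]; }
            float r = std::sqrt(r2);
            if (r <= 0.f || r >= h) continue;
            for (int k = 0; k < 3; ++k) {
                // pressure pushes particles apart along the separation direction
                fpress[k] += -MASS * (pi.pressure + pj.pressure) / (2.f * pj.density)
                             * spikyGrad * (h - r) * (h - r) * (d[k] / r);
                // viscosity drags neighbouring velocities toward each other
                fvisc[k] += VISCOSITY * MASS * (pj.v[k] - pi.v[k]) / pj.density
                            * viscLap * (h - r);
            }
        }
        for (int k = 0; k < 3; ++k)
            pi.f[k] = fpress[k] + fvisc[k]; // gravity and other external forces get added here
    }
}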

OpenCL:

OpenCL is a compute language and API that runs code on the GPU in order to implement functionality that benefits from being executed in parallel. OpenCL programs are called “kernels”; a kernel is executed by many work-items running in parallel, organized into work-groups that are scheduled across the GPU’s compute units. The hardware specifics are quite complicated, but suffice to say that it’s kind of like a shader that isn’t meant to render anything, but rather to perform computations that involve a lot of math, which GPUs excel at.

Incidentally, fluid simulations require a lot of math, making them a prime candidate for algorithms that would benefit from parallel execution.

I’ve chosen OpenCL as opposed to the alternative (NVIDIA’s proprietary language CUDA) because I wanted a portable solution that wouldn’t be locked to any single vendor. However, that decision also dictated my choice of which version of OpenCL to use (v1.2), as that is the latest version of OpenCL that NVIDIA supports across all of their hardware (for reference, OpenCL is at v2.2 right now).
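
To give a feel for what the host-side setup looks like with the OpenCL 1.2 C++ bindings, here is a minimal, self-contained sketch. It is not taken from SPH_v1; it just shows the build/buffer/enqueue pattern the simulation uses, with a trivial kernel that scales a buffer of floats.

#include <CL/cl.hpp>   // OpenCL 1.2 C++ bindings
#include <iostream>
#include <vector>

int main() {
    // Pick the first platform and the first GPU device on it
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);

    cl::Context context(devices);
    cl::CommandQueue queue(context, devices[0]);

    // Kernel source: one work-item per element, each scales its own value
    std::string src =
        "__kernel void scale(__global float* v, const float k) {"
        "    int i = get_global_id(0);"
        "    v[i] *= k;"
        "}";
    cl::Program program(context, src);
    program.build(devices);

    std::vector<float> vel(1024, 1.0f);
    cl::Buffer buf(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                   vel.size() * sizeof(float), vel.data());

    cl::Kernel kernel(program, "scale");
    kernel.setArg(0, buf);
    kernel.setArg(1, 2.0f);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange,
                               cl::NDRange(vel.size()), cl::NullRange);
    queue.enqueueReadBuffer(buf, CL_TRUE, 0,
                            vel.size() * sizeof(float), vel.data());

    std::cout << vel[0] << "\n"; // prints 2 if everything worked
    return 0;
}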

The sources I used in order to learn OpenCL are:

  1. https://simpleopencl.blogspot.com/2013/06/tutorial-simple-start-with-opencl-and-c.html
  2. http://enja.org/2010/07/20/adventures-in-opencl-part-1-5-cpp-bindings/
  3. https://github.com/enjalot/EnjaParticles
  4. https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf

It can be a bit of a headache to get OpenCL to work, but the result is worth it: the maximum number of particles I could achieve with all calculations on the CPU (at 30 FPS or above) was around 1K, but once I switched all computations to the GPU I was able to max out at around 16K particles while maintaining an appreciable framerate. (On my GitHub page it says 4K, but that was with an older PC; right now I am running an i7 with 16GB of RAM and a GTX 970 with 3.5GB of VRAM.)

Areas for improvement:

  1. My implementation still uses a brute-force all-pairs interaction force calculation, which is a definite candidate for optimization using spatial partitioning of some sort.
  2. I was looking into extending this into 3D and implementing a grid hashing solution; a rough sketch of the idea follows this list.
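
Since the grid hashing is still on the to-do list, here is only a rough sketch of the idea rather than working code from SPH_v1; the names (CELL_SIZE, TABLE_SIZE, hashCell, buildGrid) are placeholders. Particles are binned by the cell their position falls into, so the neighbour search only has to check the 27 surrounding cells instead of every other particle.

#include <array>
#include <cmath>
#include <cstddef>
#include <unordered_map>
#include <vector>

const float CELL_SIZE = 0.0457f;        // typically the smoothing radius h
const std::size_t TABLE_SIZE = 262144;  // hash table size

// Hash integer cell coordinates into a table index (large-prime XOR hash)
std::size_t hashCell(int cx, int cy, int cz) {
    const std::size_t p1 = 73856093, p2 = 19349663, p3 = 83492791;
    return ((std::size_t)cx * p1 ^ (std::size_t)cy * p2 ^ (std::size_t)cz * p3) % TABLE_SIZE;
}

// Bin particle indices by the cell their position falls into
std::unordered_map<std::size_t, std::vector<int>>
buildGrid(const std::vector<std::array<float, 3>>& positions) {
    std::unordered_map<std::size_t, std::vector<int>> grid;
    for (int i = 0; i < (int)positions.size(); ++i) {
        int cx = (int)std::floor(positions[i][0] / CELL_SIZE);
        int cy = (int)std::floor(positions[i][1] / CELL_SIZE);
        int cz = (int)std::floor(positions[i][2] / CELL_SIZE);
        grid[hashCell(cx, cy, cz)].push_back(i);
    }
    return grid;
}
// During the force pass each particle then queries only the 27 cells around
// its own cell instead of doing the O(n^2) all-pairs loop.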

 


Vertex-based Animations using Morph Targets in UE4

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 8 months, and I wanted to share some of the techniques I had learned in this time.

The topic of discussion today is Vertex based animations.

Before we get to that, let us digress for a wider examination of animation pipelines in general, to gain a little perspective.

Animation is probably one of the most important factors in making a game feel responsive and lifelike (not to be confused with realistic). Animation is, in essence, the art of making something static feel like it is in motion.

Human beings are great at perceiving motion, and as such it’s one of the more prominent ways in which games engage players, both mechanically and aesthetically.

In 3D animation, there are a few broad categories by which animation is implemented, each with its own pipelines and quirks:

  1. Vertex animation: This used to be the most widespread way to animate things back when the first wave of 3D games came out and the hardware wasn’t usually capable of handling the complicated (and expensive) concatenated transformations of skeletal animations. It’s easy to do but hard to reach good results with. It also happens to be what we’re talking about today.
  2. Skeletal animation: This method is the most commonly used for 3D animation nowadays, and when supplemented with the powerful tool of Inverse Kinematics it can achieve really smooth and organic results. It’s probably the most flexible out of all of these methods, and the sheer number of tools for this pipeline make it a very attractive option.
  3. Procedural animation: This method is a little less commonly seen, and I can guess that’s because it is probably non-trivial to implement and has very specific use cases as opposed to the other two methods. Some games that use it to good effect include Grow Home, QWOP, Overgrowth and so forth.
  4. Physically-based Simulation: Similar to procedural animation though slightly different, this is animation that utilizes the simulation of some natural substance like fluid or cloth in order to animate the mesh. A popular game that uses this method is Journey, with its cloth creatures and a main character created almost entirely out of cloth.

With that out of the way, let us explore how to get vertex animations in UE4.

The method that is described here involves morph targets.

In a morph target animation, a “deformed” version of a mesh is stored as a series of vertex positions. In each key frame of an animation, the vertices are then interpolated between these stored positions.

That’s fairly simple to understand. And you can see why it falls under the blanket of a vertex animation.
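
As a rough illustration (plain C++, not the UE4 API; the struct and function names here are made up for the sketch), morph target blending is just a per-vertex lerp between the base positions and the stored target positions, driven by a keyframed weight:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// basePositions and targetPositions come from the mesh and the stored morph target;
// weight is the keyframed blend value in [0, 1].
void applyMorphTarget(const std::vector<Vec3>& basePositions,
                      const std::vector<Vec3>& targetPositions,
                      float weight,
                      std::vector<Vec3>& outPositions) {
    outPositions.resize(basePositions.size());
    for (std::size_t i = 0; i < basePositions.size(); ++i)
        outPositions[i] = lerp(basePositions[i], targetPositions[i], weight);
}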

The kind folks at Epic have written a MaxScript that enables any Editable Poly mesh with keyframed animations to have its animations written out into a texture. The only downside to this method is that the mesh can have at most 8192 vertices.

In your Engine installation folder look for “Epic Games\EngineVersion\Engine\Extras\3dsMaxScripts\VertexAnimationTools”

It’s straightforward to use, and this tutorial explains the details behind it as well as the material you will need to create in order to view the animation in the engine:

But just in case, here’s a screenshot of that material too:


Hope this is helpful to someone!

Unlit

ABOUT:

I worked on a team of 4 programmers (including myself) to build a 2.5D platformer called ‘Unlit’ over a period of 4 months. The engine we built is called the Whisky Engine and is written in C++ with a few libraries such as OpenGL, GLM, and SFML. The engine is built around a component-based design.

MY ROLE:

  1. Physics programming:
    a. Implemented the physics engine.
    b. Used the Separating Axis Theorem (SAT) to implement AABB colliders.
    c. Also implemented sphere and plane colliders, as well as raycasting.
  2. Gameplay programming:
    a. Implemented, tested and fine-tuned player input.
    b. Unlit is a platformer, so most of the gameplay revolved around input and how the player can traverse the world.
  3. Level design:
    a. Implemented 3 out of the 4 levels that we had in the final release of the game. Did so using our custom built level editor.
  4. Environment Artist:
    a. Modeled and textured all assets except for the main character. Used Blender, 3DS Max and Photoshop.

CREDITS:

1) Egemen Koku: Tools Programmer/Engine Programmer
2) Lisa Sturm: Producer/UI Programmer/Gameplay Programmer
3) Sai Narayan: Physics Programmer/Level Designer/Gameplay Programmer
4) Volkan Ilbeyli: Graphics Programmer/Engine Programmer

SCREENSHOTS:


Cloth in UE4

 

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 6 – 8 months, and I wanted to share some of the techniques I had learned in this time.

The topic we are going to explore today is Cloth, and how to implement it in Unreal 4. Before we do that, let us examine some of the purposes that cloth is applied to within games.

Cloth is an extremely important part of a game developer’s toolbox. It is a material that exists in abundance in reality, and regardless of scenario and setting, every game can probably make some use of cloth, or use cloth physics to achieve some convincing visual effect (I’ve seen it used for hair as well).

The screenshot below is of the game ‘Journey’, a personal favorite of mine, and a wonderful game that takes cloth simulation and applies it to creatures, mechanics, architectural flourishes and even the main character’s entire body.


 

Unreal 4 implements cloth using NVIDIA’s APEX Physics SDK. The technical details of this implementation are beyond the scope of this post, but suffice to say that in order to bring cloth into Unreal 4, you will need to use either the standalone APEX Cloth tool or the APEX cloth plugin built into Max or Maya.

https://developer.nvidia.com/gameworksdownload#?dn=physx-apex-sdk-1-3-0

You will need an NVIDIA developer account to download it. It’s a quick and free registration thankfully.

This article isn’t a tutorial on how to implement cloth, only a demonstration of it, but here’s a link to the tutorial I used:

https://www.youtube.com/watch?v=uTOELBNBt04&t=1003s

He makes use of Blender to attach, rig and skin the cloth mesh, before exporting it into the APEX cloth tool.

In the cloth tool, the cloth simulation is defined by painting the max displacement values directly onto the cloth mesh. The character model must also have collisions generated for it so that the cloth can collide with the character model.

The simulation can be previewed and the environment settings tweaked to observe how it reacts under different parameters. The simulation asset can then be exported and saved as a separate file.

Import this asset into Unreal and then apply it at the Material level. The cloth should be working now.

Some important tips that hopefully will save you the time I spent tearing my hair out when I couldn’t figure out why the cloth mesh disappeared as soon as I applied the physics asset:

  1. Skin the cloth mesh. If you don’t, bad things will happen.
  2. Make sure the cloth mesh is rigged to the appropriate bone. If you don’t, bad things will happen.
  3. Make sure to disable “Add Leaf Nodes” in the Armature tab of the .FBX export settings in Blender. If you don’t, bad things will happen.

Hope this is helpful to someone!

Ground Fog in Unreal 4

I’ve been working on a third person exploration and adventure game called ‘The Nomad’ for the past 6 months, and I wanted to share some of the techniques I had learned in this time.

My previous post dealt with implementing Distance Fog using a Post-Process material.

This time, we are going to explore how to implement a Ground Fog in Unreal 4.

Ground Fog is very important for a variety of reasons, which become apparent when you compare a scene with and without it.

Here is the same scene as above, without the Ground Fog.

(Screenshot: scene without Ground Fog)

A few things you can notice:

  1. The scene still looks okay, but overall it lacks any visual complexity.
  2. The color of the sand is now too repetitive and dominates the view.
  3. It is less easy to differentiate between the foreground and the background, though the distance fog helps somewhat.

So, we can see how Ground Fog can add to the overall aesthetic of a level. Let us now proceed to the implementation itself.

(Screenshot: getting UV values for the fog texture)

This set of material nodes is responsible for raycasting forward a certain distance (ML_Raycast), finding a world position, and scaling that by NoiseSize.

(Screenshot: Raycast material function)

This world position is then fed into MF_NormalMaskedVector, which masks the input WorldPosition with the vertex normal to obtain a UV value for the moving fog texture.

(Screenshot: Normal Masked Vector material function)

The output of the moving fog is then multiplied in (in my case I use Add; it works for this case but might give weird results otherwise), and then an if statement is used to define a world-Z cutoff for the fog.

If the world Z of the pixel being drawn is less than the cutoff value, we multiply the moving fog color value into the post-process output. You can think of this as a simple if-conditional check.

if(PixelWorldZ < CutoffValue)
DisplayFog();

Then, in order to make the fog fade smoothly until the cutoff value is reached, we use another if statement to check the distance between the cutoff Z value and the current pixel world Z value. If the pixel is within the gradient fade range (as defined by GradientRange), then we lerp between the color output of the fog and 1.

if(WorldZCutoff - PixelWorldZ < GradientRange)
LerpBetweenFogColorAndSceneColor();

If the output is 1, we use only the scene color.
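
Putting both checks together, here is a small C++ sketch of the blend logic; it is not the actual material graph, and Color plus the parameter names just stand in for the corresponding material inputs (PostProcessInput, the moving fog color, the pixel’s world Z, the cutoff and GradientRange):

struct Color { float r, g, b; };

Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

Color groundFog(const Color& sceneColor, const Color& fogColor,
                float pixelWorldZ, float cutoffZ, float gradientRange) {
    if (pixelWorldZ >= cutoffZ)
        return sceneColor;                                    // above the cutoff: no fog at all
    float d = cutoffZ - pixelWorldZ;
    float t = (d < gradientRange) ? d / gradientRange : 1.0f; // fades in below the cutoff
    Color fog = lerp(Color{1.f, 1.f, 1.f}, fogColor, t);      // 1 means "scene color only"
    return { sceneColor.r * fog.r,                            // multiply into the post-process input
             sceneColor.g * fog.g,
             sceneColor.b * fog.b };
}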

(Screenshot: gradient and final material output)

The final output of all this is multiplied into the PostProcessInput, and then fed into the Emissive Color.

This material uses the Post-Process material domain. Assign it to a Post-Process Volume, and you should be good to go.

Hope this is helpful to someone!

 

The Village

ABOUT:

The videos above demonstrate some of the things I implemented during my time working on ‘The Village’, a third person exploration-adventure game made using Unreal Engine 4.

MY ROLE:

  1. Level Design:
    a. Creation of base heightmaps in 3DS Max and terrain sculpting using the Landscape editor
    b. Asset integration and placement
    c. Cutscenes and cinematics
    d. Ambient sounds
  2. Technical Art:
    a. Sculpting and UV mapping of all assets seen in the video using ZBrush/Blender/3DS Max, texturing using Photoshop with the Quixel Suite, and baking normals with xNormal; I was the sole artist on the team
    b. Made particles for various effects ranging from dust, water, footsteps, interaction feedback effects, creature and flora particles, trails etc.
    c. Created all materials and material effects seen in the video.
    d. Implemented various post-processing effects, incorporating both inbuilt tools (Exponential Height Fog, Reflection Captures, Post Process Volume settings) and custom effects (distance fog, ground fog, outline effect, etc.)
    e. Set up the lighting; the game relies primarily on static lighting using Lightmass, with a few dynamic lights here and there
    f. Created all character/creature animations
    g. Made the Cloth and Fluid simulation assets in external tools (APEX cloth and Blender)
  3. Gameplay Programming:
    a. Scripting for interaction with objects, NPCs and creatures
    b. Quests and puzzles
    c. Jellyfish movement AI
    d. Animation state machine programming
    e. Player movement system
    f. Camera programming

CREDITS:

  1. Christopher Blake – Sound Designer
  2. Egemen Koku – AI Programmer
  3. Nicholas Esclapon – Game Designer
  4. Mariojulio Zaldivar – Producer/Gameplay Programmer
  5. Sai Narayan – Producer/Level Design/ Gameplay Programmer/ Technical Art

SCREENSHOTS:


A* algorithm demonstration

Embedded above is a video demonstrating the features of pathfinding and terrain analysis I implemented. These include:

1) A* pathfinding
2) Dijkstra's algorithm pathfinding
3) Catmull-Rom splines for path smoothing
4) Pathfinding with different heuristics (Euclidean, Chebyshev, Octile, Manhattan)
5) Terrain Analysis (Cover, Visibility, Visible to Player)
6) Pathfinding with Fog of War
7) Pathfinding with influence map of terrain analysis

The framework for control of the character (State Machine Language) is owned by Steve Rabin, the character in the video (Tiny) and all other assets belong to Microsoft.

Additionally, the project itself is the property of the DigiPen® Corporation.
DigiPen® is a registered trademark of DigiPen (USA) Corp.
Copyright © 2016 DigiPen (USA) Corp. and its owners. All Rights Reserved.

Cover image for this post taken from here:
http://mnemstudio.org/ai/path/images/a-star1b.gif

 

The Nightmare

ABOUT:

The Nightmare is a first person horror platformer made in a period of 3 months using Unreal Engine 3 by a team of 20 students. I was responsible for the initial concept and the creative direction of the project as a whole. I had an additional role as a level designer.

MY ROLE:

Level Design:

  • Kismet scripting for gameplay events
  • Matinee camera control for cutscenes
  • Materials with Dynamic lighting using Unreal Light Functions
  • Boss fights
  • Sound cues and dialogue
  • Character Animation (also through Matinee and Kismet)
  • Camera effects
  • Lighting
  • Asset integration
  • World design
  • Platforming puzzles

Link to the Facebook page.

SCREENSHOTS/CONCEPT ART:


All art seen above made by Sven Lobnig and Dion Janischka

Building SFGUI into a static library

Recently I had to integrate SFGUI into one of my projects, and I discovered that this particular library does not come with prebuilt static or dynamic libraries.

This was a new challenge for me, and it took me a bit to wrap my head around how to get it done. Up until now I had never needed to build a library from source, but the process itself, when examined, is relatively simple.

I decided that I would build the source into a static library for ease of use. Dynamic linking is nice, but an unnecessary complication for my purpose.

Building a static library requires two things:

  1. Using CMake to generate the project files.
  2. Building the project files themselves in order to obtain the static library.

CMake should require no introduction; it’s a popular meta-build tool. It operates through the use of CMake list files, the exact specifics of which are still arcane to me, but the process of writing one shall be a subsequent blog post, so check this space later!

Building the project files can be done in an IDE like Visual Studio (which is what I used) or a compiler suite like GCC.

Download CMake here:
https://cmake.org/

Download SFGUI source here:
http://sfgui.sfml-dev.de/

It goes without saying that SFGUI has a dependency on SFML, so SFML will need to be integrated into your development environment in order for SFGUI to function, as well as linked into the CMake build process in order for the build to work.

CMake's GUI makes things pretty simple:


The Source and Destination of the build are specified as the same directory here. I preferred this so as to have my newly built library easily found in the same place as the SFGUI source, but this isn’t necessary.

Once these paths are specified, hit ‘Configure’ in order to get CMake to begin analyzing and listing the options for the build.

There will be some errors, but that's okay! We need to set up CMake in order to build the project files, so the errors are just informing you of settings that haven't been set yet.

Make sure ‘Grouped’ and ‘Advanced’ (The checkboxes in the top right) are ticked so as to access a more organized view of the settings as well as ALL the necessary details.

 

In order to add an entry hit the button with the plus sign on it, to the right of the checkboxes.

The CMake settings group remains mostly unchanged except for the addition of:

CMAKE_MODULE_PATH, which is a PATH-type variable that needs to point to the location of the CMake module files in the SFML root directory of your installation. It will follow this format:

“*SFMLDirectory*\CMake\Modules”

The settings under SFML provide the location of the various libraries and include files of SFML and are similar to the integration of a library into a project using an IDE, so the process should be familiar.

In this situation I’ve left the dynamic library locations unfilled, as we are not building a DLL (dynamic-link library) version of SFGUI.

For a static library of SFGUI, we require the static libraries of SFML. Pretty straightforward.

Obviously, replace the SFML directory with wherever the root directory of SFML is on your own system.


This is what your settings under ‘Ungrouped Entries’, ‘OPENGL’ and ‘SFGUI’ should look like:


After all this is set up, hit Configure once again to check whether there are any errors. If there are none, hit Generate, choose the compiler toolchain you’d prefer to use (SFGUI requires C++11 features, so a reasonably recent version of Visual C++, 2013 or newer, seems to be necessary), and the process should be done shortly.

This generates the project files in the SFGUI directory. Open this project in the IDE of your choice and build it. In my build setup there were still some errors, but the process succeeded. I have yet to see if these errors will affect me in some way. Hopefully not.

After this, the library should be found in the directory of SFGUI, under the ‘lib’ subfolder.

Link this into your project environments as you usually would, after specifying the include directories as always.
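
A quick way to verify that the include directories and the newly built static library are wired up correctly is a tiny program that does nothing but construct the sfg::SFGUI object (remember that SFML also has to be linked, since SFGUI depends on it):

// Minimal link check: if this compiles and links, SFGUI and its SFML
// dependency are set up correctly.
#include <SFGUI/SFGUI.hpp>

int main() {
    sfg::SFGUI sfgui; // the renderer object every SFGUI program creates
    return 0;
}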

Hope this helps!

Converting from Decimal To Binary

I was on StackOverflow the other day and saw a question posed about how one might convert from Decimal to Binary, when the initial information is stored in a string. It seemed like a fun little program to take a whack at, so I did. I’ve posted my answer as well as the code solution below:

Image obtained here:

http://pictures.4ever.eu/cartoons/binary-code-161219

The original question and my answer can be found here:

http://stackoverflow.com/questions/34381002/is-there-a-way-to-convert-a-number-represented-as-a-string-to-its-binary-equiv/34381419#34381419

————————————————————-

Okay, let’s break down the process you require here (this is only one of an infinite number of ways to do it).

1) Conversion of a number represented as a string type into an integer type.

2) Conversion of the intermediary integer type into a binary number which is held in another string type. (judging by the return type of your function, which could just as easily return an integer by the way and save the headache of representing the binary equivalent as a string)

For step 1: Use the standard library function stoi. It does what you might imagine, extracts the numerical data from the string and stores it in an integer.

http://www.cplusplus.com/reference/string/stoi/

std::string numberstr = "123";
int numberint = std::stoi(numberstr);
std::cout << numberint << "\n";

Now you have the number as an integer.

For step 2:

1) This process involves the conversion of a number from base 10 (decimal) to base 2 (binary).

2) Divide the number by 2.

3) Store the remainder and the quotient of this division operation for further use.

4) The remainder becomes part of the binary representation, while the quotient is used as the next dividend.

5) This process repeats until the dividend becomes 1, at which point it too is included in the binary representation.

6) Reverse the string, and voila! You now have the binary representation of a number.

7) If you want to handle negative numbers (which I imagine you might), simply perform a check before the conversion to see if the converted integer is negative, and set a flag to true if it is.

8) Check this flag before reversing; if it is set, add a negative sign to the end of the string so that it ends up at the front after the reversal.

The final function looks like this:

#include <algorithm>   // std::reverse
#include <cstdlib>     // std::abs
#include <sstream>     // std::ostringstream
#include <string>

std::string str_to_bin(const std::string& str)
{
    std::string binarystr = ""; // Output string

    int remainder;
    int numberint = std::stoi(str);
    bool flagnegative = false;

    // Remember the sign and work with the absolute value
    if (numberint < 0)
    {
        numberint = std::abs(numberint);
        flagnegative = true;
    }
    // If the number is 0, don't perform the conversion; simply return "0"
    if (numberint == 0)
    {
        binarystr = "0";
        return binarystr;
    }

    // Repeatedly divide by 2, appending each remainder (least significant bit first)
    while (numberint != 1)
    {
        remainder = numberint % 2;
        numberint /= 2;
        std::ostringstream convert; // stream used for the conversion
        convert << remainder;       // append the textual representation of 'remainder'
        binarystr += convert.str();
    }
    std::ostringstream final;
    final << numberint;             // the last quotient (1) becomes the leading bit once reversed
    binarystr += final.str();

    // The sign goes on the end now so that it sits at the front after reversing
    if (flagnegative)
        binarystr += "-";
    std::reverse(binarystr.begin(), binarystr.end());
    return binarystr;
}