Videos of my gameplay programming work

This was developed during a student project at DigiPen. The game is an exploration-adventure game, and in this video the character interacts with the Jellyfish to solve a puzzle.

I implemented everything seen in this video, including:

  • Scripting in Blueprints for the interaction between player input and the player's animation state machine
  • Scripting for the creature's response to the player, and its movement AI
  • Scripting for the blue energy jump pad
  • All character/creature/environment art, VFX, and animations seen in the video

Part of the same project as the previous video. Here the player completes a quest for an NPC by filling a pot with water, and solves a puzzle by bringing a keystone to the lock-pedestal.

I implemented everything seen in the video, including:

  • Scripting for interaction with the chest and the objects the player can pick up (the pot and the keystone)
  • All items that can be carried inherit from a parent Blueprint containing the behavior that detects whether or not the player has attempted a pickup; a child Blueprint then defines what the object actually does when picked up, as well as its static mesh, collider, etc. (see the sketch after this list)
  • Scripting, animations and animation state machine handling for the pickup and throwing actions
  • Scripting for interaction with NPC and the quest
  • Scripting for the puzzle
  • All art and VFX in the video, except for the trail particles the player leaves behind.
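
For illustration, here's the same parent/child pickup pattern sketched in plain C++ rather than Blueprints; the class and method names are hypothetical stand-ins, not the actual Blueprint names:

#include <iostream>

// Parent: owns the shared pickup-detection behavior.
class PickupItem {
public:
    virtual ~PickupItem() = default;

    // Shared logic: detect whether a pickup attempt should succeed,
    // then hand off to the child-specific hook.
    void TryPickup(float distanceToPlayer) {
        if (distanceToPlayer <= PickupRange)
            OnPickedUp();
    }

protected:
    // Each child defines what the object actually does when picked up
    // (and would also supply its own static mesh, collider, etc.).
    virtual void OnPickedUp() = 0;

    float PickupRange = 150.0f; // illustrative range, in Unreal units
};

// Children: define only the behavior that differs per object.
class WaterPot : public PickupItem {
protected:
    void OnPickedUp() override { std::cout << "Pot picked up; it can now be filled with water\n"; }
};

class Keystone : public PickupItem {
protected:
    void OnPickedUp() override { std::cout << "Keystone picked up; it can be placed on the lock-pedestal\n"; }
};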

An early prototype I made for a character movement ability. The player creates an ice/crystal path in front of them, using ‘mana’ as a resource to build it. The initial idea revolved around this as the short-term gameplay loop.

  • Implemented the scripting for path creation on player input; it was built primarily for a controller, using the analog sticks to direct the path while holding down a button to create it.
  • Used the SplineMeshComponent provided in Unreal to define a spline that is updated dynamically in-game as the player extends the path. The component takes the spline and deforms a mesh along it, which gives the path its smooth and sinuous appearance (see the sketch after this list).
  • The player could fly whenever they were above this path, allowing them to use it like an actual path/shortcut through the world.
  • Also implemented the survival gameplay seen in the HUD bars: an Exposure stat that increased whenever the player was directly exposed to the sun, and a Hydration stat that depleted over time and required the player to drink water or else sustain damage to Health.
  • All art and VFX seen were created by me.
  • Did NOT implement the character model or animations, though; those are provided by Unreal in the Third Person Template.
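
Since the original was built in Blueprints, here is only a rough C++ sketch of what extending the path could look like with Unreal's spline components; the actor, its members (Spline, PathMesh, SegmentMeshes), and the method name are assumptions for illustration:

#include "Components/SplineComponent.h"
#include "Components/SplineMeshComponent.h"

// Hypothetical actor method: assumes members USplineComponent* Spline,
// UStaticMesh* PathMesh, and TArray<USplineMeshComponent*> SegmentMeshes.
void AIcePathActor::ExtendPath(const FVector& NewPointWorldPos)
{
    // Append the point the player is currently steering toward.
    Spline->AddSplinePoint(NewPointWorldPos, ESplineCoordinateSpace::World);

    const int32 LastIndex = Spline->GetNumberOfSplinePoints() - 2;
    if (LastIndex < 0) return;

    // Spawn a mesh segment that will be deformed between the last two points.
    USplineMeshComponent* Segment = NewObject<USplineMeshComponent>(this);
    Segment->SetMobility(EComponentMobility::Movable);
    Segment->SetStaticMesh(PathMesh);
    Segment->RegisterComponent();

    FVector StartPos, StartTangent, EndPos, EndTangent;
    Spline->GetLocationAndTangentAtSplinePoint(LastIndex, StartPos, StartTangent,
                                               ESplineCoordinateSpace::Local);
    Spline->GetLocationAndTangentAtSplinePoint(LastIndex + 1, EndPos, EndTangent,
                                               ESplineCoordinateSpace::Local);

    // The deformation along the spline between these two points is what
    // gives the path its smooth, sinuous appearance.
    Segment->SetStartAndEnd(StartPos, StartTangent, EndPos, EndTangent);
    SegmentMeshes.Add(Segment);
}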

Fluid Simulation using SPH and OpenCL

Here’s a video of a fluid simulation I made:

This post is going to talk about fluid simulation using Smoothed Particle Hydrodynamics (SPH), implemented using OpenCL 1.2.

If you don’t want to read any of this and would rather get right to the code, here it is in “SPH_v1”.

This post is not intended to be a tutorial, but a demonstration of my implementation; I will, however, include links to the sources I used, which will hopefully prove helpful to someone else.

SPH:

Wikipedia defines SPH as “a computational method used for simulating the dynamics of continuum media”, which is a fancy way of saying it is an algorithm that can be used to model anything that flows or has fluid-like behavior (and probably some other stuff too, but that description covers most of it).

The method was first introduced in two papers, one by Gingold and Monaghan and one by Lucy, both in 1977.

The paper you’ll need to read in order to understand its applications to fluid simulation in video games/interactive media is the one by M. Müller, D. Charypar, and M. Gross, which can be found here.

For some background on fluid simulation in general: there are two main approaches through which a fluid medium is described and hence simulated.

  1. Eulerian approach: This treats the fluid as a grid, with the resolution of the grid defining how many points in the field are sampled and thus the resulting quality of the simulation. This is simple to implement, and algorithms like the Shallow Water Equations make use of it to great effect while running cheap. The limitations, however, are in imposing boundary conditions for grid-based solutions and in the small timestep required to keep the simulation from “exploding”.
  2. Lagrangian approach: This treats the fluid as a collection of discrete particles, where each particle has its own mass, position, and velocity. The solver performs an all-pairs interaction force calculation, modeling two forces and using the principle of superposition (read: adding them together) to arrive at the final force acting on each particle. These forces are the force due to pressure and the force due to viscosity. Surface tension and external forces like gravity can also be included in this calculation to allow for interaction with the fluid system.

The Müller paper also describes surface tension, but this implementation does not include it. Understanding SPH requires a few prerequisites like smoothing kernels and the Navier-Stokes equations, but if you want a source that skips all that and directly describes the code to implement it, here’s another link that I found extremely helpful.
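
To make the all-pairs calculation concrete, here is a minimal 2D sketch in C++ using Müller-style kernels (poly6 for density, the spiky gradient for pressure, and the viscosity Laplacian); the constants are illustrative, and this is not the code from my simulation:

#include <cmath>
#include <vector>

// Illustrative 2D SPH particle state.
struct Particle {
    float x, y;               // position
    float vx, vy;             // velocity
    float density, pressure;
    float fx, fy;             // accumulated force
};

const float PI = 3.14159265f;
const float H = 16.0f;             // smoothing radius
const float MASS = 2.5f;           // particle mass
const float REST_DENSITY = 300.0f;
const float GAS_CONST = 2000.0f;   // k in p = k * (rho - rho0)
const float VISC = 200.0f;         // viscosity coefficient

// 2D normalization factors for the kernels.
const float POLY6      =   4.0f / (PI * std::pow(H, 8.0f));
const float SPIKY_GRAD = -30.0f / (PI * std::pow(H, 5.0f)); // dW/dr
const float VISC_LAP   =  40.0f / (PI * std::pow(H, 5.0f)); // Laplacian

void ComputeDensityPressure(std::vector<Particle>& ps) {
    for (auto& pi : ps) {
        pi.density = 0.0f;
        for (const auto& pj : ps) {
            float dx = pj.x - pi.x, dy = pj.y - pi.y;
            float r2 = dx * dx + dy * dy;
            if (r2 < H * H)
                pi.density += MASS * POLY6 * std::pow(H * H - r2, 3.0f);
        }
        pi.pressure = GAS_CONST * (pi.density - REST_DENSITY);
    }
}

void ComputeForces(std::vector<Particle>& ps) {
    for (auto& pi : ps) {
        pi.fx = 0.0f;
        pi.fy = 0.0f;
        for (const auto& pj : ps) {
            if (&pi == &pj) continue;
            float dx = pi.x - pj.x, dy = pi.y - pj.y;
            float r = std::sqrt(dx * dx + dy * dy);
            if (r >= H || r <= 0.0f) continue;

            // Pressure force (symmetrized): pushes i away from j.
            float gradW = SPIKY_GRAD * (H - r) * (H - r); // negative
            float fp = -MASS * (pi.pressure + pj.pressure)
                       / (2.0f * pj.density) * gradW / r;
            pi.fx += fp * dx;
            pi.fy += fp * dy;

            // Viscosity force: drags neighboring velocities together.
            float fv = VISC * MASS / pj.density * VISC_LAP * (H - r);
            pi.fx += fv * (pj.vx - pi.vx);
            pi.fy += fv * (pj.vy - pi.vy);
        }
        // External gravity, added by superposition (as a force density,
        // so integration uses a = f / density).
        pi.fy += -9.81f * pi.density;
    }
}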

OpenCL:

OpenCL is a compute framework for running code on the GPU, used to implement functionality that benefits from parallel execution. OpenCL functions are called “kernels”, and many instances of a kernel run in parallel across the GPU’s processors, grouped into work-groups. The hardware specifics are quite complicated, but suffice to say that it’s kind of like a shader that isn’t meant to render anything, but rather to perform computations that involve a lot of math, which GPUs excel at.

Incidentally, fluid simulations require a lot of math, making them a prime candidate for algorithms that would benefit from parallel execution.
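
To give a flavor of what this looks like in practice, here is a bare-bones host program using the OpenCL 1.2 C++ bindings (cl.hpp), in the same spirit as the tutorials listed below; it just scales an array on the GPU and is not part of the fluid simulation itself:

#include <CL/cl.hpp>   // OpenCL 1.2 C++ bindings
#include <iostream>
#include <vector>

// The kernel: one work-item per array element.
static const char* kSource = R"CLC(
__kernel void scale(__global float* data, const float factor) {
    int i = get_global_id(0);
    data[i] *= factor;
}
)CLC";

int main() {
    // Pick the first platform and GPU device available.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);

    cl::Context context(devices);
    cl::CommandQueue queue(context, devices[0]);

    // Build the kernel from source at runtime.
    cl::Program program(context, kSource);
    program.build(devices);
    cl::Kernel kernel(program, "scale");

    // Upload data, launch one work-item per element, read results back.
    std::vector<float> data(1024, 1.0f);
    const size_t bytes = sizeof(float) * data.size();
    cl::Buffer buffer(context, CL_MEM_READ_WRITE, bytes);
    queue.enqueueWriteBuffer(buffer, CL_TRUE, 0, bytes, data.data());

    kernel.setArg(0, buffer);
    kernel.setArg(1, 2.0f);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange,
                               cl::NDRange(data.size()), cl::NullRange);

    queue.enqueueReadBuffer(buffer, CL_TRUE, 0, bytes, data.data());
    std::cout << data[0] << "\n"; // expect 2.0
    return 0;
}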

I chose OpenCL over the alternative (NVIDIA’s proprietary language CUDA) because I wanted a portable solution that wouldn’t be locked to any single vendor. However, that decision also dictated which version of OpenCL to use (v1.2), as that is the latest version NVIDIA supports across all their hardware (for reference, OpenCL is at v2.2 right now).

The sources I used in order to learn OpenCL are:

  1. https://simpleopencl.blogspot.com/2013/06/tutorial-simple-start-with-opencl-and-c.html
  2. http://enja.org/2010/07/20/adventures-in-opencl-part-1-5-cpp-bindings/
  3. https://github.com/enjalot/EnjaParticles
  4. https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf

It can be a bit of a headache to get OpenCL working, but the result is worth it: the maximum number of particles I could simulate with all calculations on the CPU (at 30 FPS or above) was around 1K, but once I moved all computations to the GPU I was able to max out at around 16K particles while maintaining an appreciable framerate. (My GitHub page says 4K, but that was on an older PC; right now I am running an i7 with 16GB of RAM and a GTX 970 with 3.5GB of VRAM.)

Areas for improvement:

  1. My implementation still uses a brute-force all-pairs interaction force calculation, which is a prime candidate for optimization via spatial partitioning of some sort (see the sketch below).
  2. I was looking into extending this to 3D and implementing a grid-hashing solution.
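
For the curious, here’s a rough 2D sketch of the uniform-grid hashing idea; everything in it is illustrative and not part of the current implementation:

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Bucket particles by grid cell of size H (the smoothing radius), so each
// particle only needs to test the 9 neighboring cells (27 in 3D) instead
// of every other particle.
struct GridHash {
    float cellSize = 16.0f; // typically the smoothing radius H

    // cell key -> indices of the particles inside that cell
    std::unordered_map<int64_t, std::vector<int>> cells;

    // Pack the 2D cell coordinates into a single integer key.
    int64_t Key(float x, float y) const {
        int32_t cx = static_cast<int32_t>(std::floor(x / cellSize));
        int32_t cy = static_cast<int32_t>(std::floor(y / cellSize));
        return (static_cast<int64_t>(cx) << 32) ^ static_cast<uint32_t>(cy);
    }

    void Rebuild(const std::vector<float>& xs, const std::vector<float>& ys) {
        cells.clear();
        for (int i = 0; i < static_cast<int>(xs.size()); ++i)
            cells[Key(xs[i], ys[i])].push_back(i);
    }
};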

Ground Fog in Unreal 4

I’ve been working on a third-person exploration and adventure game called ‘The Nomad’ for the past 6 months, and I want to share some of the techniques I’ve learned in that time.

My previous post dealt with implementing Distance Fog using a Post-Process material.

This time, we are going to explore how to implement Ground Fog in Unreal 4.

Ground Fog is important for a variety of reasons, as the comparison below will show.

Here is the same scene as above, without the Ground Fog.

[Image: Scene without Ground Fog]

A few things you can notice:

  1. The scene still looks okay, but overall lacks any visual complexity.
  2. The color of the sand is now too repetitive and dominates the view.
  3. It is harder to differentiate between the foreground and background, though the distance fog we see helps somewhat.

So, we can see how Ground Fog can add to the overall aesthetic of a level. Let us now proceed to the implementation itself.

[Image: Get UV values for Fog Texture]

This set of material nodes is responsible for raycasting forward a certain distance (ML_Raycast), finding a world position, and scaling that by NoiseSize.

[Image: Raycast Material Function]

This world position is then fed into MF_NormalMaskedVector, which masks the input WorldPosition with the vertex normal to obtain a UV value for the moving fog texture.
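
In plain-code terms, my reading of these two functions amounts to something like the sketch below; this is a hypothetical C++ restatement of the node graph, not the actual material functions:

#include <cmath>

struct Vec3 { float x, y, z; };

// ML_Raycast, roughly: march forward from the camera to a world position.
Vec3 RaycastWorldPos(const Vec3& camPos, const Vec3& viewDir, float distance) {
    return { camPos.x + viewDir.x * distance,
             camPos.y + viewDir.y * distance,
             camPos.z + viewDir.z * distance };
}

// MF_NormalMaskedVector, roughly: drop the world-position component that is
// aligned with the vertex normal, leaving the two components that tile the
// surface, then scale by NoiseSize.
void FogUV(const Vec3& worldPos, const Vec3& normal, float noiseSize,
           float& u, float& v) {
    if (std::fabs(normal.z) > 0.5f)      { u = worldPos.x; v = worldPos.y; }
    else if (std::fabs(normal.y) > 0.5f) { u = worldPos.x; v = worldPos.z; }
    else                                 { u = worldPos.y; v = worldPos.z; }
    u /= noiseSize;
    v /= noiseSize;
}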

[Image: Normal Masked Vector Material Function]

The output of the moving fog is then combined (in my case I use Add; it works for this case but might give weird results otherwise), and then an if statement is used to define a world-Z cutoff for the fog.

If the world Z of the vertex being drawn is less than the cutoff value, we multiply the moving fog color value into the post-process output. You can think of this as a simple conditional check:

if (PixelWorldZ < CutoffValue)
    DisplayFog();

Then, in order to make the fog fade smoothly until the cutoff value is reached, we use another if statement to check the distance between the cutoff Z value and the current pixel world Z value. If the pixel is within the gradient fade range (as defined by GradientRange), then we lerp between the color output of the fog and 1.

if (WorldZCutoff - PixelWorldZ < GradientRange)
    LerpBetweenFogColorAndSceneColor();

If the output is 1, we use only the scene color.
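
Putting both branches together, the whole cutoff-and-fade logic boils down to something like this single-channel C++ sketch (the names are illustrative, not the actual material parameters):

#include <algorithm>

// The returned factor is multiplied into PostProcessInput; a factor of 1
// means "scene color only". fogValue stands in for the moving fog sample.
float GroundFogFactor(float pixelWorldZ, float cutoffZ,
                      float gradientRange, float fogValue)
{
    if (pixelWorldZ >= cutoffZ)
        return 1.0f; // above the cutoff: no fog at all

    // How far below the cutoff we are, normalized over the fade band.
    float t = std::min((cutoffZ - pixelWorldZ) / gradientRange, 1.0f);

    // Lerp from 1 (pure scene color) toward the fog's contribution.
    return 1.0f + (fogValue - 1.0f) * t;
}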

[Image: Gradient and final material output]

The final output of all this is multiplied into the PostProcessInput, and then fed into the Emissive Color.

This material uses the Post-Process material domain. Assign it to a Post-Process Volume, and you should be good to go.

Hope this is helpful to someone!

OSVR HDK Unboxing

So I finally got my hands on the OSVR headset from Razer. I’m not going to lie, it took its sweet time getting here (about 2 months longer than the initially projected date), but the sheer degree of awesome possessed by a VR headset helped negate all my irritation at the delay.

In other words, I was squealing like a child when I picked it up and began the unboxing. I hope my excitement translates to you, gentle reader!


About now is when the squealing reached supersonic levels.

The HDK itself seems to be divided into:

  1. The headset
  2. HDMI cables for the headset
  3. Some sort of audio cables that I haven’t yet figured out how to use
  4. The power supply
  5. The positional tracking kit (i.e. the Camera, a tripod stand and relevant cabling)
  6. A cleaning brush
  7. A hub for all the cables to connect into


And there you have it!
The OSVR HDK unboxed.

The links below are the official GitHub repository of the OSVR project and contain all the instructions I needed to get everything set up and test the headset in an actual VR demo.

Suffice to say, I was blown away.

https://github.com/OSVR

AND

https://github.com/OSVR/OSVR-General/wiki/HDK-Unboxing-and-Getting-Started

I hope this was of help/interest to anyone looking for more info on the OSVR project.

Feel free to contact me at Nightmask3@gmail.com for any help or insight I can provide on being a part of this ambitious new venture!

Thanks!

Happy hacking.