Interactive Material and Element system from Breath of the Wild

One of the games that came out recently that surprised me with the clever design and depth of its gameplay systems was The Legend of Zelda: Breath of the Wild.


It started after I watched the GDC 2017 talk given by the three lead developers of BotW, in particular the section given by the lead programmer. That talk blew my mind when I first saw it.

The simplicity of the system, and the potential that it possesses, make this seem like something every game going forward that wants a fully interactive world should have, in my opinion.

I started thinking about how you might implement something like this in Unreal, and moreover how you might implement it in a way that allows for ease of use and scales as you add more Materials and Elements.

This also coincided with the Unreal Summer Game Jam, and I decided to take the opportunity to create a small game/proof of concept for this system.

This is what I came up with:

[gif: demo of the prototype in action]
The rules of this system are stated quite simply and don’t leave much room for error:

1. Materials can interact with Elements
2. Elements can interact with other Elements
3. Materials cannot interact with other Materials

The system as I implemented it in Unreal has a few basic pieces:

1) Material components – Registered as a C++ base class, this is subclassed into various Material types like WoodMaterial, MetalMaterial, etc. in Blueprints.

[Image: Material component]
2) Element components – Registered as a C++ base class, this is subclassed into various Element types like FireElement, WaterElement, etc. in Blueprints.

[Image: Element component]

3) Material/Element states – Every component has a state enumeration variable, “Active” by default, that represents the material/element after exposure to some element, e.g. “Burning”, “Wet”, “Electrocuted”, etc. (These are Material state examples; I haven’t quite found a use for Element states yet beyond “Active” and “Disabled”, but I’m sure they’ll come in handy at some point.)

4) Material and Element types – Another enumeration variable used to identify various types of elements and materials. This variable is chosen at design time.

[Image: State, Type and Class properties in the details panel]
5) Material Class/Element Class – Another variable chosen at design time, this tells the system which subclass of Material/Element to actually use/spawn. This is also why there is a ‘CreatedMaterial’ or ‘CreatedElement’ variable: the component in the details panel is only of the base Material/Element type, and the actual component that you need is created at runtime and stored in ‘CreatedMaterial’ or ‘CreatedElement’ respectively.

I’m fairly certain there are better ways to do this that aren’t as hacky, but it worked for me.

It’s also a bit unintuitive to have to select both a ‘Wood’ type and a ‘WoodMaterial’ material class; you’d usually expect to do one or the other, but not both. However, the system needs both values, in different places, to function properly.

It’s probably something you could abstract away with some boilerplate if-checks; I just didn’t find it necessary to implement for a prototype.

An unexpected benefit of doing it in this somewhat unintuitive way is that you’d never need to change your Material or Element component classes when you add new types of Materials/Elements. If you were to do if-checks based on the ‘Type’ being selected, say, then you’d need to add a new if-check for every type of material/element you add, which can quickly get out of hand. The same would apply vice-versa if you were using the Class instead.

This approach does introduce a greater chance of user error: leaving either of those two properties unselected will cause silent errors that are hard to find. But if that’s a tradeoff you can live with, more power to you.
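To make points 1) through 5) concrete, here is a rough sketch of what the Material component’s C++ base class could look like. This is not my exact code; the names (EMaterialType, EInteractionState, UInteractiveMaterialComponent and so on) are illustrative, and the Element component mirrors this with an EElementType, its own Class/Created pair, and matching events.

// Illustrative sketch only: names and types are assumptions, not the project's actual code.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Templates/SubclassOf.h"
#include "InteractiveMaterialComponent.generated.h"

// 4) Type enums, chosen at design time in the details panel.
UENUM(BlueprintType)
enum class EMaterialType : uint8 { Wood, Metal, Stone };

UENUM(BlueprintType)
enum class EElementType : uint8 { Fire, Water, Electricity };

// 3) States; "Active" is the default.
UENUM(BlueprintType)
enum class EInteractionState : uint8 { Active, Burning, Wet, Electrocuted, Disabled };

// 1) The Material base component; Blueprint subclasses (WoodMaterial, MetalMaterial, ...) derive from it.
UCLASS(Blueprintable, ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UInteractiveMaterialComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
    EMaterialType MaterialType = EMaterialType::Wood;

    UPROPERTY(BlueprintReadWrite, Category = "Interaction")
    EInteractionState State = EInteractionState::Active;

    // 5) Which Blueprint subclass to actually spawn at runtime (e.g. via NewObject in BeginPlay)...
    UPROPERTY(EditAnywhere, Category = "Interaction")
    TSubclassOf<UInteractiveMaterialComponent> MaterialClass;

    // ...and where the runtime-created component is stored.
    UPROPERTY(BlueprintReadOnly, Category = "Interaction")
    UInteractiveMaterialComponent* CreatedMaterial = nullptr;

    // The 'OnStateChange' event mentioned later in this post: Blueprint subclasses
    // implement the response to another object's Element (rule 1).
    UFUNCTION(BlueprintImplementableEvent, Category = "Interaction")
    void OnElementStateChange(EElementType SourceElement, EInteractionState SourceState);
};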

6) Blueprint subclasses of Material/Element – Defining the actual Material/Element behaviors in BPs lets designers use some handy flow-control systems that exist in BP, and allows for intuitive, scalable scripting of the interactions between different Materials and Elements.

A picture is worth a thousand words, so here’s what the BP of the wood Material component looks like:

[Image: the wood Material component's Blueprint graph]

What’s happening in the gif is that the Player is tagged as a wood Material, and walks into a volume tagged as a fire Element.

This causes them to catch on fire (as wood in fire typically does), and once that happens the player is set to a ‘burning/on fire’ state.


This allows different objects to own Material or Element components and have a type (wood for a Material, fire for an Element, say), and then lets designers implement, in Blueprints, the response behavior for when an Element and a Material interact.

Having state values associated with both Materials and Elements also means you can script additional or altogether different interactions based on the state of the Material/Element in question. For example:

A wood Material in its normal Active state does nothing on contact with water, but a wood Material in a ‘Burning’ state has the fire removed.
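Written out as C++ for readability (the real behavior lives in the wood Blueprint shown above), that logic is essentially a switch on the incoming Element followed by a branch on the wood's current state. The types come from the illustrative sketch earlier, not from my actual project:

#include "InteractiveMaterialComponent.h" // the sketch header from earlier (name assumed)

// Illustrative only: how a wood Material could respond to an incoming Element.
void HandleWoodElementInteraction(UInteractiveMaterialComponent& Wood, EElementType SourceElement)
{
    switch (SourceElement)
    {
    case EElementType::Fire:
        // Dry wood catches fire and enters the Burning state.
        if (Wood.State == EInteractionState::Active)
        {
            Wood.State = EInteractionState::Burning;
            // ...spawn the fire effect on the owning actor here
        }
        break;

    case EElementType::Water:
        // Water does nothing to dry wood, but puts out a burning one.
        if (Wood.State == EInteractionState::Burning)
        {
            Wood.State = EInteractionState::Active;
            // ...remove the fire effect here
        }
        break;

    default:
        break;
    }
}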


It also uses the neat visual flow control of the ‘Switch-On-Enum’ nodes to keep things simple and compartmentalized for designers.

The functions of the Material/Element components are triggered by the BeginOverlap events of colliders, which looks something like this:

[Image: hooking the collider's BeginOverlap event to the Material/Element components]

Something I did for ease of use was to create a BP class that has a collider, a Static/Skeletal Mesh or Particle System, and a Material/Element component. Designers can then easily create new interactive objects by inheriting from this BP, switching around the mesh/particle system, and selecting different Material/Element types.

This works okay for demo purposes, but in a production environment you’d probably make those C++ base classes too, as there’s nothing happening in the interactive item base classes that couldn’t easily be transcribed to code.

So essentially, any class that wants to use the Material/Element components just needs to have a collider, hook up that collider’s BeginOverlap events to the Material/Element ‘OnStateChange’ events, and provide the appropriate arguments from the object that caused the overlap.

Here’s what that looks like for a typical interactive object:

[Image: overlap event wiring for a typical interactive object]
This is an Element item, which is why it checks for both Element and Material state changes from the other object. A Material item would only check for Element state changes. (remember the three rules!)
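For reference, here is a hedged C++ sketch of that wiring for an Element item, reusing the illustrative names from before and assuming a UInteractiveElementComponent that mirrors the Material sketch (with ElementType, State, CreatedElement and OnMaterialStateChange/OnElementStateChange events). In the project itself this is all done in Blueprints, as shown above.

// Illustrative sketch only: class and member names are assumptions.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Components/StaticMeshComponent.h"
#include "InteractiveElementComponent.h"   // assumed mirror of the Material sketch
#include "InteractiveMaterialComponent.h"  // header from the earlier sketch
#include "InteractiveElementItem.generated.h"

UCLASS()
class AInteractiveElementItem : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(VisibleAnywhere) UBoxComponent* Collider;
    UPROPERTY(VisibleAnywhere) UStaticMeshComponent* Mesh;
    UPROPERTY(VisibleAnywhere) UInteractiveElementComponent* Element;

    AInteractiveElementItem()
    {
        Collider = CreateDefaultSubobject<UBoxComponent>(TEXT("Collider"));
        RootComponent = Collider;
        Mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
        Mesh->SetupAttachment(Collider);
        Element = CreateDefaultSubobject<UInteractiveElementComponent>(TEXT("Element"));
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Collider->OnComponentBeginOverlap.AddDynamic(this, &AInteractiveElementItem::OnColliderBeginOverlap);
    }

    UFUNCTION()
    void OnColliderBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                                UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                                bool bFromSweep, const FHitResult& SweepResult)
    {
        if (!OtherActor || OtherActor == this || !Element->CreatedElement) return;

        // Rule 1: Elements react to Materials on the other object.
        if (auto* OtherMat = OtherActor->FindComponentByClass<UInteractiveMaterialComponent>())
        {
            Element->CreatedElement->OnMaterialStateChange(OtherMat->MaterialType, OtherMat->State);
        }
        // Rule 2: Elements also react to other Elements.
        if (auto* OtherElem = OtherActor->FindComponentByClass<UInteractiveElementComponent>())
        {
            Element->CreatedElement->OnElementStateChange(OtherElem->ElementType, OtherElem->State);
        }
    }
};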

 

A final tidbit, though it doesn’t have much to do with the interactive Material/Element system per se: how to get particles to spawn all over a skeletal mesh. This is an invaluable part of the Cascade particle system and, in my opinion, necessary to sell the effect of things interacting.

[Image: Cascade emitter setup for the skeletal-mesh fire effect]
The circled Cascade emitter is where all the magic is; the details panel in the lower left is where you choose the parameter name that will point to the target actor of this particle effect.

When you want to actually create the effect, it’s just a matter of spawning the appropriate particle system and setting that parameter, using the name you assigned within the particle system earlier, to the actor that you want covered in fire/water/etc.

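In code form, that step could look something like the sketch below. The function and parameter names (CoverActorInEffect, “TargetActor”) are placeholders; SpawnEmitterAttached and SetActorParameter are the engine calls doing the actual work, and “TargetActor” has to match whatever name you picked inside Cascade.

#include "Kismet/GameplayStatics.h"
#include "Particles/ParticleSystem.h"
#include "Particles/ParticleSystemComponent.h"

// Spawn an effect attached to an actor and point its actor parameter at that actor.
void CoverActorInEffect(AActor* Target, UParticleSystem* EffectTemplate)
{
    if (!Target || !EffectTemplate) return;

    UParticleSystemComponent* Effect = UGameplayStatics::SpawnEmitterAttached(
        EffectTemplate,
        Target->GetRootComponent());

    if (Effect)
    {
        // Must match the parameter name chosen in the Cascade emitter's details panel.
        Effect->SetActorParameter(TEXT("TargetActor"), Target);
    }
}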

That’s it for this post. I hope it provided some insight into how to make objects feel more interactive and get more of that lovely ‘game juice’ into your games, the kind of feel Nintendo manages to deliver time and again.

Hope this is helpful!

I’m on Twitter and you can reach me @nightmask3 if you need any help or want any clarification.


Fluid Simulation using SPH and OpenCL

Here’s a video of a fluid simulation I made:

This post is going to talk about Fluid Simulation using Smoothed Particle Hydrodynamics (SPH), implemented using OpenCL 1.2.

If you don’t want to read any of this and get right to the code, here it is in “SPH_v1”.

This post is not intended to be a tutorial, but a demonstration of my implementation, though I will include links to the sources I used and hopefully that will prove helpful to someone else.

SPH:

Wikipedia defines SPH as “a computational method used for simulating the dynamics of continuum media”, which is a fancy way of saying that it is an algorithm that can be used to model anything that flows or has fluid-like behavior. (and probably some other stuff too, but that description covers most of it)

The method was first introduced in two papers from 1977, one by Gingold and Monaghan and the other by Lucy.

The paper you’ll need to read in order to understand its application to fluid simulation in video games/interactive media is the one by M. Müller, D. Charypar, and M. Gross, which can be found here.

For some background on fluid simulation in general: there are two broad approaches through which a fluid medium is described and hence simulated.

  1. Eulerian approach: This treats the fluid as a grid, with the resolution of the grid defining how many points of the field are sampled and thus the quality of the resulting simulation. It is simple to implement, and algorithms like the Shallow Water Equations use it to great effect while running cheaply. The limitations, however, are in imposing boundary conditions for grid-based solutions and in the requirement of a small timestep for the simulation not to “explode”.
  2. Lagrangian approach: This treats the fluid as a collection of discrete particles, where each particle has its own mass, position and velocity. The solver performs an all-pairs interaction force calculation, modeling two forces and using the principle of superposition (read: adding them together) to arrive at the final force acting on each particle. These forces are the force due to pressure and the force due to viscosity. Surface tension and external forces like gravity can also be included in the calculation to allow for interaction with the fluid system.

The Müller paper also describes surface tension, but this implementation does not include it. Understanding SPH requires a few concepts like smoothing kernels and the Navier-Stokes equations, but if you want a source that skips all that and directly describes the code needed to implement it, here’s another link that I found extremely helpful.
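For reference, here is a plain CPU sketch of what that all-pairs step computes: density and pressure from the poly6 kernel, then pressure and viscosity forces from the spiky gradient and viscosity Laplacian, as in the Müller paper. The struct layout and constants are illustrative rather than the ones in my OpenCL kernels, and the forces are force densities, so an integrator would divide by density to get acceleration.

#include <cmath>
#include <vector>

// Illustrative CPU reference of the all-pairs SPH step (not my OpenCL code).
struct Particle {
    float x[3];        // position
    float v[3];        // velocity
    float f[3];        // accumulated force density
    float density;
    float pressure;
};

const float PI        = 3.14159265f;
const float h         = 0.0457f;   // smoothing radius
const float mass      = 0.02f;     // particle mass
const float restRho   = 998.29f;   // rest density (water)
const float stiffness = 3.0f;      // gas constant k in p = k * (rho - rho0)
const float mu        = 3.5f;      // viscosity coefficient

void computeDensityPressure(std::vector<Particle>& ps)
{
    const float poly6 = 315.0f / (64.0f * PI * std::pow(h, 9));
    for (auto& pi : ps) {
        pi.density = 0.0f;
        for (const auto& pj : ps) {
            float r2 = 0.0f;
            for (int k = 0; k < 3; ++k) {
                const float d = pi.x[k] - pj.x[k];
                r2 += d * d;
            }
            if (r2 < h * h)
                pi.density += mass * poly6 * std::pow(h * h - r2, 3);
        }
        pi.pressure = stiffness * (pi.density - restRho);
    }
}

void computeForces(std::vector<Particle>& ps)
{
    const float spikyGrad = -45.0f / (PI * std::pow(h, 6));
    const float viscLap   =  45.0f / (PI * std::pow(h, 6));
    for (auto& pi : ps) {
        pi.f[0] = pi.f[1] = pi.f[2] = 0.0f;
        for (const auto& pj : ps) {
            if (&pi == &pj) continue;
            float d[3], r2 = 0.0f;
            for (int k = 0; k < 3; ++k) {
                d[k] = pi.x[k] - pj.x[k];
                r2 += d[k] * d[k];
            }
            if (r2 >= h * h || r2 == 0.0f) continue;
            const float r = std::sqrt(r2);
            // Pressure force density (spiky kernel gradient), symmetrised as in the paper.
            const float fp = -mass * (pi.pressure + pj.pressure) / (2.0f * pj.density)
                             * spikyGrad * (h - r) * (h - r);
            // Viscosity force density (viscosity kernel Laplacian).
            const float fv = mu * mass / pj.density * viscLap * (h - r);
            for (int k = 0; k < 3; ++k) {
                pi.f[k] += fp * d[k] / r;              // along the line between particles
                pi.f[k] += fv * (pj.v[k] - pi.v[k]);   // towards the neighbour's velocity
            }
        }
        pi.f[1] += pi.density * -9.81f; // gravity as an external body force
    }
}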

OpenCL:

OpenCL is a compute API and kernel language that runs code on the GPU (and other devices) to implement functionality that benefits from parallel execution. The functions that run on the device are called “kernels”, and many instances of a kernel execute in parallel across the GPU’s processors, organized into work-groups. The hardware specifics are quite complicated, but suffice to say that it’s kind of like a shader that isn’t meant to render anything, but rather to perform computations that involve a lot of math, which GPUs excel at.

Incidentally, fluid simulations require a lot of math, making them a prime candidate for algorithms that would benefit from parallel execution.

I’ve chosen OpenCL as opposed to the alternative (NVIDIA’s proprietary language CUDA) because I wanted a portable solution that wouldn’t be locked to any single vendor. However, that decision also dictated my choice of OpenCL version (1.2), as that is the latest version NVIDIA supports across all their hardware (for reference, OpenCL is at v2.2 right now).
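To give a flavour of the host side, here is a minimal OpenCL 1.2 sketch using the legacy C++ bindings (cl.hpp; the header name varies by SDK). It picks the first GPU, builds a trivial kernel that only integrates positions by velocity (not the full SPH step), runs it over a buffer of particles, and reads the result back. Real code needs error handling throughout.

#include <CL/cl.hpp>   // legacy OpenCL 1.2 C++ bindings; header name varies by SDK
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // Pick the first platform and the first GPU device on it.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    if (platforms.empty()) { std::cerr << "No OpenCL platforms found\n"; return 1; }

    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);
    if (devices.empty()) { std::cerr << "No GPU devices found\n"; return 1; }

    cl::Context context(devices);
    cl::CommandQueue queue(context, devices[0]);

    // A trivial kernel: integrate particle positions by velocity * dt.
    const std::string src = R"CLC(
        __kernel void integrate(__global float4* pos,
                                __global const float4* vel,
                                const float dt)
        {
            const int i = get_global_id(0);
            pos[i] += vel[i] * dt;
        }
    )CLC";

    cl::Program program(context, src);
    if (program.build(devices) != CL_SUCCESS) {
        std::cerr << program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(devices[0]) << "\n";
        return 1;
    }

    // Host-side particle data: xyzw per particle, everything moving along +x.
    const size_t n = 1024;
    std::vector<float> pos(4 * n, 0.0f);
    std::vector<float> vel(4 * n, 0.0f);
    for (size_t i = 0; i < n; ++i) vel[4 * i] = 1.0f;

    cl::Buffer posBuf(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                      sizeof(float) * 4 * n, pos.data());
    cl::Buffer velBuf(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                      sizeof(float) * 4 * n, vel.data());

    cl::Kernel kernel(program, "integrate");
    kernel.setArg(0, posBuf);
    kernel.setArg(1, velBuf);
    kernel.setArg(2, 0.016f);

    // One work-item per particle.
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(n), cl::NullRange);
    queue.enqueueReadBuffer(posBuf, CL_TRUE, 0, sizeof(float) * 4 * n, pos.data());

    std::cout << "x of particle 0 after one step: " << pos[0] << "\n";
    return 0;
}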

The sources I used in order to learn OpenCL are:

  1. https://simpleopencl.blogspot.com/2013/06/tutorial-simple-start-with-opencl-and-c.html
  2. http://enja.org/2010/07/20/adventures-in-opencl-part-1-5-cpp-bindings/
  3. https://github.com/enjalot/EnjaParticles
  4. https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf

It can be a bit of a headache to get OpenCL working, but the result is worth it. The maximum number of particles I could manage with all calculations on the CPU (at 30 FPS or above) was around 1K, but once I switched all computations to the GPU I was able to max out at around 16K particles while maintaining an appreciable framerate. (On my GitHub page it says 4K, but that was on an older PC; right now I am running an i7 with 16GB of RAM and a GTX 970 with 3.5GB of VRAM.)

Areas for improvement:

  1. My implementation still uses a brute-force all-pairs interaction force calculation, which is an obvious candidate for optimization using spatial partitioning of some sort.
  2. I was looking into extending this into 3D and implementing a grid hashing solution.

 

Unlit

ABOUT:

I worked on a team of 4 programmers (including myself) to build a 2.5D platformer called ‘Unlit’ over a period of 4 months. The engine we built is called the Whisky Engine and is written in C++ with a few libraries such as OpenGL, GLM and SFML. The engine is built on a component-based design.

MY ROLE:

  1. Physics programming:
    a. Implemented the physics engine.
    b. Used the separating axis theorem (SAT) to implement AABB colliders (a minimal overlap test is sketched after this list).
    c. Also implemented colliders for spheres and planes, as well as raycasting.
  2. Gameplay programming:
    a. Implemented, tested and fine-tuned player input.
    b. Unlit is a platformer, so most of the gameplay revolved around input and how the player can traverse the world.
  3. Level design:
    a. Implemented 3 out of the 4 levels that we had in the final release of the game. Did so using our custom built level editor.
  4. Environment Artist:
    a. Modeled and textured all assets except for the main character. Used Blender, 3DS Max and Photoshop.
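As mentioned in 1b, here is a minimal sketch of the overlap test. For axis-aligned boxes the separating axis theorem reduces to interval overlap checks along the three coordinate axes; the types here are illustrative, not the Whisky Engine's actual ones.

// Illustrative AABB type; not the Whisky Engine's actual collider class.
struct AABB {
    float min[3];
    float max[3];
};

// Returns true if the boxes overlap. If the intervals fail to overlap on any
// axis, that axis is a separating axis and the boxes are disjoint.
bool Intersects(const AABB& a, const AABB& b)
{
    for (int axis = 0; axis < 3; ++axis)
    {
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false; // separating axis found
    }
    return true; // no separating axis exists
}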

CREDITS:

1) Egemen Koku: Tools Programmer/Engine Programmer
2) Lisa Sturm: Producer/UI Programmer/Gameplay Programmer
3) Sai Narayan: Physics Programmer/Level Designer/Gameplay Programmer
4) Volkan Ilbeyli: Graphics Programmer/Engine Programmer

SCREENSHOTS:


Converting from Decimal To Binary

I was on StackOverflow the other day and saw a question posed about how one might convert from Decimal to Binary, when the initial information is stored in a string. It seemed like a fun little program to take a whack at, so I did. I’ve posted my answer as well as the code solution below:


The original question and my answer can be found here:

http://stackoverflow.com/questions/34381002/is-there-a-way-to-convert-a-number-represented-as-a-string-to-its-binary-equiv/34381419#34381419

————————————————————-

Okay, let’s break down the process you require here. (This is only one of an infinite number of ways to do it.)

1) Conversion of a number represented as a string type into an integer type.

2) Conversion of the intermediary integer type into a binary number which is held in another string type. (judging by the return type of your function, which could just as easily return an integer by the way and save the headache of representing the binary equivalent as a string)

For step 1: Use the standard library function std::stoi. It does what you might imagine: it extracts the numerical data from the string and stores it in an integer.

http://www.cplusplus.com/reference/string/stoi/

std::string numberstr = "123";
int numberint = std::stoi(numberstr);
std::cout << numberint << "\n";

Now you have the number as an integer.

For step 2:

1) This process involves the conversion of a number from base 10 (decimal) to base 2 (binary).

2) Divide the number by 2.

3) Store the remainder and the quotient of this division operation for further use.

4) The remainder becomes part of the binary representation, while the quotient is used as the next dividend.

5) This process repeats until the dividend becomes 1, at which point it too is included in the binary representation.

6) Reverse the string, and voila! You now have the binary representation of a number.

7) If you want to handle negative numbers (which I imagine you might), simply perform a check before the conversion to see if the converted integer is negative, and set a flag to true if it is.

8) Check this flag and, if it is set, append a negative sign to the end of the string before reversing (so it ends up at the front).
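As a quick worked example, take 13: 13 / 2 gives quotient 6, remainder 1; 6 / 2 gives 3, remainder 0; 3 / 2 gives 1, remainder 1. The dividend is now 1, so it is appended as well. The collected digits are "1011" (least significant first), which reversed gives "1101", the binary representation of 13.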

The final function looks like this:

#include <algorithm>  // std::reverse
#include <cstdlib>    // std::abs
#include <sstream>    // std::ostringstream
#include <string>

std::string str_to_bin(const std::string& str)
{
    std::string binarystr = ""; // Output string

    int remainder;
    int numberint = std::stoi(str);
    bool flagnegative = false;
    // If negative, remember the sign and work with the absolute value
    if (numberint < 0)
    {
        numberint = std::abs(numberint);
        flagnegative = true;
    }
    // If number is 0, don't perform conversion, simply return "0"
    if (numberint == 0)
    {
        binarystr = "0";
        return binarystr;
    }

    // Repeatedly divide by 2, collecting remainders (least significant bit first)
    while (numberint != 1)
    {
        remainder = numberint % 2;
        numberint /= 2;
        std::ostringstream convert; // stream used for the conversion
        convert << remainder;       // insert the textual representation of 'remainder'
        binarystr += convert.str();
    }
    std::ostringstream final;
    final << numberint;             // the last (or rather first, once reversed) binary digit
    binarystr += final.str();
    if (flagnegative == true)
        binarystr += "-";           // ends up at the front after the reverse
    std::reverse(binarystr.begin(), binarystr.end());
    return binarystr;
}

Procedural Terrain Generation

Hello there!

It’s been a good long while since I’ve last posted, but I’ve been incredibly busy with college. My finals are going on right now, and I’m almost done with my bachelor’s degree in computer science.

I’m going to start by just putting this incredibly pretty terrain here:

[Image: terrain rendered from a heightmap generated by the program]

If you’re willing to read it to the end, this post will teach you how to create this sort of terrain using C++.

I’m aware there are free programs that can achieve the same result, such as L3DT (which is actually used in this project), and that they are much better and more convenient than this humble program. But I’d like to imagine this simplifies procedural terrain generation a little by reducing the complexity needed to operate L3DT. If anyone is a beginner to procedural generation like me, perhaps this will be of help to them as well.

This is a simple console program that allows the user to create heightmaps. A heightmap is a file that can store surface elevation values, which is a complicated way of saying it stores height information. This height information can be used to model terrain in 3D game engines and some 3D modelling software.

What I’ve created was actually done for my final-year project. To give credit where it’s due, it’s mainly based on a paper written by Jonathan Roberts and Ke Chen (Senior Members, IEEE):

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6853332

The above is a link to the abstract and the IEEE page of the paper.

My project can be summarized simply as:

An attempt to implement the Perlin noise algorithm and machine learning concepts and use them to develop a tool that allows even a layman to generate procedural terrain that can be exported as a Photoshop RAW file and used as a heightmap in any game engine such as Unreal or Unity.

To break that down:

1) Perlin Noise: A nifty algorithm created by Ken Perlin, this allows for the creation of noise that has a natural, organic and non-repetitive appearance, which gives it a resemblance to a variety of natural phenomena such as clouds, fire, and terrain. This means Perlin noise textures can be used as heightmaps. I have used the excellent open-source library libnoise to implement Perlin noise.

https://mrl.nyu.edu/~perlin/ : Website of Ken Perlin.

2) Machine Learning: A method used to impart a pseudo-artificial intelligence to the program, by allowing it to take in input (the user’s feedback) which alters the output. Every iteration of feedback improves the results of the previous stage. It does this by identifying a favorable content category and constraining future output to that category.

3) Exporting the heightmap as a Photoshop RAW file: Using the amazing tool L3DT (Large 3D Terrain Generator) to view the output of the terrain generation, the user can decide whether it matches their needs. The user is presented with the option to finalize the output, modify it in some way (more mountainous, flatter, hillier), or accept it but request a refinement. When they choose to finalize, the heightmap is converted to Photoshop RAW format (again using a script in L3DT, all credit to Aaron Bundy). This RAW file can then be imported into level-design software such as Unreal or Unity.

Apart from the above, I’ve also used the SFML libraries to create the GUI that displays the noise texture for user approval.

So in order to make this work, you’d have to integrate the libnoise and SFML libraries into your project environment to successfully compile the source code given below.

Feel free to clone the repository I made on GitHub using the following link:

https://github.com/Nightmask3/Perlin-Heightmap-Builder

[Images: Perlin noise textures generated by the program]

Above are pictures of a Perlin noise texture generated by the program.

Once the heightmap is to the user’s satisfaction, it can be exported in Photoshop RAW format and then imported into game engines like Unreal and Unity to create terrain.
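To give a sense of the core Perlin step, here is a minimal sketch using libnoise, assuming the library has been integrated as described above. This is not the project's actual code: it simply samples a Perlin module over a grid and writes the heights as raw 16-bit values, whereas the real tool drives the Photoshop RAW export through L3DT's script. The include path may differ depending on how libnoise is installed.

#include <noise/noise.h>  // libnoise; path depends on your install
#include <cstdint>
#include <fstream>
#include <vector>

int main()
{
    const int width  = 513;
    const int height = 513;

    noise::module::Perlin perlin;
    perlin.SetOctaveCount(6);    // more octaves = more fine detail
    perlin.SetFrequency(1.0);
    perlin.SetPersistence(0.5);  // how much each successive octave contributes

    std::vector<std::uint16_t> heightmap(width * height);

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Sample the noise field; GetValue returns values roughly in [-1, 1].
            double n = perlin.GetValue(x / double(width) * 4.0,
                                       y / double(height) * 4.0,
                                       0.5);
            // Clamp and remap to the 16-bit range used for the heightmap.
            if (n < -1.0) n = -1.0;
            if (n >  1.0) n =  1.0;
            heightmap[y * width + x] =
                static_cast<std::uint16_t>((n * 0.5 + 0.5) * 65535.0);
        }
    }

    // Write raw 16-bit heights (importable as a RAW heightmap).
    std::ofstream out("heightmap.raw", std::ios::binary);
    out.write(reinterpret_cast<const char*>(heightmap.data()),
              heightmap.size() * sizeof(std::uint16_t));
    return 0;
}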

This is a student-made project and it’s bound to have some bugs here and there, but I believe it is a good first foray into procedural generation, which is a topic I shall definitely be pursuing in the future.