3D Physics Engine using GJK, EPA and Sequential Impulse Solver

GitHub repo can be found here: https://github.com/Nightmask3/Physics-Framework

I had to make a physics engine for one of my classes at DigiPen (CS550 – Physics Simulations).

The requirements of this physics engine were:

1) Have a broadphase collision detection implementation
2) Have a method of resolving collisions
3) Must use three-dimensional rigid body physics

Overview

The engine uses an Entity-Component system that communicates using an Observer pattern.

A good primer on the observer pattern can be found here: http://gameprogrammingpatterns.com/observer.html

Everything in the engine that wishes to listen to events must inherit from the interface/abstract base class Observer.

Everything that wants to send events must own a Subject for that event.
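
A minimal sketch of how such an Observer/Subject pair might look (the Event payload and the method names here are illustrative, not the exact classes from the repo):

#include <algorithm>
#include <vector>

struct Event { int Type = 0; }; // hypothetical event payload

class Observer
{
public:
    virtual ~Observer() = default;
    // Called by any Subject this Observer has subscribed to.
    virtual void OnNotify(const Event& event) = 0;
};

class Subject
{
public:
    void AddObserver(Observer* observer) { Observers.push_back(observer); }
    void RemoveObserver(Observer* observer)
    {
        Observers.erase(std::remove(Observers.begin(), Observers.end(), observer), Observers.end());
    }
    // Broadcast an event to every subscribed Observer.
    void Notify(const Event& event) const
    {
        for (Observer* observer : Observers)
            observer->OnNotify(event);
    }

private:
    std::vector<Observer*> Observers;
};
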
Subjects keep lists of Observers that subscribe and listen for events, and can also send one-time “fire and forget” events to Objects.
This allows for event-driven communication between Observers and Objects (which most things in the engine are).

All the managers in the engine derive from Observer, allowing them to listen to events from GameObjects, the Engine, components, etc.

The component hierarchy is fairly typical:

Integration:

By default, the engine uses semi-implicit Euler integration. RK4 and Verlet integrators were also implemented, but neither ended up being required.
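
For reference, here is a minimal sketch of semi-implicit Euler (my own illustration, not the engine's actual integrator): velocity is updated first, and the position update then uses the new velocity.

struct Body
{
    float px, py, pz; // position
    float vx, vy, vz; // velocity
    float fx, fy, fz; // accumulated force for this step
    float invMass;    // 0 for static bodies
};

void IntegrateSemiImplicitEuler(Body& b, float dt)
{
    // v += (F / m) * dt -- velocity first...
    b.vx += b.fx * b.invMass * dt;
    b.vy += b.fy * b.invMass * dt;
    b.vz += b.fz * b.invMass * dt;
    // x += v_new * dt -- ...then position, using the *updated* velocity.
    // (Explicit Euler would use the old velocity here.)
    b.px += b.vx * dt;
    b.py += b.vy * dt;
    b.pz += b.vz * dt;
    b.fx = b.fy = b.fz = 0.0f; // clear the force accumulator
}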

There are some caveats to integration, such as colliders marked ‘Static’ not being integrated. This is a symptom of poor component design: objects that don’t really need physics (namely static objects) are still required to own a Physics component, because it contains some data that is necessary for collision detection.

An improvement would be to decouple the Collider and Physics components so that a Collider could function without its owner also holding a Physics component.

Collision Detection:

The Gilbert-Johnson-Keerthi (GJK) algorithm is used to provide a Boolean check of whether the two convex shapes being tested are intersecting. The strength of GJK is that it can be adapted entirely through the choice of support function (or support map) being used.

Originally, I was using the naïve method: looping through all vertices, taking the dot product of each vertex with the search direction, and keeping the vertex furthest along that direction. However, for a shape with spatial symmetry about all three axes (i.e. a cube or a sphere), it is possible to use an ‘Extents’ approach instead, which simply multiplies the sign of the search direction against the half-size of the cube (or the radius of the sphere). Since this engine only implements cubes, this method proved sufficient. Both support maps are sketched below.
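
A hedged sketch of both support maps, with my own types and names rather than the repo's:

#include <cmath>

struct Vector3 { float x, y, z; };

inline float Dot(const Vector3& a, const Vector3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Naive support map: scan every vertex for the one furthest along 'dir'.
Vector3 SupportNaive(const Vector3* vertices, int count, const Vector3& dir)
{
    Vector3 best = vertices[0];
    float bestDot = Dot(best, dir);
    for (int i = 1; i < count; ++i)
    {
        float d = Dot(vertices[i], dir);
        if (d > bestDot) { bestDot = d; best = vertices[i]; }
    }
    return best;
}

// 'Extents' support map for a cube centered at the origin: pick the corner
// whose sign matches the search direction on each axis.
Vector3 SupportCube(float halfSize, const Vector3& dir)
{
    return Vector3{ std::copysign(halfSize, dir.x),
                    std::copysign(halfSize, dir.y),
                    std::copysign(halfSize, dir.z) };
}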

EPA:

If/when GJK finds a simplex that encloses the origin, it passes the simplex on to EPA, which attempts to locate the point on the boundary of the Minkowski difference closest to the origin; in theory, this gives the contact point.

In practice, due to the limitations of 32-bit floating-point math, EPA is very prone to numerical error; as Dirk Gregorius has stated on various forums, it is a method he does not personally like. I began to see why when I encountered situations whose blame falls squarely on EPA’s imprecision and its inability to return a consistent contact point, or one guaranteed to be close to the center of mass.

It’s my belief that persistent manifolds are, in fact, a way to make up for the imprecision EPA introduces into the resolution process, though error in the impulse solver exacerbates the issue as well.

When EPA returns a point within a tolerable distance of the origin, the contact point is extrapolated from it using barycentric projection. An iteration limit of 50 prevents EPA from cycling infinitely.
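
A sketch of that barycentric step, using the standard closest-point-on-triangle formulation (my own helper types, not the engine's code): compute the barycentric weights of the point EPA found with respect to the closest triangle on the polytope, then blend the triangle's source support points with those weights.

struct Vec3
{
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};

inline float Dot3(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Barycentric coordinates (u, v, w) of point p with respect to triangle (a, b, c).
void Barycentric(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c,
                 float& u, float& v, float& w)
{
    Vec3 v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = Dot3(v0, v0), d01 = Dot3(v0, v1), d11 = Dot3(v1, v1);
    float d20 = Dot3(v2, v0), d21 = Dot3(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    v = (d11 * d20 - d01 * d21) / denom;
    w = (d00 * d21 - d01 * d20) / denom;
    u = 1.0f - v - w;
    // The contact point on shape A is then u*A0 + v*A1 + w*A2, where A0..A2 are
    // the support points on A that produced the triangle's Minkowski vertices.
}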

Collision Resolution:


The contact that is generated by EPA is registered using a contact/collision constraint.

This constraint is solved using a method given by Erin Catto in his paper “Iterative Dynamics with Temporal Coherence”. It is a sequential impulse method, which comes down to a linear complementarity problem that must be solved by a numerical solver of some type.

The type of solver used in the paper, and here, is a projected Gauss-Seidel solver, which is shown to converge faster for this use case than the Jacobi method.

The solver performs velocity correction to ensure the velocity-level constraint is satisfied, so that objects no longer move into each other; penetration is resolved using Baumgarte stabilization, which massages the constraint forces to get them to do “virtual work” that pushes the objects apart.

There is also a slop term on the Baumgarte stabilization that allows objects to penetrate a little before the Baumgarte term is applied.
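
Putting the pieces together, a single linear-only solver iteration for one contact might look roughly like this. It is a simplified sketch: angular terms, warm starting, and the real bookkeeping are omitted, and the baumgarte/slop values are illustrative.

#include <algorithm>

// State for one contact's non-penetration constraint, reduced to 1D along
// the contact normal for brevity.
struct Contact1D
{
    float penetration = 0.0f;        // positive when the bodies overlap
    float accumulatedImpulse = 0.0f; // total normal impulse applied so far
};

// One projected Gauss-Seidel iteration. vnA/vnB are the bodies' velocities
// projected onto the contact normal (normal points from A to B).
void SolveContact(float& vnA, float& vnB, float invMassA, float invMassB,
                  Contact1D& c, float dt)
{
    const float baumgarte = 0.2f; // bias factor (illustrative value)
    const float slop = 0.01f;     // penetration allowed before the bias kicks in

    // Baumgarte bias: feed a fraction of the (slop-adjusted) penetration
    // back in as a separating target velocity.
    float bias = (baumgarte / dt) * std::max(0.0f, c.penetration - slop);

    float relVel = vnB - vnA;               // negative when the bodies approach
    float invEffMass = invMassA + invMassB; // J M^-1 J^T for the linear-only case
    float lambda = (bias - relVel) / invEffMass;

    // Projection: the total impulse may only ever push, never pull.
    float oldAccum = c.accumulatedImpulse;
    c.accumulatedImpulse = std::max(0.0f, oldAccum + lambda);
    lambda = c.accumulatedImpulse - oldAccum;

    vnA -= lambda * invMassA;
    vnB += lambda * invMassB;
}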

Issues:

Lack of persistent manifolds and numerical issues with GJK and EPA mean that there are still some penetrations that can happen in some situations.

Additionally, jitter can itself cause penetrations. I plan on continuing the implementation of this engine at least until I can obtain stable stacking, both to teach myself and to have that on my portfolio.

Conclusion:

I am pretty satisfied with the final state of my engine. It’s far from perfect, but considering that last year I spent upwards of three months trying to implement OBBs using SAT and I failed utterly, the fact that I managed to implement a working engine with detection and response is honestly enough for me.


Videos of my gameplay programming work

This was developed during a student project at DigiPen, the game is an exploration-adventure game, and the character is interacting with the Jellyfish in order to solve a puzzle.

I implemented everything seen in this video, including:

  • Scripting in Blueprints for interaction between player input and player animation state machine
  • Scripting for the creature response to the player and movement AI
  • Scripting for the blue energy jump pad
  • All Character/creature/environment art, VFX, animations seen in the video

Part of the same project as the previous video, here the player completes a quest for an NPC by filling a pot with some water, and completes a puzzle by bringing a keystone to the lock-pedestal.

I implemented everything seen in the video including:

  • Scripting for interaction with the chest and objects the player can pick up (pot and the keystone)
  • All items that can be carried inherit from a Parent Blueprint with all the behavior to detect whether or not the player has attempted to pick it up; a Child Blueprint can then define what the object actually does when the player picks it up, as well as the static mesh/collider etc.
  • Scripting, animations and animation state machine handling for the pickup and throwing actions
  • Scripting for interaction with NPC and the quest
  • Scripting for the puzzle
  • All art, VFX in the video except for the trail particles the player leaves behind.

An early prototype I made for a character movement ability. The player is creating this ice/crystal path in front of them using ‘mana’ as a resource to build it. The initial idea revolved around this as the short-term gameplay loop.

  • Implemented the scripting for the path creation on player input; it was built primarily for a controller and involved using the analog sticks to direct the path while holding down a button to create it.
  • Used the SplineMeshComponent provided in Unreal to define a spline that is updated dynamically in-game as the player extends the path. The component takes the spline and deforms a mesh along it, which gives the path its smooth and sinuous appearance.
  • The player could fly whenever they were above this path, allowing them to use it like an actual path/shortcut through the world.
  • Also implemented the survival gameplay seen in the HUD bars: an Exposure stat that increased whenever the player was directly exposed to the sun, and a Hydration stat that depleted over time and required the player to drink water or else sustain damage to Health.
  • All art and VFX seen are created by me.
  • Did NOT implement the character model or animations, though; those are provided by Unreal in the Third Person Template.

 

Fluid Simulation using SPH and OpenCL

Here’s a video of a fluid simulation I made:

This post is going to talk about fluid simulation using Smoothed Particle Hydrodynamics (SPH), implemented using OpenCL 1.2.

If you don’t want to read any of this and would rather get right to the code, here it is in “SPH_v1”.

This post is not intended to be a tutorial, but a demonstration of my implementation, though I will include links to the sources I used and hopefully that will prove helpful to someone else.

SPH:

Wikipedia defines SPH as “a computational method used for simulating the dynamics of continuum media”, which is a fancy way of saying that it is an algorithm that can be used to model anything that flows or has fluid-like behavior. (and probably some other stuff too, but that description covers most of it)

The method was first introduced by two papers, Gingold and Monaghan (1977) and Lucy (1977).

The paper that you’ll need to read in order to understand its applications to fluid simulation in video games/interactive media is the one by M. Müller, D. Charypar, and M. Gross, which can be found here.

For some background on fluid simulation in general: there are broadly two approaches through which a fluid medium is described and hence simulated.

  1. Eulerian approach: This treats the fluid as a grid, with the grid’s resolution defining how many points in the field are sampled and thus the resulting quality of the simulation. This is simple to implement, and methods like the Shallow Water Equations make use of it to great effect while running cheap. The limitations, however, are in imposing boundary conditions for grid-based solutions, and the requirement of a small timestep for the simulation not to “explode”.
  2. Lagrangian: This treats the fluid as a collection of discrete particles where each particle has its own mass, position and velocity. The solver performs an all-pairs interaction force calculation, modeling two forces and using the principle of superposition (read : adding them together) in order to arrive at the final force acting on each particle. These forces are the force due to pressure and the force due to viscosity. Surface tension and external forces like gravity can also be included in this force calculation in order to allow for interaction with the fluid system.

The Müller paper describes surface tension, but this implementation does not include it. Understanding SPH requires a few concepts like smoothing kernels and the Navier-Stokes equations, but if you want a source that skips all that and directly describes the code to implement it, here’s another link that I found extremely helpful.
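
To make the kernel idea concrete, here is a small sketch (my own simplification, not the repo's code) of the poly6 kernel from the Müller paper and the brute-force density pass it plugs into:

#include <vector>

struct Particle
{
    float x, y, z; // position
    float mass;
    float density; // computed each step
};

// Poly6 smoothing kernel from the Muller et al. paper; r2 is the squared
// distance between two particles, h is the smoothing radius.
float Poly6(float r2, float h)
{
    float h2 = h * h;
    if (r2 > h2) return 0.0f; // outside the kernel's support
    const float kPi = 3.14159265f;
    float h9 = h2 * h2 * h2 * h2 * h; // h^9
    float coeff = 315.0f / (64.0f * kPi * h9);
    float diff = h2 - r2;
    return coeff * diff * diff * diff;
}

// Brute-force (all-pairs) density estimation, as described above.
void ComputeDensities(std::vector<Particle>& particles, float h)
{
    for (Particle& pi : particles)
    {
        pi.density = 0.0f;
        for (const Particle& pj : particles)
        {
            float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            pi.density += pj.mass * Poly6(dx * dx + dy * dy + dz * dz, h);
        }
    }
}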

OpenCL:

OpenCL is a compute-layer language that runs on the GPU in order to implement functionality that benefits from being executed in parallel. OpenCL programs are called “kernels”, and many instances of a kernel run in parallel, in work-groups, across the GPU’s processors. The hardware specifics are quite complicated, but suffice to say it’s kind of like a shader that isn’t meant to render anything, instead performing computations that involve a lot of math, which GPUs excel at.

Incidentally, fluid simulations require a lot of math, making them a prime candidate for algorithms that would benefit from parallel execution.
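
As a taste of what driving the GPU looks like, here is a minimal OpenCL 1.2 host program using the C++ bindings (cl.hpp); it just squares a buffer of floats in parallel. Error handling is omitted, and this is a generic sketch rather than the simulation's actual setup.

#include <CL/cl.hpp>
#include <vector>

const char* kSource =
    "__kernel void square(__global float* data)  \n"
    "{                                           \n"
    "    size_t i = get_global_id(0);            \n"
    "    data[i] = data[i] * data[i];            \n"
    "}                                           \n";

int main()
{
    // Grab a GPU context and its first device.
    cl::Context context(CL_DEVICE_TYPE_GPU);
    std::vector<cl::Device> devices = context.getInfo<CL_CONTEXT_DEVICES>();
    cl::CommandQueue queue(context, devices[0]);

    // Build the kernel from source at runtime.
    cl::Program program(context, kSource);
    program.build(devices);
    cl::Kernel kernel(program, "square");

    // Upload data, run one work-item per element, read the results back.
    std::vector<float> host(1024, 2.0f);
    cl::Buffer buffer(context, CL_MEM_READ_WRITE, sizeof(float) * host.size());
    queue.enqueueWriteBuffer(buffer, CL_TRUE, 0, sizeof(float) * host.size(), host.data());
    kernel.setArg(0, buffer);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(host.size()), cl::NullRange);
    queue.enqueueReadBuffer(buffer, CL_TRUE, 0, sizeof(float) * host.size(), host.data());
    return 0;
}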

I’ve chosen OpenCL as opposed to the alternative (NVIDIA’s proprietary language CUDA) because I wanted a portable solution that wouldn’t be locked to any single vendor. However, that decision also dictated which version of OpenCL I use (v1.2), as that is the latest version of OpenCL that NVIDIA supports across all their hardware (for reference, OpenCL is at v2.2 right now).

The sources I used in order to learn OpenCL are:

  1. https://simpleopencl.blogspot.com/2013/06/tutorial-simple-start-with-opencl-and-c.html
  2. http://enja.org/2010/07/20/adventures-in-opencl-part-1-5-cpp-bindings/
  3. https://github.com/enjalot/EnjaParticles
  4. https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf

It can be a bit of a headache to get OpenCL to work, but the result is worth it: the maximum number of particles I could achieve with all calculations on the CPU (at 30 or above FPS) was around 1K, but once I switched all computations to the GPU I was able to max out at around 16K particles while maintaining an appreciable framerate. (On my GitHub page it says 4K, but that was on an older PC; right now I am running an i7 with 16GB of RAM and a GTX 970 with 3.5GB of VRAM.)

Areas for improvement:

  1. My implementation still uses a brute-force all-pairs interaction force calculation, which is an obvious candidate for optimization via spatial partitioning of some sort.
  2. I was looking into extending this into 3D and implementing a grid hashing solution (a sketch of the idea follows).
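
One common way to do that grid hashing, sketched under my own assumptions (the constants are the well-known primes from Teschner et al.'s spatial hashing paper):

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Quantize a position to a grid cell (cellSize ~ the smoothing radius h)
// and hash the cell to a key; neighbour queries then only need to visit
// the 27 surrounding cells instead of every particle.
uint64_t CellKey(float x, float y, float z, float cellSize)
{
    int64_t cx = (int64_t)std::floor(x / cellSize);
    int64_t cy = (int64_t)std::floor(y / cellSize);
    int64_t cz = (int64_t)std::floor(z / cellSize);
    return (uint64_t)((cx * 73856093LL) ^ (cy * 19349663LL) ^ (cz * 83492791LL));
}

// Bucket particle indices by cell key; rebuilt once per frame.
std::unordered_map<uint64_t, std::vector<int>> BuildGrid(
    const float* xs, const float* ys, const float* zs, int count, float cellSize)
{
    std::unordered_map<uint64_t, std::vector<int>> grid;
    for (int i = 0; i < count; ++i)
        grid[CellKey(xs[i], ys[i], zs[i], cellSize)].push_back(i);
    return grid;
}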

 

Unlit

ABOUT:

I worked on a team of 4 programmers (including myself) to build a 2.5D Platformer called ‘Unlit’ in a time period of 4 months. The engine we built is called the Whisky Engine and is written using C++ with a few libraries like OpenGL, GLM, SFML etc. The engine is built on a component-based design.

MY ROLE:

  1. Physics programming:
    a. Implemented the physics engine.
    b. Used SAT to implement AABB colliders
    c. Also implemented were colliders for spheres and planes, and raycasting.
  2. Gameplay programming:
    a. Implemented, tested and fine tuned player input.
    b. Unlit is a platformer, so most of the gameplay revolved around input and how the player can traverse the world.
  3. Level design:
    a. Implemented 3 out of the 4 levels that we had in the final release of the game. Did so using our custom built level editor.
  4. Environment Artist:
    a. Modeled and textured all assets except for the main character. Used Blender, 3DS Max and Photoshop.

CREDITS:

1) Egemen Koku: Tools Programmer/Engine Programmer
2) Lisa Sturm: Producer/UI Programmer/Gameplay Programmer
3) Sai Narayan: Physics Programmer/Level Designer/Gameplay Programmer
4) Volkan Ilbeyli: Graphics Programmer/Engine Programmer

SCREENSHOTS:


A* algorithm demonstration

Embedded above is a video demonstrating the features of pathfinding and terrain analysis I implemented. These include:

1) A* pathfinding
2) Dijkstra's algorithm pathfinding
3) Catmull-Rom splines for path smoothing
4) Pathfinding with different heuristics (Euclidean, Chebyshev, Octile, Manhattan) (sketched below)
5) Terrain Analysis (Cover, Visibility, Visible to Player)
6) Pathfinding with Fog of War
7) Pathfinding with influence map of terrain analysis
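
For reference, the four heuristics in (4) can each be written in a couple of lines; these are generic grid versions of my own, not the assignment framework's code:

#include <algorithm>
#include <cmath>
#include <cstdlib>

// dx, dy are the coordinate deltas between a node and the goal.
float Euclidean(int dx, int dy) { return std::sqrt(float(dx * dx + dy * dy)); }
float Manhattan(int dx, int dy) { return float(std::abs(dx) + std::abs(dy)); }
float Chebyshev(int dx, int dy) { return float(std::max(std::abs(dx), std::abs(dy))); }

// Octile distance: diagonal steps cost sqrt(2), straight steps cost 1.
float Octile(int dx, int dy)
{
    int adx = std::abs(dx), ady = std::abs(dy);
    return float(adx + ady) + (1.41421356f - 2.0f) * float(std::min(adx, ady));
}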

The framework for control of the character (State Machine Language) is owned by Steve Rabin; the character in the video (Tiny) and all other assets belong to Microsoft.

Additionally, the project itself is the property of the DigiPen® corporation.
DigiPen® is a registered trademark of DigiPen (USA) Corp.
Copyright © 2016 DigiPen (USA) Corp. and its owners. All Rights Reserved.

Cover image for this post taken from here:
http://mnemstudio.org/ai/path/images/a-star1b.gif

 

Building SFGUI into a static library

Recently I had to integrate SFGUI into one of my projects, and I discovered that this particular library did not come with precompiled headers or prebuilt static/dynamic libraries.

This was a new challenge for me, and it took me a bit to wrap my head around how to get it done. Up until now I had never needed to build a library from source, but the process itself, when examined, is relatively simple.

I decided that I would build the source into a static library for ease of use. Dynamic linking is nice, but an unnecessary complication for my purpose.

Building a static library requires two things:

  1. Using CMake to generate the project files.
  2. Building the project files themselves in order to obtain the static library.

CMake should require no introduction; it’s a popular meta-build tool. It operates by generating Makefiles (or IDE project files), the exact specifics of which are still arcane to me, but the process of writing one shall be a subsequent blog post, so check this space later!

Building the project files can be done in an IDE like Visual Studio (which is what I used) or a compiler suite like GCC.

Download CMake here:
https://cmake.org/

Download SFGUI source here:
http://sfgui.sfml-dev.de/

It goes without saying that SFGUI has a dependency on SFML, so SFML will need to be integrated into your development environment in order for SFGUI to function, as well as made visible to the CMake build process in order for the build to work.

CMake’s GUI makes things pretty simple:

[Screenshot: CMake GUI with source and build paths]

The Source and Destination of the build are specified as the same directory here. I preferred this so as to have my newly built library easily found in the same place as the SFGUI source, but this isn’t necessary.

Once these paths are specified, hit ‘Configure’ in order to get CMake to begin analyzing and listing the options for the build.

There will be some errors, but that’s okay! We need to set up CMake in order to build the project files, so the errors are just informing you of settings that haven’t been set yet.

Make sure ‘Grouped’ and ‘Advanced’ (The checkboxes in the top right) are ticked so as to access a more organized view of the settings as well as ALL the necessary details.

 

In order to add an entry hit the button with the plus sign on it, to the right of the checkboxes.

The CMake settings group remains mostly unchanged except for the addition of:

CMAKE_MODULE_PATH, a PATH-type variable, which needs to point to the location of the CMake modules in the SFML root directory of your installation. It will follow this format:

“*SFMLDirectory*\CMake\Modules”

The settings under SFML provide the location of the various libraries and include files of SFML and are similar to the integration of a library into a project using an IDE, so the process should be familiar.

In this situation I’ve left the dynamic library locations unfilled as we are not building a dll (dynamic linked library) for SFGUI.

For a static library of SFGUI, we require the static libraries of SFML. Pretty straightforward.

Obviously, replace the SFML directory with wherever the root directory of SFML is on your own system.

[Screenshot: SFML path settings in CMake]

This is what your settings under ‘Ungrouped Entries’, ‘OPENGL’ and ‘SFGUI’ should look like:

[Screenshot: ‘Ungrouped Entries’, ‘OPENGL’ and ‘SFGUI’ settings]

After all this is set up, hit Configure once again to check whether any errors remain; if not, hit Generate and choose the compiler toolchain you’d prefer to use (SFGUI requires C++11 features, so a version of Visual C++ ahead of 2013 seems to be necessary). The process should be done shortly.

This generates the project files in the SFGUI directory. Open this project in the IDE of your choice and build it. In my build setup there were still some errors, but the process succeeded. I have yet to see if these errors will affect me in some way. Hopefully not.

After this, the library should be found in the directory of SFGUI, under the ‘lib’ subfolder.

Link this into your project environments as you usually would, after specifying the include directories as always.
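
A quick way to verify the link worked is a tiny smoke test. Note that the static-link define and the exact library name vary by SFGUI version and build configuration, so treat this as a sketch:

// Assumes the SFGUI (and SFML) include/lib directories are already configured.
#define SFGUI_STATIC // only when linking the static library; name may vary by version
#include <SFGUI/SFGUI.hpp>

int main()
{
    sfg::SFGUI gui; // constructing the main SFGUI object exercises the library
    return 0;
}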

Hope this helps!

Dynamic Mesh Spawning in UE4

In the eternal struggle to accumulate reputation on StackOverflow, I noticed a question that seemed easily solvable (by the standards of SO at least).

The link to the question:

http://stackoverflow.com/questions/29628032/how-do-you-dynamically-render-quads-in-ue4

The asker was requesting a concrete example of how to create a static mesh/quad (as he puts it) at runtime.
It was also required that this quad be rendered with a texture on it.

Textures in Unreal 4 are applied through the use of Materials (which allow for the composition of multiple textures into a single one when applied to an object, and are a really handy tool).

When I first saw this question, it seemed quite cut and dry. Unreal Blueprints make it simple to achieve a lot of tasks without resorting to actual coding, and I made a suggestion towards the same.

However, the asker said he had yet to see a single actual code example of this, either in the documentation or on the Unreal 4 AnswerHub, and after some Googling I found that this was indeed true.

This pretty much hooked me right in. I had to figure it out, and I felt bad for giving the asker the same vague high-level answer that he had found everywhere else. I resolved to have something in the way of an actual code example to help him on his way.

2 hours later, this was the final result:

A quad created at runtime with a texture on it.

[Image: textured quad spawned at runtime]

I set it up so that an instance of my class ‘DynamicMeshSpawner’ would be spawned every time I hit ‘P’ on the keyboard. When the instance of this class is created, it calls the constructor, which spawns the cube with the material applied. I did the class instance spawning stuff in Blueprints using the SpawnActor node.

[Screenshot: Blueprint SpawnActor setup]

ADynamicMeshSpawner::ADynamicMeshSpawner()
{
     // Set this actor to call Tick() every frame.  You can turn this off to improve performance if you don't need it.
     PrimaryActorTick.bCanEverTick = true;

    // Using a SphereComponent is not particularly necessary or relevant, but the cube refused to spawn without a root component to attach to, or so I surmise. Yay Unreal. =/
    USphereComponent* CubeComponent = CreateDefaultSubobject<USphereComponent>(TEXT("RootComponent"));
    RootComponent = CubeComponent;
    CubeComponent->InitSphereRadius(40.0f);
    CubeComponent->SetCollisionProfileName(TEXT("Pawn"));

    // Create and position a mesh component so we can see where our cube is
    UStaticMeshComponent* CubeVisual = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("VisualRepresentation"));
    CubeVisual->AttachTo(RootComponent);
    static ConstructorHelpers::FObjectFinder<UStaticMesh> SphereVisualAsset(TEXT("StaticMesh'/Game/StarterContent/Shapes/Shape_Cube.Shape_Cube'"));
    if (SphereVisualAsset.Succeeded())
    {
        CubeVisual->SetStaticMesh(SphereVisualAsset.Object);
        CubeVisual->SetRelativeLocation(FVector(-200.0f, 0.0f, 100.0f));
        CubeVisual->SetWorldScale3D(FVector(2.0f));
    }
    // Create a material to be applied on the StaticMeshComponent
    static ConstructorHelpers::FObjectFinder<UMaterial> Material(TEXT("Material'/Game/StarterContent/Materials/M_Tech_Hex_Tile_Pulse.M_Tech_Hex_Tile_Pulse'"));

    if (Material.Object != NULL)
    {
        TheMaterial = (UMaterial*)Material.Object;
    }

    CubeVisual->SetMaterial(0, TheMaterial);
}

This was the header file:

UCLASS()
class MYPROJECT_API ADynamicMeshSpawner : public AActor
{
    GENERATED_BODY()

public: 
    // Sets default values for this actor's properties
    ADynamicMeshSpawner();

    // Called when the game starts or when spawned
    virtual void BeginPlay() override;

    // Called every frame
    virtual void Tick( float DeltaSeconds ) override;
    // Pointer to the material that needs to be used
    UMaterial* TheMaterial;
};

That’s pretty much it!

 

Converting from Decimal To Binary

I was on StackOverflow the other day and saw a question posed about how one might convert from Decimal to Binary, when the initial information is stored in a string. It seemed like a fun little program to take a whack at, so I did. I’ve posted my answer as well as the code solution below:

Image obtained here:

http://pictures.4ever.eu/cartoons/binary-code-161219

The original question and my answer can be found here:

http://stackoverflow.com/questions/34381002/is-there-a-way-to-convert-a-number-represented-as-a-string-to-its-binary-equiv/34381419#34381419

————————————————————-

Okay let’s break down the process you require here. (only one of an infinite number of ways to do this)

1) Conversion of a number represented as a string type into an integer type.

2) Conversion of the intermediary integer type into a binary number which is held in another string type. (judging by the return type of your function, which could just as easily return an integer by the way and save the headache of representing the binary equivalent as a string)

For step 1: Use the standard library function std::stoi. It does what you might imagine: extracts the numerical data from the string and stores it in an integer.

http://www.cplusplus.com/reference/string/stoi/

std::string numberstr = "123";
int numberint = std::stoi(numberstr);
std::cout << numberint << "\n";

Now you have the number as an integer.

For step 2:

1) This process involves the conversion of a number from base 10 (decimal) to base 2 (binary).

2) Divide the number by 2.

3) Store the remainder and the quotient of this division operation for further use.

4) The remainder becomes part of the binary representation, while the quotient is used as the next dividend.

5) This process repeats until the dividend becomes 1, at which point it too is included in the binary representation.

6) Reverse the string, and voila! You now have the binary representation of a number.

7) If you want to handle negative numbers (which I imagine you might), simply perform a check before the conversion to see if the converted integer is negative, and set a flag to true if it is.

8) Check this flag, and if it is set, add a negative sign to the end of the string before reversing (it will end up at the front).

The final function looks like this:

#include <algorithm> // std::reverse
#include <cstdlib>   // std::abs
#include <sstream>   // std::ostringstream
#include <string>

std::string str_to_bin(const std::string& str)
{
    std::string binarystr = ""; // Output string

    int remainder;
    int numberint = std::stoi(str);
    bool flagnegative = false;
    // If the number is negative, remember the sign and work with its magnitude
    if (numberint < 0)
    {
        numberint = std::abs(numberint);
        flagnegative = true;
    }
    // If the number is 0, don't perform the conversion, simply return "0"
    if (numberint == 0)
    {
        binarystr = "0";
        return binarystr;
    }

    while (numberint != 1)
    {
        remainder = numberint % 2;
        numberint /= 2;
        std::ostringstream convert; // stream used for the conversion
        convert << remainder;       // insert the textual representation of 'remainder'
        binarystr += convert.str();
    }
    std::ostringstream final;
    final << numberint;    // insert the last (or rather first, once reversed) binary digit
    binarystr += final.str();
    if (flagnegative)
        binarystr += "-";  // will sit at the front after the reverse below
    std::reverse(binarystr.begin(), binarystr.end());
    return binarystr;
}

Procedural Terrain Generation

Hello there!

It’s been a good long while since I last posted, but I’ve been incredibly busy with college. My finals are going on right now, and I’m almost done with my bachelor’s degree in computer science.

I’m going to start by just putting this incredibly pretty terrain here:

[Image: rendered terrain output]

If you’re willing to read it to the end, this post will teach you how to create this sort of terrain using C++.

I’m aware that there are free programs that can achieve the same result, such as L3DT (which is actually used in this project), and that they are much better and more convenient than this humble program. But I’d like to imagine this simplifies procedural terrain generation a little bit by reducing the complexity required to operate L3DT. If anyone is a beginner like me to the topic of procedural generation, perhaps this will be of help to them as well.

This is a simple console program that allows the user to create heightmaps. A heightmap is a file that can store surface elevation values, which is a complicated way of saying it stores height information. This height information can be used to model terrain in 3D game engines and some 3D modelling software.

What I’ve created was actually done for my final-year project. To give credit where it’s due, it’s mainly based on a paper by Jonathan Roberts and Ke Chen (Senior Members, IEEE):

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6853332

The above is a link to the abstract and the IEEE page of the paper.

My project can be summarized simply as:

An attempt to implement the Perlin noise algorithm and machine learning concepts and use them to develop a tool that allows even a layman to generate procedural terrain that can be exported as a Photoshop RAW file and used as a heightmap in any game engine such as Unreal or Unity.

To break that down:

1) Perlin Noise: A nifty algorithm created by Ken Perlin, this allows for the creation of noise that has a natural, organic, and non-repetitive appearance, which gives it a resemblance to a variety of natural objects such as clouds, fire, and terrain. This means Perlin noise textures can be used as heightmaps. I have used the excellent open-source library Libnoise to implement Perlin noise; a small usage example follows after these points.

https://mrl.nyu.edu/~perlin/ : Website of Ken Perlin.

2) Machine Learning: A method used to impart pseudo-artificial intelligence to the program by allowing it to take in input (user feedback) that alters the output. Every iteration of feedback improves on the results of the previous stage, by identifying a favorable content category and constraining future output to that category.

3) Exporting the heightmap as a Photoshop RAW file: The output of the terrain generation is viewed in the amazing tool L3DT (Large 3D Terrain Generator), where the user can decide if it matches their needs. The user is presented with the option to finalize the output, modify it in some way (more mountainous, flatter, hillier), or accept it but request a refinement. When they choose to finalize, the heightmap is converted to Photoshop RAW format (again using a script in L3DT, all credit to Aaron Bundy). This RAW file can then be imported into level-design software such as Unreal or Unity.
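
To give a flavor of the Perlin noise generation mentioned in point 1, here is a small example adapted from Libnoise's own tutorials (it uses the noiseutils helpers that ship with them; the size and bounds values are arbitrary):

#include <noise/noise.h>
#include "noiseutils.h" // helper classes distributed with the Libnoise tutorials

int main()
{
    // A Perlin module is the source of coherent-noise values.
    noise::module::Perlin perlinModule;

    // Sample a 2D slice of the 3D noise field into a heightmap.
    noise::utils::NoiseMap heightMap;
    noise::utils::NoiseMapBuilderPlane builder;
    builder.SetSourceModule(perlinModule);
    builder.SetDestNoiseMap(heightMap);
    builder.SetDestSize(256, 256);         // heightmap resolution
    builder.SetBounds(2.0, 6.0, 1.0, 5.0); // region of the noise field to sample
    builder.Build();

    // heightMap.GetValue(x, y) now returns the elevation at each texel,
    // ready to be written out as a RAW heightmap.
    return 0;
}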

Apart from the above, I’ve also used the SFML libraries to create the GUI that displays the noise texture for user approval.

So in order to make this work, you’d have to integrate the libnoise and SFML libraries into your project environment to successfully compile the source code given below.

Feel free to clone the repository I made on GitHub using the following link:

https://github.com/Nightmask3/Perlin-Heightmap-Builder

[Image: Perlin noise texture generated by the program]

Above are pictures of a Perlin noise texture generated by the program.

Once the heightmap has reached the satisfaction of the user, it can be exported in Photoshop RAW format, upon which it can then be imported into game engines like Unreal and Unity to be used to create terrain.

This is a student made project and it’s bound to have some bugs here and there, but I believe it is a good first foray into procedural generation, which is a topic I shall definitely be pursuing in the future.

Run Length Encoding


Run Length Encoding is a method of data compression which reduces the length of the encoded data by replacing repeated elements in the data sequence with a shorter representation, in order to save space.

To give you an example, quoted directly from a coding problem in the January 2015 issue of OpenSource:

“You are given an input string and asked to write a code snippet to return the run length encoded representation of that string. For example, if you are given the string aaaccefhhh, your function should return the string a3c2e1f1h3.”

Notice that the number after each letter in the output string denotes the number of times that letter repeats in the input sequence. This is Run Length Encoding.

I decided to write this program after a friend challenged me to see if I had the testicular fortitude at coding.

I think it’s fair to say I proved my point. Here’s the program; it’s free to use for whatever purpose you might have in mind. All I ask is credit to the original author if the situation permits it:

import java.io.*;

public class RunLengthEncoding
{
    public static void main(String args[]) throws Exception
    {
        String str;          // The string to be encoded.
        String rlestr = "";  // The final RLE-encoded string.
        Integer rlecount = 1; // Length of the current run.
        String ch;           // The character in the current run.

        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter the String to be converted to RLE Format:\n");
        str = br.readLine();
        if (str.length() == 1) // A single character is trivially a run of one.
            rlestr = str + "1";
        for (int i = 0; i < str.length() - 1; i++)
        {
            ch = Character.toString(str.charAt(i));
            if (str.charAt(i) == str.charAt(i + 1)) // Two consecutive equal letters extend the current run.
            {
                rlecount++;
            }
            else // The run ends here; append the letter and its count.
            {
                rlestr = rlestr + ch;
                rlestr = rlestr + rlecount.toString();
                rlecount = 1;
            }
            if (i == str.length() - 2) // End of input: perform the final append and exit the loop.
            {
                ch = Character.toString(str.charAt(i + 1));
                rlestr = rlestr + ch;
                rlestr = rlestr + rlecount.toString();
                break;
            }
        }
        System.out.println("The RLE encoded String is:");
        System.out.println(rlestr);
    }
}