Level generation I: Layout

After a break working on other things (and playing games for research purposes 🙂 ), around July I started working on level generation, as an exciting way back into development. Exciting does not mean short, however, as I expect the level generator to be quite capable and flexible, instead of just generating a “caves” level, a “dungeon” or a “forest”. As a result, 6 weeks later, I’m nowhere near done, but some parts now work, and that is what this and the following posts will be about.

Parts of level generation

Levels are not generated in a single pass; the process is split into the following (tentative) order:

  • Layout: Passable/impassable areas, rocks, trees, doors, water, open sea, etc. Where creatures can move.
  • Gateways: Staircases, etc. How to connect one level to another
  • Foreground objects: altars, fountains, bookshelves, tables, etc.
  • Traps, puzzles, secret doors
  • Creatures
  • Decoration: Grass, cobwebs, carpets, etc

The order, while not final, should reflect a logical dependency order. Layout comes first; gateways need a layout; foreground objects need a layout and also need to avoid staircases; traps are based on layout and foreground objects; creatures are generated based on the dungeon layout and also need to be aware of traps and maybe foreground objects; and finally decoration is applied to the layout, but needs to know about other elements that exist on the level, such as gateways and foreground objects.

Requirements and use-cases

Example levels should contain:

  • Building interiors (manor, guild hall, temple complex)
  • Buildings outdoors (village, ruins, cabin in the woods)
  • Outdoors environments (forest, desert, sea coast, and generally biome-themed areas)
  • Typical dungeon
  • Cave complex

Zones: The ${dungeon_type} in the ${biome_type}

One of the goals of the level generation system is to be able to utilise multiple generators for a single level, so that level separation is not always artificial.

An example is the first level of a wizard tower in some location. The tower can be located in the swamp (don’t ask me how they built it, it’s probably not too structurally sound), in the snow, near the sea, in the desert, etc. In order to add some sense of exploration, we want the player to wander a bit in the biome to find the tower. So, we need a biome-specific generator, depending on our location. When we find the building, instead of using some sort of stairs to change to a different level, we can generate the first level of the tower embedded within the biome map. So you find the tower, open the door, get in and explore the ground floor of the tower, all in the same level. To achieve this, we need 2 different generators, the biome generator and a “wizard tower ground floor” generator, and we embed the latter into the former, making sure the entrance is accessible and makes sense. From now on, the areas belonging to different generators will be called zones.

Masks: Controlling the shape of zones

Instead of using simplistic rectangles to differentiate between areas, we can use more freeform masks: generated from noise, with the shape cleaned up by removing islands, or even fully custom ones. As a result, we get zones like this:

Coastal map (red) with lighthouse (green)
Outdoors (red), mountain (green), dungeon (blue)
Outdoors (red), mountain (green), temple (blue)

Each zone uses its own generator, and we just need to make sure that the zones can be connected. Zones can be “open” or “closed”. Open zones are ones where the default location is passable, for example outdoors. Closed zones are ones where the default is impassable, such as caverns, dungeons, etc, as these are spaces carved into rock. We have to worry about connecting zones if at least one of them is closed.
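The mask idea can be sketched roughly like this. This is a minimal illustration, not the actual generator: the “noise” here is plain per-cell randomness rather than proper value noise, and `noise_mask`/`keep_largest_island` are made-up names.

```python
import random
from collections import deque

def noise_mask(w, h, fill=0.55, seed=0):
    """Random boolean mask: True = cell belongs to the zone."""
    rng = random.Random(seed)
    return [[rng.random() < fill for _ in range(w)] for _ in range(h)]

def keep_largest_island(mask):
    """Clean up the shape: keep only the largest 4-connected region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood fill this island
                island, q = [], deque([(x, y)])
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    island.append((cx, cy))
                    for nx, ny in ((cx+1,cy),(cx-1,cy),(cx,cy+1),(cx,cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if len(island) > len(best):
                    best = island
    clean = [[False] * w for _ in range(h)]
    for x, y in best:
        clean[y][x] = True
    return clean

mask = keep_largest_island(noise_mask(16, 16, seed=42))
```

Removing the islands is what guarantees that each zone is a single connected region, which makes the later “connect the zones” step much simpler.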

Prefabs: Handcrafted areas in procedural levels

Procedural generation is nice, but sometimes we want to place particular, handcrafted rooms and areas and mix them with procedural elements. Examples:

  • Boss lair: specially designed room, but we shouldn’t know where to find it
  • Fortress entry hall: if we enter the fortress we’re in that room, but we don’t know the rest of the layout
  • Fortress prison. It has a fixed layout, but we don’t know where it is.
  • Glade in the forest. We want a small area in the forest to have a very particular look (special placement of lake, trees, ruins), but we won’t know where to find it.

In the example images on this page, I use a small number of prefab rooms:

Entry hall: When I use this, it’s always the first room to be connected to an adjacent zone, via the door at the top. The door appears exactly at that location, and no other doors are added.

Entry hall

Lair: When I use this, I ensure that it’s never connected to a different zone, so all doors lead somewhere within the same zone. The doors appear exactly at those locations, and no other doors are added.

Lair, in case it wasn’t clear

OpenArea: This is an example of an area that is open and has a few things in it.

Open area

Not fancy, but it serves as a proof of concept.

Configurability

A very important part of the process is being able to configure the generator using simple data structs, loadable from configuration files (or dynamically created configurations). This “heterogeneous” generator was designed with that in mind, so that creating permutations is easy, although there are still quite a few parameters to set up. On this page I’m using 3 presets:

  • The dungeon under the mountain V1: An outdoors map, with a cavern system on the left side, that leads to a dungeon map
  • The dungeon under the mountain V2: An outdoors map, with a cavern system on the left side, that leads to a temple. The temple generator is different from the dungeon generator in V1, and uses a dragon-shaped mask, so that any rooms are forced to be confined within that shape
  • The lighthouse by the sea: An outdoors map, bordering with the sea, that contains the lower level of a lighthouse: a circular tiny “dungeon” which is effectively a few rooms (maybe even one) and a staircase. Players might or might not traverse water to get to the lighthouse.
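A preset like the ones above can be expressed as plain data structs. Here is a hypothetical sketch: the field names, zone names and preset layout are invented for illustration and are not the actual configuration format.

```python
from dataclasses import dataclass, field

@dataclass
class ZoneConfig:
    generator: str            # e.g. "outdoors", "cavern", "temple" (assumed names)
    mask: str = "noise"       # "noise", "rect" or a named custom mask
    open_zone: bool = True    # open (default passable) vs closed (carved into rock)

@dataclass
class LevelPreset:
    name: str
    zones: list = field(default_factory=list)

# a stand-in for "The dungeon under the mountain V2":
# outdoors, a cavern system, and a temple confined to a dragon-shaped mask
preset = LevelPreset("dungeon_under_mountain_v2", [
    ZoneConfig("outdoors"),
    ZoneConfig("cavern", open_zone=False),
    ZoneConfig("temple", mask="dragon_shape", open_zone=False),
])
```

Keeping presets as dumb data like this is what makes it easy to load them from files or build permutations on the fly.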

Examples

Below are some examples, 10 per preset, using basic tiles and minimal decoration; more on that in another post.

Porting to Unity IX: Overworld Props

This wraps up the overworld map graphics for now, as it did in the last series. Again, I’m temporarily using HoMM 3 assets for demonstration purposes, but the concepts are generally applicable. The prop texture atlas contains subtextures whose dimensions are multiples of 32 pixels, so I’ll call such a 32×32 unit a prop tile. E.g. some mountain ranges occupy 6×4 prop tiles, while a single tree stump occupies 1×1 prop tile. The parts that have changed since last time are steps 3 & 4: placement and rendering.

Procedural placement

Placement is done as a series of steps:

  • Place high mountains
  • Fill high mountains
  • Place mountains
  • Place vegetation
  • Place misc props

Placed props may overlap if this is supported (see the composition group in the post linked above). All of the steps follow a similar pattern:

  • Generate a step-specific set of candidate tiles to place the props
  • Select a step-specific subset of props to be placed
  • Run the placement process, checking composition groups, neighbour tiles, compatible biomes, etc

When we’ve placed everything, we do:

  • Sort from top-to-bottom of the map so that props in the foreground occlude props in the background (tiles further up/back).
  • Remove props that are completely covered by other props
  • Generate a 2D texture array that stores placement info (for each tile, which corresponding prop tile from the atlas we should be rendering)
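The sort-and-cull step can be sketched like this. It is a simplification that treats occlusion as exact footprint coverage in prop-tile space, which is an assumption for illustration; the texture-array step is omitted.

```python
def postprocess(props):
    """props: list of dicts with 'x','y','w','h' in prop-tile units.
    Sort back-to-front so foreground props occlude background ones,
    then drop props whose footprint is fully covered by props in front."""
    props = sorted(props, key=lambda p: p["y"])  # back (small y) to front
    covered_by_front = set()
    visible = []
    for p in reversed(props):  # walk front-to-back
        cells = {(p["x"] + dx, p["y"] + dy)
                 for dx in range(p["w"]) for dy in range(p["h"])}
        if not cells <= covered_by_front:   # at least one cell still shows
            visible.append(p)
        covered_by_front |= cells
    visible.reverse()                       # back-to-front draw order again
    return visible
```

For example, a 1×1 stump placed exactly under a 2×2 mountain would be culled, while props with disjoint footprints all survive.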

Here are the results of the placement algorithm:

And here’s a super-resolution image of the map

Porting to Unity VII: Sprite rendering

It’s fun to run character simulations and generate results; it’s even more fun when you can actually see the characters. These past two weeks I’ve been writing the sprite rendering system, which doesn’t utilize gameobjects/monobehaviours, or Unity spritesheets/texture atlases, etc; it’s all homemade. So, how is it done?

Instancing

For sprite rendering, there’s no alternative to instancing, plain and simple. We have very simple geometry (a quad), and we want to render N instances of it, each with its own parameters. The main parameters, for sprites, are:

  • Rect location in texture atlas
  • Number of animation frames (stored sequentially in atlas)
  • Current location in the world
  • Previous location in the world (moving sprites only)
  • Time at previous location in the world (moving sprites only)
  • Movement speed (moving sprites only)

From the above one can see that moving sprites need more data; twice as much, as it turns out. We can also reason that the list with the moving sprites needs to be updated far more often, due to the changing positions. Therefore, it makes sense to have two separate entity sprite lists, the static and the dynamic, each of which contains instance data that will be rendered with Unity’s DrawMeshInstancedIndirect function.

Entity sprite list maintenance

  • Entity enters map: add its default/idle animation to the static list
  • Entity exits map: remove animation from static and dynamic lists
  • Entity teleports: remove animation from dynamic list, add its default/idle animation to the static list
  • Entity moves to adjacent location: remove animation from static list, add its moving animation to the dynamic list
  • Every several frames, we go through the dynamic list and check if an animation has finished. If it has, we remove it from the list and add the entity’s default/idle animation to the static list

For the above, when we add an animation, we also write the instance data to a CPU buffer. When we render, the CPU buffer contents are copied to a GPU ComputeBuffer (if anything has changed), which is sampled in the shader.
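The list maintenance above can be condensed into a small sketch. The API and the reduction of animations to plain strings are assumptions; the real system stores the instance data described earlier.

```python
class SpriteLists:
    """Static/dynamic entity sprite lists (sketch of the bookkeeping only)."""
    def __init__(self):
        self.static = {}    # entity id -> idle animation
        self.dynamic = {}   # entity id -> moving animation

    def enter(self, eid):                 # entity enters map
        self.static[eid] = "idle"

    def exit(self, eid):                  # entity exits map
        self.static.pop(eid, None)
        self.dynamic.pop(eid, None)

    def teleport(self, eid):              # no movement animation for teleports
        self.dynamic.pop(eid, None)
        self.static[eid] = "idle"

    def move(self, eid):                  # entity moves to adjacent location
        self.static.pop(eid, None)
        self.dynamic[eid] = "move"

    def tick(self, finished):
        # every several frames: finished moves go back to the static list
        for eid in finished:
            if eid in self.dynamic:
                del self.dynamic[eid]
                self.static[eid] = "idle"
```

The invariant is that each entity lives in exactly one of the two lists at any time, which keeps the per-frame updates confined to the (smaller) dynamic list.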

Here’s an example where, every second, we schedule a movement for 10,000 entities. It runs pretty well, except for the last bullet point above, which I’ve disabled for this video. The video also shows an additional GUI sprite rendering layer with highlighted tiles (yellow squares); the hovered tile is shown via a greenish sprite.

Porting to Unity VI: RPG elements, Time

This week I ported another batch of data, namely RPG-system related stuff such as attributes, skills, skill categories, skill mastery levels and level-up strategies. The data all became scriptable objects (as they’re constant assets), and all except the level-up strategies were enum-ified, as it is quite likely that in the code I’ll be referring to particular skills/attributes/etc (e.g. if skills[dexterity] > 10, do something).

I’ve also added a much-needed global object with all the game configuration parameters (e.g. max character levels, skill points per level, etc). In C++ it was pretty much a dictionary loaded from json to avoid recompiles; here I have it as a scriptable object with fixed fields, so access is faster, it compiles fast and the data are interchangeable.

Another addition is a very simplified form of date/time. I opted out of using C#’s DateTime or TimeSpan, as they are for real-world time and therefore do not allow flexibility. What I’m using instead is a simple long value wrapped in a struct (called TimeUnit), representing microseconds. The struct stores loads of constants and helpers for printing and manipulating, so that in the end a TimeUnit is just an integral type with benefits. 64 bits support quite a range, so it should be enough for the game’s time system resolution.
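A rough Python stand-in for the TimeUnit idea; the real struct is C#, and the constants and helper names here are assumptions. A signed 64-bit microsecond count covers roughly 292,000 years, hence the comfortable range.

```python
class TimeUnit:
    """Game time as a count of microseconds wrapped in one integral value."""
    US_PER_SECOND = 1_000_000
    US_PER_MINUTE = 60 * US_PER_SECOND
    US_PER_HOUR   = 60 * US_PER_MINUTE
    US_PER_DAY    = 24 * US_PER_HOUR

    def __init__(self, us=0):
        self.us = us  # the single wrapped value

    def __add__(self, other):
        return TimeUnit(self.us + other.us)

    @classmethod
    def from_days(cls, days):
        return cls(days * cls.US_PER_DAY)

    def __str__(self):
        # printing helper: break the raw count into days/hours/minutes
        days, rem = divmod(self.us, self.US_PER_DAY)
        hours, rem = divmod(rem, self.US_PER_HOUR)
        minutes, _ = divmod(rem, self.US_PER_MINUTE)
        return f"day {days}, {hours:02d}:{minutes:02d}"
```

Arithmetic stays plain integer math, so the wrapper costs nothing beyond what a raw long would.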

Porting to Unity V: Danger!

Since last time, I’ve tackled (or not) a few issues.

NO gameobjects/monobehaviours in the game database / ECS.

First and most importantly, I’ve made the decision to represent entities with a custom, very lightweight class that does not derive from any Unity stuff. The reason is that I don’t want to interact too much with Unity, and especially not derive from classes that carry a lot of baggage. I’m pretty happy to manage entity lifetimes and bite the bullet in terms of inspector “niceties”, rather than start using GameObjects and then, when these are tangled deep in the game code, realize that it’s actually a terrible idea and far too slow. So that’s that.

Entities are now represented with custom classes, and to avoid the potentially infinite serialization depth that Unity is not happy about, I use weak references. So, instead of:

city entity -> city component -> owned mines (entities) -> mine entity -> membership component -> owner entity -> city entity -> …

… which causes an infinite loop, I use the following:


city entity -> city component -> owned mines (entity references)

so the chain is broken there. The references are just IDs that are looked up in a pool. So that’s that. In the meantime, I’m trying to make the inspector show the entity contents in place of the entity references, but currently I’m not doing a good job. Revisit later.
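The ID-in-a-pool scheme can be sketched as follows; the class and method names are hypothetical, and the real entities are of course richer than dicts.

```python
class EntityPool:
    """Weak-reference scheme: references are plain IDs resolved through
    a pool, so serialization never recurses through object graphs."""
    def __init__(self):
        self._entities = {}
        self._next_id = 1

    def create(self, data):
        eid = self._next_id
        self._next_id += 1
        self._entities[eid] = data
        return eid  # callers store this ID, not the object itself

    def resolve(self, eid):
        return self._entities.get(eid)

pool = EntityPool()
mine_id = pool.create({"kind": "mine"})
city_id = pool.create({"kind": "city", "owned_mines": [mine_id]})
# the cycle is broken: the mine stores the owner's ID, not the city object
pool.resolve(mine_id)["owner"] = city_id
```

A serializer walking the city entity now sees a list of integers where the owned mines used to be, and stops there.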

Danger maps

This is simply the danger map calculation (what level of encounters one might find at a certain point on the overworld map), which involves distance field calculations to cities and routes and some Poisson sampling, both of which I’m calculating in C++ via a native plugin. So, performance remains nice, which is nice. Here it is, alongside the biome/routes map. Red is dangerous, blue is safe.
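The distance-field part can be sketched with a plain grid BFS. The actual implementation lives in the C++ plugin and likely differs; the Poisson sampling is omitted here, and the danger mapping is an invented placeholder.

```python
from collections import deque

def distance_field(w, h, sources, blocked=frozenset()):
    """BFS distance from every cell to the nearest source tile
    (e.g. a city or a route tile); blocked cells are impassable."""
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    q = deque()
    for x, y in sources:
        dist[y][x] = 0
        q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x+1,y),(x-1,y),(x,y+1),(x,y-1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in blocked \
               and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                q.append((nx, ny))
    return dist

# danger grows with distance from safety (cities/routes), clamped for display
d = distance_field(8, 8, sources=[(0, 0)])
danger = [[min(v, 10) for v in row] for row in d]
```

Mapping the clamped distances to a red-to-blue palette gives exactly the kind of “far from cities is dangerous” picture described above.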

Visualizations

Implementing visualizers in Unity is a breeze, and I love being able to hover over something and see data, data, data, whether as pixels or as values in the inspector. I captured a video to show how this looks; I love it 🙂

RPG

Last but not least, I ported the RPG-related datafiles (skills, attributes, skill masteries, etc). So, next work is going to be adventure location generation, and maybe character generation and levelling.

Porting to Unity IV: Pathfinding


Fairly straightforward work this week: porting pathfinding code from C++ to C#. The first two ports were of the basic low-level pathfinding functionality, namely A* on a grid (SearchGrid) and on a graph (SearchGraph). This is the code shown in this post.

The other ports were of the higher-level wrappers that utilize both of the above. There’s a “basic” wrapper and an “overworld” wrapper. The basic wrapper is used as a reference to evaluate other wrappers; it uses a dense grid, so it takes the longest but returns the best results.

The overworld wrapper is the one in this post.

The performance currently, without any sort of profiling, is about 1ms per overworld path, in the Unity Editor where I’m working all the time. Compared to 0.22ms for Release/C++ and 3.37ms for Debug/C++ it’s not bad at all!

Porting to Unity III: Caching 2D Arrays

So far, I’ve ported the biome map, resource map, city basics, faction generation, and a few more things. They all pale in comparison to the effort that I’ve put into a seemingly simple process:

“Ok, we generated a large 2d array of data after a lot of calculations, now how do we save and load it from disk?”

There are a lot of issues and pitfalls that resulted from this question and the associated actions, and they have lasted for almost 2 weeks. So:

How do we represent Array2D?

Question number one. Enchanted by the simplicity of SomeType[,], I started using that. There are a few issues:

  • Serializers have issues with multidimensional arrays; there’s a zoo of serialization options out there, each with their own little problems and idiosyncrasies. Ugh.
  • Indexing is [y,x], like Matlab (and, I presume, other things). I can understand why, but for my computer-graphics brain and muscle memory this is a no-no (I’ve always been using [x,y] and [width,height]) and could be a cause for a billion bugs.
  • It cannot be cast to a 1D array, so operations applicable to 1D arrays are not applicable here. This would mean rewriting code for the 2D case (e.g. max element in array, max value in array, etc).

So, I decided to nip it in the bud early and use my own class, which is a simple wrapper over a 1D array and thus can play along with everything. Hooray. For access it uses [x,y] or [Vector2Int].
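A Python sketch of such a wrapper; the real class is C#, so this just shows the two essentials: flat 1D storage and the [x,y] indexing convention.

```python
class Array2D:
    """2D array as a thin wrapper over flat 1D storage, indexed [x, y]."""
    def __init__(self, width, height, fill=0):
        self.width, self.height = width, height
        self.data = [fill] * (width * height)  # 1D operations apply directly

    def _index(self, x, y):
        assert 0 <= x < self.width and 0 <= y < self.height
        return y * self.width + x              # row-major flattening

    def __getitem__(self, xy):
        return self.data[self._index(*xy)]

    def __setitem__(self, xy, value):
        self.data[self._index(*xy)] = value

a = Array2D(4, 3)
a[2, 1] = 7              # [x, y], not [y, x]
max_value = max(a.data)  # any 1D algorithm works on the backing store
```

Because the backing store is a plain 1D array, serializers, max/min, fills, copies, etc. all work without any 2D-specific rewrites.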

Array2D for structs and classes

Yet another potential problem comes up with structs vs classes. Everybody keeps hammering “use structs for immutable types”, and for a good reason. With my simple Array2D wrapper, it’s impossible to do the following with a struct Foo:

var foo = new Array2D<Foo>(2,2); foo[0,1].someField = 32; // Forget about it.

What we’ve done here with foo[0,1] is return a copy of the element and set .someField on that copy, which will be immediately destroyed. We can’t set just someField; we need to replace the entire struct, because we can’t return references to structs in Unity’s C#. So, the best alternative is to disable this altogether, otherwise hello insidious bugs. We can instead add an extension method for the array (e.g. at(int x, int y)) that restricts the type to be a class and returns a reference.

How do we serialize Array2D?

That’s easy, right? Use Unity facilities, add a few [Serializable] attributes and we’re good to go. Well, not so easy. What a waste. Using the standard serialization facilities, we’re serializing every object individually, and that contains a crapload of redundancies. So, a simple 512×512 map of uint32 takes 12 megabytes instead of 1. Lovely. The “Biome” map class that uses enums (it used to be bitfields) now takes 60 megabytes. No thanks. I went on a journey to find a nice way to do cheap serialization for at least these use-cases, without using some XYZ serialization library that will be broken with the next Unity update, will go unsupported, or has other weird limitations. Therefore I wrote some code to serialize classes into a stream of bytes, which was a pain on its own. Error-prone? Yes, but at least testable. Fast? Oh yes. Space-efficient? Oh yes. Using the binary serializer, I went from 60MB down to 1MB, and the serialize/deserialize performance improved by 30x. So, back to reasonable-land again. The only “issue” is that I have to implement an interface for every new type that can be used in an Array2D that’s going to be cached, so no big deal really.
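The size arithmetic checks out: a 512×512 map of uint32 is 512 × 512 × 4 bytes, exactly 1 MiB raw. A minimal stand-in for the flat binary approach, using Python’s struct module; the real serializer is custom C# code, and the two-u32 header is an assumption.

```python
import struct

def save_u32_grid(width, height, values):
    """Flat binary layout: two u32s of header, then the raw cells."""
    return struct.pack(f"<II{width * height}I", width, height, *values)

def load_u32_grid(blob):
    width, height = struct.unpack_from("<II", blob, 0)
    values = list(struct.unpack_from(f"<{width * height}I", blob, 8))
    return width, height, values

cells = list(range(512 * 512))
blob = save_u32_grid(512, 512, cells)
# 512*512 u32 cells -> 1 MiB of payload, plus an 8-byte header
```

No per-object metadata, no redundancies: the cost is exactly the payload, which is where the 60MB-to-1MB drop comes from.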

Random: System or Unity

This is the last thorn. I started using Unity’s Random, as it’s usable, but there’s only a single instance, so it’s not future-proof; I’d like to have a bit more control (maybe multiple RNGs, an alternate algorithm, etc). So, I started extending System.Random with convenience functions. So far so good, but I ended up in a several-hour bughunt where I’d create a copy of the RNG, replace the original during a caching operation, and then use the (by now unique) copy, leading to different behaviour with seemingly the same parameters. Takeaway: don’t make copies of the RNG, get references to it.
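This class of bug is easy to reproduce in any language; here is a Python illustration of why copies of an RNG are dangerous (the original bug was in C# with System.Random, so this is an analogy, not the actual code):

```python
import copy
import random

rng = random.Random(1234)

# BAD: copying forks the RNG state; if something advances the original
# (e.g. a caching operation) while you keep using the stale copy,
# the two streams silently diverge despite "the same" seed and calls.
stale_copy = copy.deepcopy(rng)
rng.random()  # the original advances elsewhere
diverged = stale_copy.getstate() != rng.getstate()

# GOOD: hand out references to the single RNG instead of copies
def roll_dice(r: random.Random) -> int:
    return r.randint(1, 6)
```

With a single shared instance, every consumer advances the same stream, so behaviour stays reproducible from the seed alone.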

Next TODO

Next in the port party are the pathfinding, movement and territory systems, plus some visualizations to make sure everything is working as expected.

Porting to Unity II : ECS basic form

Immediately after the biome and resource maps generation, next is the city placement. We’re getting near the first “interaction” with the entity system, as at the end of this step, we should have city entities. Therefore, it becomes apparent that there needs to be a basic ECS machinery in place.

ECS using ScriptableObject

Doubt No 1: Custom machinery for efficient component access.

In the C++ version of the entity system, each component type had their own storage vector, and the entity system did the following to access a component:

  • Use the entity ID to look up its component-index data, use the required component ID to check whether the entity has a valid index for that component, and if it does, use the retrieved index to look up the component in its storage

This involves a few lookups/jumps, so I wouldn’t call it too optimized or cache-friendly.
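The lookup chain can be sketched like this; the data below is illustrative, and the real C++ version uses dense per-type vectors rather than Python dicts and tuples.

```python
# per-component-type storage vectors
component_storage = {
    "Location": [(3, 4), (7, 1)],
    "CharSheet": [{"str": 10}],
}
# per-entity table: entity id -> component id -> index into storage
entity_component_index = {
    101: {"Location": 0, "CharSheet": 0},
    102: {"Location": 1},
}

def get_component(entity_id, component_id):
    indices = entity_component_index.get(entity_id)   # lookup 1: entity
    if indices is None:
        return None
    idx = indices.get(component_id)                   # lookup 2: valid index?
    if idx is None:
        return None
    return component_storage[component_id][idx]       # lookup 3: storage
```

Three dependent lookups, each a potential cache miss, which is exactly the indirection being doubted here.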

Doubt No 2:  Entities adjust their components at runtime.

Typical ECS allows for runtime composition of components, using the aforementioned machinery. Adding a component is as simple as allocating it and storing the index in the “component index” entry of the entity. The question is, when do we need that? The answer is, it depends on how granular the components are, and how dynamic some of these component types can be. From my theoretical design so far, adding components dynamically is a solution to a future problem, while it creates other problems now. Problems such as opportunities for nonsensical component combinations (a city having a character sheet, an adventure location having a movement component, etc).

Components as ScriptableObjects

Reading about Unity, one reads a lot about “the tyranny of MonoBehaviour” and how heavyweight MonoBehaviours are. At the same time, it looks like the consensus is “use the fancy new scriptable objects”. Therefore, wanting to go the suggested route, and because components should just store data, ScriptableObjects become a quite reasonable solution (so far) to that issue.

Entities as ScriptableObjects, interfaces and components

C# is an interface-happy language, so I decided to go that route. Here’s the current line of thought:

  • The base Entity class now is a ScriptableObject
  • The entity-has-such-and-such-components composition is implemented via interfaces
  • There are subclasses like CityEntity, DungeonEntity, CreatureEntity, etc. These subclasses implement the interfaces they need

The Entity types and interfaces code is generated from a json file and looks like:


namespace cmp // namespace for components
{
    public class Location : ScriptableObject
    {
        public Vector2Int tile;
        public int mapId;
    }
    // more components ...
}

// interfaces for accessing components
interface ICity { cmp.City City(); }
interface IAdvLoc { cmp.AdvLoc AdvLoc(); }
interface ICharSheet { cmp.CharSheet CharSheet(); }
interface IController { cmp.Controller Controller(); }
interface IMembership { cmp.Membership Membership(); }
interface ILocation { cmp.Location Location(); }
interface IMovement { cmp.Movement Movement(); }

public class Entity : ScriptableObject {}

public class CityEntity : Entity, ICity, ILocation
{
    private cmp.City city;
    public cmp.City City() { return city; }
    private cmp.Location location;
    public cmp.Location Location() { return location; }
}

Now we can check whether an entity called “someEntity” has a location component simply via “someEntity is ILocation”. The above code was autogenerated from Python using a small dictionary:

entity_data = {
    "Creature" : ["Movement","Location", "CharSheet","Membership","Controller"],
    "City" : ["City","Location"],
    "AdvLoc" : ["AdvLoc","Location"],
    "Level" : ["Location"]
}
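A minimal sketch of what such a generator loop might look like; this is not the actual script, just one way to emit the interface and class shapes shown above from the dictionary.

```python
entity_data = {
    "Creature": ["Movement", "Location", "CharSheet", "Membership", "Controller"],
    "City": ["City", "Location"],
}

def generate(entity_data):
    lines = []
    # one accessor interface per component type
    components = sorted({c for comps in entity_data.values() for c in comps})
    for c in components:
        lines.append(f"interface I{c} {{ cmp.{c} {c}(); }}")
    # one entity subclass per entry, implementing its interfaces
    for name, comps in entity_data.items():
        bases = ", ".join(["Entity"] + [f"I{c}" for c in comps])
        lines.append(f"public class {name}Entity : {bases}")
        lines.append("{")
        for c in comps:
            field = c[0].lower() + c[1:]
            lines.append(f"    private cmp.{c} {field};")
            lines.append(f"    public cmp.{c} {c}() {{ return {field}; }}")
        lines.append("}")
    return "\n".join(lines)

code = generate(entity_data)
```

Adding a new entity type or component is then a one-line change to the dictionary, with all the C# boilerplate regenerated.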

To wrap up the ECS basics we still need to define the systems and where they live, as well as message handling, but that’s for next time. Also, in the future I’ll possibly need to sort out access so that it’s generally read-only, but that might be tricky.

Porting to Unity

Over the holidays, I’ve been thinking about the party system and its complications, and tried to move on with more code. I caught myself: before doing any fun stuff, I still have to write some boilerplate code for class reflection (printing out some structs, reading from json, serializing for save files, etc). This is very cumbersome and it’s not automated, unless I only write class definitions using code generators. I’ve actually written 3 code generators so far: for message definitions, action commands and components. So, the lack of reflection in C++ ends up hurting quite a bit, and I like data-driven development, which is painful without reflection.

So, given that I recently started with Unity again for work (I’d had some brief encounters before), I thought maybe Unity might solve the unfun issues; of course, I’m not fooling myself, it will add other issues. So, I made a list of my woes with the current development “environment”:

  • Lack of reflection: reason for implementing (over years now) resource loading, serialization, streaming operators etc etc for every single class that I use. Using:
    • JsonCPP for loading configuration from json
    • Cereal for save/load functionality
    • Streaming operators for string formatting
  • Profiling: the Visual Studio profiler takes a while; I’d have to implement engine hooks myself for something faster and more bespoke. Graphics debugging is not great; apitrace is a nice tool, but still.
  • OpenGL is tricky to wrap. Very tricky. I foolishly didn’t use a wrapper, so minus points for foresight.
  • Cross-platform and web is tricky. And emscripten, as magic as it is, is not exactly trivial to integrate.
  • Compile speed (in my case) is not great. C++ was never supposed to be fast to compile, but I think with all the boilerplate and the suboptimal modularization of the code, it has become really slow.

Unity is not supposed to be a panacea, but I think it’s going to offer a fresh perspective on how to build a game, same idea but different language, different approach for coding and data management, etc.

C++ with Unity, and a bit of woe

Since I quite like C++, I thought I’d try to use it within Unity, in the form of native plugins. I ran a few successful experiments, so the question becomes: where to draw the line?

  • Nothing in C++: port everything to C#
  • A few things in C++: esp. algorithms with simple inputs/outputs that run complex/expensive calculations.
  • Most things in C++: This is an approach encountered here.

“Most things in C++” became a no-go quickly, when I downloaded the latest GitHub repo of that approach, ran the code and it crashed. Since this is an experimental project and I don’t know much about C#, I think this would do more harm than good.

“Few things in C++” is the current optimistic scenario. If I encounter some code that fits the criteria, I’ll write the interop to C++. To test it, I ported part of the biome map generation code to utilize native plugins: C# runs some GPU passes, then somewhere in the middle we pass some pointers and config info to C++ that runs a CPU pass, returns the results and C# happily continues.

“Nothing in C++” is a real possibility. I’ve spent so-much-time dealing with resource systems, serialization, GUI bindings for all types, etc, and all of these come for free with Unity’s inspector, C#’s reflection and Unity’s asset system. So, I need something refreshing at this point, and that’s game-related algorithms and content, rather than engine architecture.

The negative bit: I love bitfields, and neither C# nor Unity makes my life easy with them. So all my careful bit-packing goes to hell. C# doesn’t support bitfields, and Unity generates junk shader code for WebGL when using bitwise operators. This is a major pain, as many of my shaders use bitwise operators, and I can’t afford to go without them. So, problem pending.

Closing with a port of the biome generator, using a native plugin for a few passes, I must confess that besides the learning curve and some of Unity’s idiosyncrasies, it’s been generally a breeze.