Cities and civilizations need resources to survive and thrive. Our case is no different, so before placing any cities, we need to generate the resources that the world uses. I’ve decided to split the resources into three groups: food, basic materials and rare materials. The first two are found in varying quantities pretty much everywhere, and civilizations can use them immediately. Rare materials, on the other hand, are not easily found: they need to be discovered first, and they also need to be mined. On the plus side, there is plenty of incentive to explore, discover and mine such materials (wealth, influence, advanced structures and items, etc).
From a macro point of view, each tile of the overworld has the following resource information:
Food: Used to feed the population. Obtained from sources such as vegetation and wildlife. Value in [0,255]
Basic materials: Used for buildings and construction of everyday items. Obtained from environment. Encompasses materials such as stone/leather/wood/iron. Value in [0,255]
Rare materials: Special, difficult-to-find/mine materials, used for magic and/or construction of advanced buildings/items. Examples include silver, gold, mithril, crystal, gems, etc. A value in {0,1} per rare material type.
So, each tile resource can be represented with a 32-bit value: 8 bits for food, 8 bits for basic materials and 16 bits for rare materials (for a maximum of 16 rare materials). Several rare materials can co-exist at the same tile.
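The 32-bit layout above can be sketched with a few packing helpers. This is an illustrative layout only; the post gives the bit counts but not the field order, so the order below is my assumption.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical packing for the 32-bit tile resource value described above:
// 8 bits food, 8 bits basic materials, 16 one-bit rare material flags.
// Field order (food in the low byte) is an assumption.
inline uint32_t packTileResources(uint8_t food, uint8_t basic, uint16_t rareFlags)
{
    return uint32_t(food) | (uint32_t(basic) << 8) | (uint32_t(rareFlags) << 16);
}

inline uint8_t foodOf(uint32_t v)  { return v & 0xFF; }
inline uint8_t basicOf(uint32_t v) { return (v >> 8) & 0xFF; }
inline bool    hasRare(uint32_t v, int type) { return (v >> (16 + type)) & 1; }
```

Several rare material bits can be set at once, matching the co-existence rule above.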
Rare material traits
With regard to generating a map that contains them, a rare material has the following main traits:
BiomeSuitableRange: This is a collection of ranges per biome parameter, e.g. temperature, humidity, elevation, water type, etc. So, for example, some materials can be spawned only in high altitude areas of extreme cold, etc.
Chance: This is the chance of a rare material spawning at a suitable tile. So, the effective spawning chance is chance multiplied by the probability of a tile being suitable.
Tile resources map generation
In order to generate this “tile resources” map, we need to have calculated the biome map first.
The first step in the process is to calculate all the candidate tiles per rare resource type. At this stage, we also calculate the food and materials per tile, as a function of the biome. I’m currently using an extremely naive mapping of wildlife density to food and vegetation to materials, but that should change later on.
We then shuffle the candidate list and pick the first N points, where N = max(chance * totalCandidateNum, min(1, totalCandidateNum)). So, if we have no candidates, we won’t generate any points; if we have at least one candidate, we’ll generate at least one point. And that’s it, really! Pretty simple, but it does the job. Here’s an example of a rare material’s distribution; there are only tens of them in the whole map, so it could be quite a coveted material to be able to mine and get access to.
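The shuffle-and-pick step above can be sketched as follows (function and variable names are made up; the candidate tiles are reduced to plain indices):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of the candidate selection: shuffle the suitable tiles, then keep
// the first N, where N guarantees at least one spawn whenever at least one
// candidate exists (N = max(chance * total, min(1, total))).
std::vector<int> pickSpawnTiles(std::vector<int> candidates, float chance,
                                std::mt19937& rng)
{
    std::shuffle(candidates.begin(), candidates.end(), rng);
    const size_t total = candidates.size();
    const size_t n = std::max<size_t>(size_t(chance * total),
                                      std::min<size_t>(1, total));
    candidates.resize(std::min(n, total));
    return candidates;
}
```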
My goal is to generate an overworld map, where each tile would cover an area of about a hundred square km (on normal terrain, a regular unit would need a day to cross a regular tile). The overworld needs to contain islands, continents and biomes. The output of this process is a 2D “image”, with data per pixel (32 bits, like an RGBA PNG file) that completely describes what the environment of a tile is like. I’m going for plausible rather than realistic: I want to be able to create maps that are fun to play. Below, I’m going to go through the various steps of the process that I use. All but the landmass labeling and river generation passes run on the GPU, as the calculations are typically parallel. The whole process takes about 60 milliseconds for 512×512 maps, so we can tinker with all sorts of parameters and see the results in real-time.
The first step is the creation of the seed continents. These are not necessarily the final continents, but they help construct the base for the big landmasses. The continents start off as a small set of scaled and rotated ellipses. Everything about these ellipses is randomized: number, scale, rotation, eccentricity.
The next step is to distort the boundary of the ellipse using perlin noise. Effectively, we’re warping the point we’re on before testing whether it’s inside or outside one of the ellipses. There are two parameters for this: warp frequency (how much can the warp differ between 2 adjacent pixels) and warp magnitude (how far the warped point can get from the original). Some examples of increasing frequency:
For the rest of the post, let’s choose the second-to-last one. At the end of this stage, we have a map that stores whether we’re inside or outside big continent-like landmasses.
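The warped inside/outside test can be sketched like this, with a pluggable noise function standing in for the perlin lookup (all names here are made up; `freq` and `mag` are the two warp parameters from the text):

```cpp
#include <cassert>
#include <cmath>

struct Ellipse { float cx, cy, rx, ry, rot; };

// Warp the sample point with noise, then test it against the ellipse.
// The second noise channel is faked with a constant offset; any pair of
// independent noise lookups would do.
bool insideWarpedEllipse(float x, float y, const Ellipse& e,
                         float freq, float mag,
                         float (*noise)(float, float))
{
    float wx = x + mag * noise(x * freq, y * freq);
    float wy = y + mag * noise(x * freq + 31.7f, y * freq + 17.3f);

    // Transform the warped point into the ellipse's local frame.
    float dx = wx - e.cx, dy = wy - e.cy;
    float c = std::cos(-e.rot), s = std::sin(-e.rot);
    float lx = dx * c - dy * s, ly = dx * s + dy * c;
    return (lx * lx) / (e.rx * e.rx) + (ly * ly) / (e.ry * e.ry) <= 1.0f;
}
```

A pixel is land if it's inside any of the warped ellipses.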
Continent mask distance field
This step calculates a limited distance field around the coastline of the continents: this will be useful for the actual heightmap generation. We calculate distances from the coastline (d = 0) up to a number of pixels away from it (e.g. d = 32), and we map this clamped distance range to [0,1].
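The clamp-and-normalize mapping is a one-liner (the 32-pixel range is the example value from the text):

```cpp
#include <algorithm>
#include <cassert>

// Distances from the coastline are clamped to a maximum range
// (e.g. 32 pixels) and normalized to [0,1].
float normalizeDistance(float d, float maxRange = 32.0f)
{
    return std::clamp(d, 0.0f, maxRange) / maxRange;
}
```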
This step calculates an 8-bit heightmap with values in [-1,1], positive numbers representing land. We don’t care about it looking too realistic, as the heightmap will only be used implicitly, as input to other parts of the generator.
This step creates the final landmasses. We’re just using the heightmap to generate this, comparing the height values against 0.
Landmass distance field
This step does the exact same process as the continent mask distance field, but on the landmass mask.
This step does a floodfill over the heightmap, detects landmasses, classifies them in terms of size (rocks, islets, islands and continents) given user-defined area thresholds. There can be a maximum of 63 continents given the current bit budget, but of course that’s flexible. The continents are also uniquely labeled at this step (this means that all the tiles that belong in continent 2, store the value 2 somewhere — see below, Biome data section). Additionally, bodies of water that are completely enclosed by landmasses are marked as part of the landmass, so that they can correctly be identified as lakes later on.
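The labeling part of this pass can be sketched as a plain CPU floodfill (minimal version, names made up; size classification would then just compare each label's tile count against the thresholds):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Floodfill over a land mask: assigns a unique positive label to each
// 4-connected landmass; water stays 0.
std::vector<int> labelLandmasses(const std::vector<bool>& isLand, int w, int h)
{
    std::vector<int> label(w * h, 0);
    int next = 1;
    for (int start = 0; start < w * h; ++start)
    {
        if (!isLand[start] || label[start] != 0)
            continue;
        label[start] = next;
        std::queue<int> q;
        q.push(start);
        while (!q.empty())
        {
            int i = q.front(); q.pop();
            int x = i % w, y = i / w;
            const int  nb[4] = { i - 1, i + 1, i - w, i + w };
            const bool ok[4] = { x > 0, x < w - 1, y > 0, y < h - 1 };
            for (int k = 0; k < 4; ++k)
                if (ok[k] && isLand[nb[k]] && label[nb[k]] == 0)
                {
                    label[nb[k]] = next;
                    q.push(nb[k]);
                }
        }
        ++next;
    }
    return label;
}
```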
This step generates the rivers in the overworld. Effectively, given some parameters such as the minimum river proximity to each other and the river min/max length, we generate rivers. This is done by sampling random points on the map and testing if they are appropriate starting locations (e.g. on or by a mountain). If a point satisfies the conditions, we attempt to generate a path, with branching; the path follows the heights downward until it reaches a lake or the sea, reaches the maximum length, or can’t go any further for whatever reason. Below are two examples with different density:
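The core downhill walk could look like this (branching and start-point selection omitted; all names are made up):

```cpp
#include <cassert>
#include <vector>

// Trace a river from a starting tile: repeatedly step to the lowest
// 4-neighbour, stopping at water (height <= 0), at a local minimum,
// or when the maximum length is reached.
std::vector<int> traceRiver(const std::vector<float>& height, int w, int h,
                            int start, int maxLen)
{
    std::vector<int> path{ start };
    int cur = start;
    while ((int)path.size() < maxLen && height[cur] > 0.0f)
    {
        int x = cur % w, y = cur / w, best = -1;
        float bestH = height[cur];
        const int  nb[4] = { cur - 1, cur + 1, cur - w, cur + w };
        const bool ok[4] = { x > 0, x < w - 1, y > 0, y < h - 1 };
        for (int k = 0; k < 4; ++k)
            if (ok[k] && height[nb[k]] < bestH)
            {
                bestH = height[nb[k]];
                best  = nb[k];
            }
        if (best < 0)      // local minimum on land: can't go further
            break;
        path.push_back(best);
        cur = best;
    }
    return path;
}
```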
This step generates the humidity for each tile. It takes into account outline, heights and freshwater. The basic map is calculated with perlin noise, but it is also adjusted based on whether a tile is water or land: areas in and near water are more humid. It is also affected by the freshwater mask, which gets heavily blurred and added as extra humidity; this guarantees that there are almost never rivers in the desert, or swamps without a body of water nearby.
This step generates the temperature for each tile. It takes into account outline, heights and freshwater as well. The basic map is calculated with perlin noise, but it is also adjusted based on whether a tile is water or land: when on land, we sample from a heavily blurred heightmap and reduce the temperature based on that regional average height. This reduces temperatures in regions where there are a lot of high mountains. Additionally, the regional existence of water reduces temperatures a bit.
At this point, we’re almost done! This step samples all maps and packs them into a 32-bit output map. These 32 bits encode the biome detail in a coarse way.
Here’s the breakdown:
Temperature: 3 bits
Humidity: 3 bits
Elevation: 2 bits // Height or depth, dep. on water type
Water type: 2 bits // none, river, lake, sea
IsCoast: 1 bit
Vegetation density: 3 bits
Wildlife density: 3 bits
Continent ID: 6 bits
Landmass size: 2 bits
Biome type: 4 bits // one of the 16 predefined biomes
Padding: 3 bits
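The breakdown above (3+3+2+2+1+3+3+6+2+4+3 = 32 bits) can be packed like so. The field order is my assumption; the post only specifies the bit counts.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative packing of the 32-bit biome descriptor; fields are packed
// from the low bits upward, in the order listed in the breakdown.
uint32_t packBiome(uint32_t temperature, uint32_t humidity, uint32_t elevation,
                   uint32_t waterType, uint32_t isCoast, uint32_t vegetation,
                   uint32_t wildlife, uint32_t continentId,
                   uint32_t landmassSize, uint32_t biomeType)
{
    uint32_t v = 0, shift = 0;
    auto put = [&](uint32_t value, uint32_t bits) {
        v |= (value & ((1u << bits) - 1)) << shift;
        shift += bits;
    };
    put(temperature, 3); put(humidity, 3);     put(elevation, 2);
    put(waterType, 2);   put(isCoast, 1);      put(vegetation, 3);
    put(wildlife, 3);    put(continentId, 6);  put(landmassSize, 2);
    put(biomeType, 4);   // 3 bits of padding remain
    return v;
}
```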
For many of the above (temperature, humidity, elevation), we quantize the (typically 8-bit) data that we already have down to the bit budget above. The biome type is one of 16 predefined biomes and is calculated from the rest of the values (temperature, humidity, etc). The remaining fields are computed as follows:
WaterType: Calculated based on whether it’s a river tile, the landmass ID and the height.
IsCoast: Calculated based on whether we’re on land; we sample all neighbours for any sea tile.
Vegetation density: More perlin noise, adjusted by humidity, temperature and height
Wildlife density: More perlin noise, adjusted by humidity, temperature, height, freshwater and vegetation
Here’s a visualization of the vegetation density:
… and the wildlife density:
Depending on the biome type we can distribute flora, fauna, resources, civilisations, etc.
Here’s a video of the GUI tool in action!
Other examples here:
The format might get adjusted in the future, in order to use those padding bits to encode some extra information, for example freshwater direction in river tiles (2 bits). There is also a dynamic magic map which specifies, in areas of high magic, special happenings such as rifts, summons, portals, etc. Additionally, there’s tile resource generation which will be covered next time.
Last time I gave a brief description of how messaging (and my dirt-simple implementation of it) can help with decoupling. But of course that was just scratching the surface. So, in this post, a bit more information on how the whole system is put together.
The messages now can also store an explicit message handler. In terms of the example I used last time, the new message would be as follows:
struct cEntityCreated : public cMessage
{
    explicit cEntityCreated( const cEntity& zEntity, const cMessageHandler* handler = nullptr );

    const cEntity& mEntity;
    static const int msType = 2;
};
So, a slight change allows cases where we’d like to target a message at a particular handler. This is useful when we want to directly affect something from another part of the code without introducing coupling, but also without introducing abstraction layers. Example:
My test rendering app needs to modify a renderable directly, by setting a bunch of tiles. One option is to introduce a new message, TilesChangedInRenderable(tiles, renderable), but then we’d have both a TilesChanged(tiles) message AND a TilesChangedInRenderable(tiles, renderable) message. To avoid repeating that pattern for classes other than Renderables, and since the Renderable is a MessageHandler anyway, I decided to make the above adjustment: we can always optionally provide an explicit handler. If one is provided, the message is only handled by message propagators (e.g. a System) and the handler in question; otherwise it is handled by everybody who is registered to listen to that type of message.
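The dispatch rule can be boiled down to a single predicate. This is a simplified sketch with stand-in types, not the actual classes from the post:

```cpp
#include <cassert>

// Stand-ins for cMessageHandler and cMessage; only the fields needed for
// the dispatch decision are kept.
struct Handler { bool isPropagator = false; };
struct Message { const Handler* target = nullptr; };

// If the message carries an explicit handler, only propagators (e.g. a
// System) and that handler see it; otherwise every registered listener does.
bool shouldHandle(const Message& msg, const Handler& h)
{
    if (msg.target == nullptr)
        return true;
    return h.isPropagator || &h == msg.target;
}
```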
Disclaimer: Rendering is always in flux – I’m trying to get something generic, extensible and easily editable working together, and it’s no easy feat.
Summary of rendering so far:
The currently running application state renders its root widget
Each widget contains body and margin renderables (2 different)
Each widget can contain a modal widget, or if it’s a container widget, other widgets
Some widgets add more renderables: e.g. textbox also has a text renderable
Renderables are pretty much rendering configurations, and store a reference to a renderer and to their widget owner
Renderers use shaders and contain rendering logic
A renderer renders a single renderable type, a renderable can be rendered by several renderer types
Before, the configuration was done via explicit parameters in an inheritance chain. While it’s explicit, it’s a PAIN to add parameters, as it’s all compile-time. So I ditched that approach in favor of something far more generic. Now every renderable stores, among other things:
A list of references to textures
A list of dynamic textures, along with a message updater for each
A list of texture buffers, along with a message updater for each
A reference to a blending configuration
A list of shader variables, organized as:
a vector of (name, value) pairs, for every common shader type (int, float, int2, float4, etc)
a vector of (name, texture_buffer_index)
a vector of (name, texture_index)
a vector of (name, dynamic_texture_index)
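The storage listed above could be sketched like this. All names are hypothetical, and textures, buffers and updaters are reduced to plain indices for brevity:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// One vector of (name, value) pairs per common shader type; only two value
// types are shown here. Indices refer into the renderable's own lists.
struct ShaderVariables
{
    std::vector<std::pair<std::string, float>> floats;
    std::vector<std::pair<std::string, int>>   ints;
    std::vector<std::pair<std::string, int>>   textureBufferIndices;
    std::vector<std::pair<std::string, int>>   textureIndices;
    std::vector<std::pair<std::string, int>>   dynamicTextureIndices;
};

struct Renderable
{
    std::vector<int> textures;         // references to textures
    std::vector<int> dynamicTextures;  // each paired with a message updater
    std::vector<int> textureBuffers;   // each paired with a message updater
    int              blendConfig = 0;  // reference to a blending configuration
    ShaderVariables  variables;
};
```

Because everything is name/value data rather than compile-time parameters, adding a new shader variable no longer touches the inheritance chain.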
So far, this is looking flexible and I like it. Of course it’s far from optimal, but it is optimal for prototyping, and that’s what matters now. For performance, variables could be organized in uniform buffer objects of varying frequency of updates, etc, but that’s far down the line.
Above there’s a screen from the modification of the A* visualizer to operate on graphs — just minimal changes needed from existing infrastructure:
There is a new renderer instance of the type GridSparseSelectionRenderer — it’s used for rendering lines.
There are a few renderables: for the node points, for the start point, for the goal points (of course horribly inefficient, I might as well draw all points at once and assign per-instance colors, but that’s not the point here), for the edges and for the edges that are part of the output path.