Autotiling Adventures Part IV: Mountains, trees and props

Previously, I used HoMM 3 assets to extract biome detail textures.

This time, the next logical step is to add more foreground detail: trees, rocks, mountains and other props. I’ll keep using HoMM 3 assets, as they work pretty well and are relatively close to how I’d like things to look. Of course I’ll eventually have to make my own, as these are not mine, but that’s a problem for much, much later.

The pipeline for adding such foreground detail can be summarized as follows: first, identify the assets of interest. Then, pack them into a texture atlas and create an associated data file with per-prop information. Then, write the logic that places props on a map based on biome information. Finally, use the prop placement data to render the props onscreen.

Step 0: Tools

In order to process the assets, we need to understand them first. Kudos to the tools below, as without them I wouldn’t have done much: a HoMM 3 map editor, MMArchive, and a set of Python extraction utilities.

Using these, we can observe the assets and look at their properties (map editor), get the lists of all assets and asset types for the maps (MMArchive), and extract all the images (Python utilities).

Step 1: Identify assets of interest

This was a tedious process: going through each asset and deciding whether it’s suitable as foreground detail. The source data were the following: 1) a text file with asset properties, such as suitable environments, asset type, image names, etc., 2) a text file with all asset types, and 3) a large set of images.

I’ve never worked with a tile-based engine before, but I observed several things that look like common sense:

  • A good overlapping look is achieved by rendering props on the map top-to-bottom (makes sense, as in a top-down view elements near the top are further back) and right-to-left (no idea why)
  • Each asset is logically divided into 32×32-pixel regions, which I call subtiles. HoMM uses subtiles as single grid cells (e.g. a unit occupies a single subtile), and the game data store movement-blocking and entrance masks per subtile. The maximum subtile count is 8×6, which means a 256×192 image.
  • Some multi-subtile assets can safely overlap, typically mountains and trees

Step 2: Generate texture atlas and data file

Having a list of assets of interest, we can now pack the images into an atlas and save per-element information (sketched as a struct after the list). The information stored is:

  • General category, such as “Landscape features”, “Vegetation”, “Props”. Used for filtering assets.
  • Specific category, such as “Craters”, “Mountains”, “Trees”. Used for filtering assets.
  • Subcategory, such as “Oak trees”, “Rock”. Used for filtering assets.
  • Element ID, such as “avlmtdr2” (unique names used in the game data). Used for unique asset addressing.
  • Composition group, such as “avlmtdr” (the unique name without the number suffix, which indicates a group). Used to determine safe overlap of assets.
  • Subtile num, such as [3, 4]. Used to determine the region that the asset covers
  • Subtile occupancy mask, 8×8 bits. Used to determine the per-subtile “logical” coverage
  • Subtile render mask, 8×8 bits. Used to determine if we need to render a subtile or not
  • Biome mask, 16 bits. Used to determine the biomes that the occupancy-marked tiles can be placed on
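To make this concrete, here’s a minimal C++ sketch of such a per-element record (field names are mine, not the actual schema of the JSON file mentioned below):

#include <array>
#include <cstdint>
#include <string>

// One atlas element, mirroring the list above (hypothetical names).
struct PropElement {
    std::string generalCategory;   // e.g. "Vegetation"
    std::string specificCategory;  // e.g. "Trees"
    std::string subCategory;       // e.g. "Oak trees"
    std::string elementId;         // e.g. "avlmtdr2" -- unique asset address
    std::string compositionGroup;  // e.g. "avlmtdr" -- overlap-safe group
    std::array<int, 2> subtileNum; // e.g. {3, 4} -- covered region, in subtiles
    uint64_t occupancyMask;        // 8x8 bits: subtiles that act as blockers
    uint64_t renderMask;           // 8x8 bits: subtiles with visible pixels
    uint16_t biomeMask;            // 16 bits: biomes the asset may be placed on
};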

The difference between the occupancy and the render mask is as follows:

  • The render mask sets bits of tiles that contain at least 1 pixel with a non-zero alpha value
  • The occupancy mask sets bits of tiles that act as blockers in the map.

Here’s an example:

The red subtiles here mark the occupancy, while all other marked tiles (plus two unmarked with a bit of shadow) mark the renderable parts.

Below is a packed atlas using all assets of interest.

Some assets are animations, in which case we make sure their frames are on the same line. I used this packing code. Atlas rectangles are named, using the Element ID as described above. One more interesting detail about the atlas is that the elements are packed at multiples of 32 pixels, which means I can have 6 mipmap levels and still not get any asset bleeding. I also generate a JSON file with the properties of each element.

Step 3: Place props based on biome data

Admittedly, I didn’t put a supreme amount of effort into placement. Little effort yielded reasonable results, so that’s OK for now. The placement is really simple and comprises four stages: placing mountains, placing trees, placing props, and cleaning up.

One important bit: below, I use the term “map subtile”. I split the overworld map tiles into 2×2 smaller tiles: these (map) subtiles correspond to the size of the assets’ subtiles. An asset using 6×3 subtiles (192×96 pixels) will be mapped to 3×1.5 overworld tiles.

Placing mountains

First, we go through each subtile on the map with high elevation and try to use it as a starting point for placing any of the assets marked as “Mountains”. The placement condition is that every subtile with an occupancy bit set must be biome-compatible, based on the tile it lands on and the asset’s biome mask. We also prohibit occupied subtiles from landing on river tiles, as the composition becomes quite difficult. Overlaps are allowed, as long as the overlapping assets are in the same composition group.
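Roughly, the placement test looks like this (a sketch building on the PropElement record above; Subtile is a stand-in for the real map interface, and bounds checks are omitted):

#include <string>
#include <vector>

struct Subtile { int biome; bool isRiver; bool occupied; std::string group; };

bool CanPlace(const PropElement& prop, const std::vector<Subtile>& map,
              int mapW, int sx, int sy)
{
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
        {
            if (!(prop.occupancyMask & (1ull << (y * 8 + x))))
                continue; // only occupancy-marked subtiles constrain placement
            const Subtile& s = map[(sy + y) * mapW + (sx + x)];
            if (s.isRiver)
                return false; // rivers are prohibited
            if (!(prop.biomeMask & (1u << s.biome)))
                return false; // biome compatibility via the asset's biome mask
            if (s.occupied && s.group != prop.compositionGroup)
                return false; // overlaps allowed only within a composition group
        }
    return true;
}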

Placing trees

Next, we go through each land subtile on the map and use the vegetation density as the probability that the subtile is attempted as a starting point for vegetation. This time, we filter the assets down to the category “Trees”. Biome compatibility and the river prohibition apply as in mountain placement.

Placing props

Next, we generate a large number of random positions on the map and use them as potential starting locations for props. Yet again, we filter the assets appropriately to get the candidates of interest (skulls, logs, stumps, reefs, etc.) and apply the same conditions as for mountains and trees: biome compatibility and river prohibition.

Cleaning up the data

Now we have a list of prop references and prop offsets per world subtile. We sort these so that the ones at the bottom left are rendered last. Additionally, for each prop, we check whether all of its occupied subtiles are buried 5 layers deep or more, and remove such props, as they will never be rendered (we will only render up to 4 layers of props). Finally, we build the GPU texture from the resulting data.

Step 4: Render props on map

The structure used for rendering is an “image” where each pixel stores information about which props need to be rendered where. Instead of keeping a vector of (atlas_element_index, map_location) pairs, which is not very GPU-friendly, especially with 100k props, we take another approach: for each subtile on the map, we store references to up to four prop subtiles. The data required per layer are the following (sketched in code after the list), and they easily fit into a 128-bit texel (RGBA32):

  • Atlas element: number of animation frames
  • Atlas element: rect corner X
  • Atlas element: rect corner Y
  • Atlas stride X (for an animation, use the stride to jump to the other frames)
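Here’s a sketch of one layer reference packed one field per 32-bit channel, plus the frame lookup done at render time (names and field order are my assumptions):

#include <algorithm>
#include <array>
#include <cstdint>

using LayerTexel = std::array<uint32_t, 4>; // { numFrames, rectX, rectY, strideX }

LayerTexel PackLayer(uint32_t numFrames, uint32_t rectX, uint32_t rectY,
                     uint32_t strideX)
{
    return { numFrames, rectX, rectY, strideX };
}

// Rect corner X for the current animation frame: step along the atlas row.
uint32_t FrameCornerX(const LayerTexel& t, float timeSec, float fps)
{
    uint32_t frame = uint32_t(timeSec * fps) % std::max(t[0], 1u);
    return t[1] + frame * t[3];
}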

The size of the rect that we render is conveniently constant: it’s the size of a subtile. This, appropriately optimized, **should** be quite efficient, but alas, the rendering shader is very slow on an Intel-powered laptop with an oldish card. But that’s a different story. Here’s a video with the results.

There’s still room for improvement (there always is), but I need to move on to framework improvements, so for now this is the sort of map visuals that will be used. Well, with fewer reefs, for sure 🙂

Autotiling Adventures Part III: Detail biome textures and animated coastal waves

Previously I generated procedural masks for biomes and rivers, but used a constant color for each biome (and for the river). I ended up with nice outlines, but the result still looked flat. So I thought I’d add some procedural variation to the color using Perlin noise. Needless to say, the result was underwhelming. After quite a bit of hunting, I rediscovered a website I had stumbled upon years ago: The Spriters Resource! What got me really excited back then (even though I eventually forgot) is that I found tile art there for, among other games, Heroes of Might and Magic 3!

Heroes of Might and Magic III

And, sure enough, I found the section for the world tiles (the bg folder in the zip file)! Of course, these are commercial assets, so I can only use them for testing things out, but they are perfect for that, as I wanted to go for that art style anyway.

So, each terrain type has a bunch of 32×32 images that represent either the terrain in its entirety or transition tiles. I was too lazy to search online for any rhyme or reason to the naming of those files, so I did the natural thing: ran some batched image processing using ImageMagick to identify the tileable images.

Step 1: Find out the seamless tiles in all directions

To find out which images tile with themselves, I ran a tiling script. The Python call for the ImageMagick command line looks like this:

"magick montage {}-geometry +0+0 {}".format( (file_in + " ") * numrepeats * numrepeats, file_out)

where file_in is the input 32×32 file, file_out is the output tiled file, and numrepeats is the number of tile repeats in each axis.

Results look like this:

Good tile

Bad tile (it’s a transition tile)

So, great, now I have a list of tiles that are seamless with themselves. But, would they be seamless with other tiles?

Step 2: Add labels to tiles

Files are of the form “watrtl14.png”, “tgrd023.png”, etc.: a prefix for the terrain type, plus an ID number. So, the next step is to create images with a label in the middle of each image displaying the tile ID:

"magick convert {} -gravity center -annotate +0+0 {} {}".format(file, label, file_out)

Result is like this:

 

Step 3: Montage of different, labeled tiles

Now comes the fun part. As we have a labeled version of all the images, I run a montage again as in step 1, but with the following changes:

  • Use the labeled images as the base tiles
  • If we have, say, 20 images for a terrain, I sample from this set randomly to populate a 12×12 tile grid. This shows what doesn’t match with what! Here’s an example

All tiles match well with each other!

 

There are some tile IDs that are darker than their neighbours. Reading the labels, I can quickly identify them: 17, 22, 23, 24, 25.

Step 4: Assemble the texture array

Now I select which image set will be used for which biome, e.g. the water images are used for the water biomes (4 of them), creating variants that are slightly processed in terms of histogram/levels. For each biome, I select a random subset of 16 images. As I have 16 biomes, I end up with 256 images.

So, after a bit of work, the resulting texture looks like this:

Well, in reality I’m using a source image of 32×8192 that gets interpreted as a texture array of 256 slices, so I don’t have to write manual code for correct mipmapping in a texture atlas. A quick performance test didn’t show much of a difference.

So that’s all for creating the detail texture atlas! Now, onto applying it. There’s not much to write about that, as I’m just sampling the texture instead of using a constant color. So, here’s a before/after comparison:

For the observant readers: there are some slight artifacts at the tile borders in the above images (and the video below). This was an incorrect fract() on the UV coordinates, which has now been fixed. Here’s the associated video in all its animated glory:

The video shows before and after the detail textures, the coastal animation, and scrolling and zooming in/out on the map. For zooming in/out, I’m using texture filtering like this:

  • min filter: linear mipmap linear (to prevent noise when zoomed out)
  • mag filter: nearest (I still want it to look pixely when you zoom in)
  • wrap mode: repeat (it’s a texture array, so the filtering is taken care of automatically)
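For reference, the same setup in raw OpenGL, assuming the detail textures live in a GL_TEXTURE_2D_ARRAY (a sketch, not necessarily the engine’s actual code):

#include <GL/glew.h> // or your GL loader of choice

void SetupDetailArrayFiltering(GLuint detailTex)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY, detailTex);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // smooth when zoomed out
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);              // pixely when zoomed in
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
}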

Nevertheless, I found some other resource for future reference for manual filtering/mipmapping if at some point I have to use a regular atlas rather than a texture array.

Coastal waves animation

So that’s a nice-looking gimmick, but I’m going to write a bit about how it was implemented anyway.

Remember: instead of storing bitmasks for the biome transitions, we store distance fields to the boundary. When rendering, I process the layers one by one, blending the biome colours. It helps that seacoast is biome type 0 and the water biomes are types 1, 2, 3 and 4. So, if there’s coast, it will always be the biome type in layer index 0, and if there’s water, it will come right after. So, I need the following conditions to be true:

  • current layer is a water layer
  • first layer is coast
  • the pixel’s distance field value from the boundary of the current layer is negative (the pixel is within the mask, i.e. our “current” biome)

We need to have recorded the distance field value of the biome in layer index 1, regardless of the layer we’re on. This ensures we have the distance of the first water layer to the coast, which is what we need (“to the coast” is the crucial bit, as the distance field records values against the previous layer, and we don’t want the abyssal-sea-to-deep-sea distance if coast, deep sea and abyssal sea are layers on the same tile).

So, now that we have these, we need to compute the waves. For the waves, I’ll let the code do the talking, as it’s noise, domain warping and the usual:

float t = g_TotalTime*3;
t += 4.1*( snoise2(var_actual_pos*1)*0.5 + 0.5);
float cmpDist = 0.45 + 0.02*(sin(t)*0.5 + 0.5);
cmpDist = 0.41; // override the animated threshold with a constant value
if ( layer1dist > cmpDist)
{
    vec3 coastal_water_color = vec3(0.7,0.95,1.0);
    
    // Put crests at certain distances
    float distFromBoundary = layer1dist;
    float phase = -0.5*t + 2.0*( snoise2(var_actual_pos*10)*0.5 + 0.5) ;
    float dmin = cmpDist;
    float dmax = 0.5;
    float dn = (clamp(distFromBoundary,dmin,dmax) - dmin) / (dmax-dmin);
    float mixFactor = pow( sin(phase + dn*11.0)*0.5 + 0.5, 4.0); // sharpen the result with pow. adjust the phase with time    
    mixFactor *= smoothstep( -0.4, 0.4, snoise3( vec3(var_actual_pos*2.0 + vec2(1000),t*0.05)));
    
    mixFactor *= dn; // smoothly fade out the wave at the boundary distance
    curcolor.xyz = mix( curcolor.xyz, coastal_water_color.xyz, mixFactor);
}

One note about the cmpDist variable: 0.5 is exactly at the boundary (I encode a signed distance field in [0,1]), and a value slightly away from the coast would be around 0.45.

Next time I’ll try my luck with prop placement, and see if I can extract some sample props from that HoMM resource again for test use. But I might actually stop with the art soon, as I think it should be good enough by now.

Autotiling Adventures Part II: Procedural masks for biomes and rivers

Biome masks

 

In the last article, I described a way to autotile multiple biomes using a minimal set of mask shapes, using a custom map for testing. This time, I use some shaders to generate a nice big set of masks. In particular, I can generate, for example, 32 variations of each of the 4 shapes at 256×256 resolution. As we have 1 shape per RGBA texture component (our masks are grayscale), we need 32 RGBA textures, or a single 32-slice array. Stitching them up, the procedural masks look like this (rows: variations, columns: shapes):

 

These masks are generated using perlin noise, and then they are post-processed to remove floating islands. Here’s how:

  • We know that each shape contains 1 or 2 white regions and always 1 black region
  • Detect all black regions, sort them by area, and replace all but the largest with white (satisfying the “always 1 black region” criterion)
  • Detect all white regions, sort them by area, and replace all but the largest 1 (or 2) with black (satisfying the “always 1 or 2 white regions” criterion)
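A sketch of this cleanup pass (my code, not necessarily the one used here): assuming a binary mask in row-major order, we label regions with a BFS flood fill, sort them by area, and flip all but the largest few:

#include <algorithm>
#include <cstdint>
#include <queue>
#include <vector>

// Keep the `keep` largest regions of the given color (0 or 255); flip the rest.
void KeepLargestRegions(std::vector<uint8_t>& mask, int w, int h,
                        uint8_t color, size_t keep)
{
    std::vector<int> label(w * h, -1);
    std::vector<std::vector<int>> regions;
    for (int i = 0; i < w * h; ++i)
    {
        if (mask[i] != color || label[i] != -1)
            continue;
        label[i] = (int)regions.size();
        regions.emplace_back();
        std::queue<int> q;
        q.push(i);
        while (!q.empty())
        {
            int p = q.front(); q.pop();
            regions.back().push_back(p);
            int x = p % w, y = p / w;
            const int nx[4] = { x - 1, x + 1, x, x };
            const int ny[4] = { y, y, y - 1, y + 1 };
            for (int k = 0; k < 4; ++k)
            {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h)
                    continue;
                int n = ny[k] * w + nx[k];
                if (mask[n] == color && label[n] == -1) { label[n] = label[p]; q.push(n); }
            }
        }
    }
    std::sort(regions.begin(), regions.end(),
              [](const auto& a, const auto& b) { return a.size() > b.size(); });
    for (size_t r = keep; r < regions.size(); ++r)
        for (int p : regions[r])
            mask[p] = color ? 0 : 255; // flip to the opposite color
}

// KeepLargestRegions(mask, w, h, 0, 1);   // always 1 black region
// KeepLargestRegions(mask, w, h, 255, 2); // 1 or 2 white regions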

Here are the steps visually: left is the original image, middle is with extraneous black areas removed, right is the final, with the extraneous white areas removed:

 

At this stage, we calculate the distance field for each of the masks, at 256×256. The maximum distance in the field is the length of the diagonal, diag = 256*sqrt(2); we normalize the values from (-diag, diag) to (0,1) (i.e. v = d/(2*diag) + 0.5), to be resolution independent. We then downsample the distance field to 32×32, from which it can still reconstruct the shape nicely. The data is stored in an RGBA8 texture. If each variation is an array slice, we end up with a 32×32×N texture array. To give some perspective, 64 variations need 32*32*64*4 bytes = 256K of memory, which is very little. Add a bit extra for the mipmaps (which are good for filtering when zooming out further), and we’re settled with the biome masks.

Rendering the biome masks

Last time I described a way to render the masks by rendering a subset of tiles per layer. This is far from optimal (it was approach v1, after all). So, here’s a better one:

  • We observe that every tile has to be rendered (duh). That means we need a dense 2D data structure with tile data per element, so tile positions are now implicit.
  • We observe that we have up to 4 layers per tile. The info we need per layer is the layer index (4 bits), the mask index (3 bits) and the transform index (3 bits). That makes 10 bits per layer, so 40 bits in total. We place the data in a 64-bit structure (e.g. RGBA16 or RG32) and have 24 bits to spare. A packing sketch follows.
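A sketch of the packing (the field order within each 10-bit layer is my choice):

#include <cstdint>

// Pack four layers of (layer index, mask index, transform index) into 64 bits.
uint64_t PackTile(const uint32_t layerIdx[4], const uint32_t maskIdx[4],
                  const uint32_t xform[4])
{
    uint64_t packed = 0;
    for (int i = 0; i < 4; ++i)
    {
        uint64_t layer = (layerIdx[i] & 0xF)         // 4 bits: layer index
                       | ((maskIdx[i] & 0x7) << 4)   // 3 bits: mask index
                       | ((xform[i]   & 0x7) << 7);  // 3 bits: transform index
        packed |= layer << (10 * i);                 // 10 bits per layer
    }
    return packed; // upper 24 bits remain spare
}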

Now we render the visible grid, and we sample this data structure to reconstruct the mask. The pseudocode is roughly as follows:

for each pixel:
  calculate tile index and offset in tile
  shift output position by half a tile // for corner offset
  sample autotile data based on tile index
  set output color as 0
  for each valid layer
    transform tile offset using layer transform
    sample mask using transformed coordinates and mask index
    calculate color based on layer
    blend output color with current color based on mask value

 

River (and road) masks

River masks are slightly different to biome masks and have the following characteristics:

  • The tiles where we need river masks are few: for my map, it was 1.5% of the total tiles.
  • It is not beneficial any more to use corner offsets.
  • There is no diagonal river connection.
  • All river tiles connect to at least one river tile.
  • There is always a source/origin tile for rivers. The origin tile is always connected to one other tile.

Given the above, we realize that we really only need 5 different masks: origin, line, corner, t-junction and cross. Below is a list of examples:

We follow the same process as with the biome masks: we remove extraneous white/black regions, calculate distance fields and downsample to 32×32.

Here’s also a video that demonstrates all the mask shapes, procedurally generated, parameterized by time:

As you can see, for the river masks there are typically big black holes in the middle, but they are filled out by the process I described earlier.

River and road rendering

The process differs a bit from the biome mask rendering. Now we have a sparse set of tiles that contain river/road data. The tile data required are 3 bits for the mask index, 3 bits for the transform, and 10 bits for each of the tile’s x,y coordinates (I’m using 512×512 maps for the overworld and I doubt I’d use 2048 or larger). The rendering pseudocode is similar to, and a bit simpler than, the biome mask version: for each tile, we unpack the autotile data, calculate the output position from the x,y values, and sample the mask using transformed coordinates.

The river/road render passes take place immediately after the biome mask pass, as they use a sparse tile renderer (biome masks rendering uses a dense tile renderer) and they don’t use corner offsets.

Putting it all together

 

Here’s a video that shows all mask on the map, compared to the single color per pixel:

As a note, the original single-color-per-pixel version has some additional color variation based on the vegetation density, which the new masked version does not have yet. Also, I think there’s an indexing bug in the variations: I should have 64 different variations per shape, yet we can see the occasional repetition.

TODOs for next time are coastal water animation utilising the distance field, color variation for the biomes (they still look quite flat), and prop locations per tile for the placement of trees, etc.

Autotiling adventures

So, we have a procedurally generated biome map, where each pixel is an individual biome. If we zoom in, it’s obviously quite pixelated. If we add sprites, it just doesn’t look right (besides the lighting and colors).

We have reasonably detailed sprites on a single-colored squares. It’s just ugly. We need to add some texture/detail.

Auto-tiling

 

Enter auto-tiling (or transition tiles): a method to automatically place tiles with texture so that each tile matches perfectly with its neighbours. It’s a bit of a wild west out there in terms of approaches, so here are some resources (or resource lists) that I found useful:

https://pixelation.org/index.php?topic=9865.msg107117#msg107117
https://gamedevelopment.tutsplus.com/tutorials/how-to-use-tile-bitmasking-to-auto-tile-your-level-layouts--cms-25673
http://www.cr31.co.uk/stagecast/wang/blob.html
https://www.codeproject.com/Articles/106884/Implementing-Auto-tiling-Functionality-in-a-Tile-M

https://gamedev.stackexchange.com/questions/32583/map-tile-terrain-transitions-with-3-4-different-types
https://www.gamedev.net/forums/topic/606520-tile-transitions/
https://gamedev.stackexchange.com/questions/26152/how-can-i-design-good-continuous-seamless-tiles
http://allacrost.sourceforge.net/wiki/index.php/Video_Engine_Visual_Effects#Transition_Tiles
https://web.archive.org/web/20161023185925/http://www.saltgames.com/article/awareTiles/
https://www.gamedev.net/search/?q=tile%20transitions&fromCSE=1#gsc.tab=0&gsc.q=tile%20transitions&gsc.page=1
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/tilemap-based-game-techniques-handling-terrai-r934/
http://www-cs-students.stanford.edu/~amitp/gameprog.html#tiles
http://playtechs.blogspot.co.uk/2007/04/tileset-design-tip.html
https://opengameart.org/forumtopic/auto-tiling-standardization
https://forums.tigsource.com/index.php?topic=21237.0
http://www.pascalgamedevelopment.com/showthread.php?22862-Tilemaps-and-autotiles
https://gamedev.stackexchange.com/questions/125284/what-is-the-term-for-a-3x3-tile-set-used-to-create-larger-areas/125285
https://forum.unity.com/threads/2d-tile-mapping-in-unity-demystified.441444/

Quite a few.

There are two main ways to consider art when using autotiles: using masks, or using premade textures. A good example is shown here:

Autotiling example

https://i1.wp.com/allacrost.sourceforge.net/wiki/images/1/19/Autotileex.gif?w=630

Blend masks example

https://i0.wp.com/allacrost.sourceforge.net/wiki/images/3/34/Blendmaskex.gif?w=630

The premade tiles have the obvious benefit that they can be very nicely done, but of course they are tied to the content they represent. The blend masks do not look as good, but they are easier to develop and more flexible in terms of what textures we want to seamlessly mix. I decided to use masks, as I want transitions between any pair of biomes: for 16 biome types, that’s 120 unique combinations. It’s not an option to ask an artist to develop 120 different autotiles; that needs quite a bit of money and time. Also, there would be no variation: each autotile would be replicated all over the place, so it would be easy to spot patterns.

 

Grid shifting

The first naive thought that comes to mind (and I actually went with it for a while) is: “OK, we have a tile, it neighbours 4 or 8 other tiles, so generate masks according to that relationship”. Example here. As one can see, the 4-connected version is less interesting than the 8-connected version (and we don’t want less interesting), but the 8-connected version results in a lot of combinations! So what do we do? We shift the grid. This way, we always have 4 potentially different tiles (quarters of them, anyway).

Below, we shift the whole grid top-left by half a tile. Now, each grid cell (red) always contains parts of 4 tiles.

While this is mentioned in a few articles, it’s demonstrated perfectly in a non-technical article, here. That’s what sold me, as I find the results amazing!

 

Reducing unique combinations

So, that’s most of what I’ve found in the reference material so far. Now, a 2×2 grid as described can contain 4 different biomes. That’s 4 bits, therefore 16 possible combinations/arrangements. Here’s how they look (source):

 

In the “16 most basic tiles” above, we can observe the following:

  • No. 16 can be expressed by transforming No. 15 (180 deg rotation)
  • Nos. 11, 13, 14 can be expressed by transforming No. 10 (90, 180, 270 deg rotations)
  • Nos. 3, 9, 7 can be expressed by transforming No. 1 (90, 180, 270 deg rotations)
  • Nos. 2, 6, 8 can be expressed by transforming No. 4 (90, 180, 270 deg rotations)
  • Nos. 5, 12 contain no spatially varying data

This implies that the only unique tiles are 1, 4, 10, 15, 5 and 12. Furthermore, the only unique tiles with spatially varying data are 1, 4, 10 and 15. So, that’s 4 tiles instead of 16. We can arrange such a mask of 4 tiles like this:

This has a nice continuous shape, useful if, for example, we want to ask an artist to draw some of these. Note that with this arrangement the transforms will differ, as the masks are already transformed compared to what I showed above. What’s really important is that the proportion of white vs black at the borders that contain both always matches, so that tiles combine seamlessly. In my case above, I split at 50%, but that’s of course configurable. What I’m not going to cover, as I’ve given it some thought and it gets very complicated, is supporting variable black/white border percentages while ensuring that they still match. There are many more complications involved, and I’m not sure it’s worth it in the end.

So, now we have 4 unique combinations. These can be nicely stored in an RGBA texture (one mask per channel) by converting the above 1×4 tile image. In the shader, given a mask value in [0,15], we effectively do the following:

mask = ... // obtain mask value from 4 underlying biome map tiles. Value in [0,15]
(mask_unique, transform) = get_transform(mask); // use a lookup table to get the unique mask index [0,3] and the transform needed
uv2 = apply_transform(uv, transform); // transform the texture coordinates
mask_rgba = sample_mask_texture(uv2); // sample the mask values
mask_value = -1;
switch(mask_unique)
{

    case 4:  mask_value = 0; break; // whole mask is empty
    case 5:  mask_value = 1; break; // whole mask is full
    default: mask_value = mask_rgba[mask_unique]; // get the component that we need
}

Most of the above can be done in the vertex shader, whereas the last two steps (sampling the texture and getting the component) need to be done in the pixel shader. So, it’s quite cheap.

Rendering tiles

So, we have a method to render tiles given a very small number of masks. How do we render the tiles? Here’s the naive approach, for a 512×512 biome map:

  • We have 16 biome layers, so I assign each a priority. E.g. shallow water covers coast. Water covers shallow water and coast. Deep water covers water, shallow water and coast. And so on.
  • For each layer, we generate tile render data as follows:
    • For each tile corner in the biome (513×513 grid of points)
      • Sample the 4 adjacent biome types (clamp to border if out of bounds)
      • Create the mask, where we set 1 if the layer is of equal or higher priority than the current one, and 0 otherwise
      • Based on the mask value, calculate unique mask index and transform, and store in this tile’s render data

So, now we have a list of 513x513x16 tiles = 4.21 million. That’s quite a lot. But as I said, that’s the naive version. Observations:

  • When the unique mask index corresponds to constant 0 (mask_unique == 4), we don’t need to consider the tile for rendering.
  • When all four biome values in a tile are of higher priority than the current layer, the tile will be completely covered by higher-priority layer tiles, and therefore we don’t need to render it.

By applying these two, for my test map I reduced the number of tiles to 0.4 million, which is 10x better. Of course, that’s still a lot, but it doesn’t take into account any spatial hierarchy and other optimisations that could be done.

Here are some examples using the above un-nice mask. Zoomed-out:

Zoomed-in

OK, so my mask looks bad, and there’s little to no variation, so you can see patterns everywhere.

Increasing variation

Using 256×256 masks, a single RGBA texture needs 256K of memory. We can have a texture array of such masks, using however many layers we can afford memory-wise. At runtime, we can select the layer based on various conditions: e.g. some layers could contain transition masks for particular biomes, or, more generally, we can select a layer as a function of the tile coordinates.

Next…

The next post will be about procedurally generating lots of masks, using distance fields versus binary masks, and determining locations for the placement of props.

Shader variables

Since the game will utilize graphics quite a bit, in the style of old SNES-era games (multiple layers, lots of sprites), it needs a rendering engine above trivial level. Additionally, since much of the rendering will be based on procedural techniques, that means lots of shaders. Lots of shaders require configurability via uniform variables. And this is the topic of this post.

A shader variable (ShaderVar) is an abstraction for such uniform variables. The abstraction allows manipulating the value via ImGui (using optional min/max ranges for integer/float variables and vectors) and updating the values in the OpenGL state. These variable abstractions can also solve the problem of automatically binding textures, which can be a bit of a pain to manage in OpenGL. Finally, we can add stat gathering to identify, at a render call, any variables which haven’t been set, which can be quite useful for debugging. A brief overview follows.

Effect loading. Inspect the loaded effect (program) for uniform variables used by the shaders. Make two lists: one for textures/buffers and one for other values. The index in the sampler list is the texture unit to which we bind the corresponding texture or sampler.

ShaderVars class. An abstraction for a group of ShaderVar objects. Each object has a name, a value, and a uniform location for an arbitrary number of effects. That means we can do the following:

SetShaderVar<float>( shaderVars, "g_Speed", 0.5f); // Set the value 0.5f to the variable g_Speed
...
UpdateShaderVars( shaderVars, fx1);// If g_Speed exists in fx1, it's set as 0.5f
...
UpdateShaderVars( shaderVars, fx2);// If g_Speed exists in fx2, it's set as 0.5f

It’s not really complicated underneath, but it serves as a nice abstraction: the underlying implementation doesn’t deal with strings, just uniform locations and vectors of such locations. At the moment I’m using strings for setting values, but this can (and will) be changed to use other forms, such as properties.

Global and local ShaderVars. When we’re about to render, we can update the shader using several such blocks. For example, one block could hold globals for the whole application (window width/height), another globals for the current frame (current time), and others more specific values, such as common values for overworld rendering (the grid section currently in view, etc.). These globals can be stored in the registry and fetched using a handle. After the globals are set, we update the effect with any local shader variables. In case of a clash, we override with the most local version of the variable. Such overwrites can also be detected, warning of any misuse of the system.

Here’s how a few sections look in the config files:

// Some shadervar blocks
"ShaderVars" : [
    { "GlobalPerApplication" : {
        "@factory" : "ShaderVarsSeparate",
        "ShaderVars" : [ 
        ]
    }},
    { "GlobalPerFrame" : {
        "@factory" : "ShaderVarsSeparate",
        "ShaderVars" : [ 
            {"Name" : "g_TotalTime", "@factory" : "ShaderVarFloat"}
        ]
    }},
    { "GlobalOverworld" : {
        "@factory" : "ShaderVarsSeparate",
        "ShaderVars" : [ 
            {"Name" : "g_HeightScale", "@factory" : "ShaderVarFloat", "Values" : [0.0], "Min" : 0, "Max" : 4},
            {"Name" : "g_BiomeMap", "@factory" : "ShaderVarTextureStatic", "Values" : ["biome"]},
            {"Name" : "g_SpriteOffsetY", "@factory" : "ShaderVarFloat", "Values" : [0.5], "Min" : 0, "Max" : 1},
            {"Name" : "g_TileMapRects", "@factory" : "ShaderVarTextureBufferStatic", "Values" : ["dcss_rects"]},
            {"Name" : "g_TileMap", "@factory" : "ShaderVarTextureStatic", "Values" : ["dcss"]},
            {"Name" : "g_ResourcesMap", "@factory" : "ShaderVarTextureStatic", "Values" : ["resources"]}
        ]
    }},
    { "GlobalFlashing" : {
        "@factory" : "ShaderVarsSeparate",
        "ShaderVars" : [ 
            {"Name" : "g_FlashMinIntensity", "@factory" : "ShaderVarFloat", "Values" : [0.5], "Min" : 0, "Max" : 1},
            {"Name" : "g_FlashMaxIntensity", "@factory" : "ShaderVarFloat", "Values" : [1.0], "Min" : 0, "Max" : 1},
            {"Name" : "g_FlashPeriod", "@factory" : "ShaderVarFloat", "Values" : [2.0], "Min" : 0, "Max" : 5}
        ]
    }}
],
... 
// Some renderers. They can use shadervar blocks
{"OverworldDense" : {
    "@factory" : "RendererGrid2Dense",
    "Fx" : "OverworldDense",
    "ShaderVars" : ["GlobalPerFrame", "GlobalOverworld"],
    "DepthTest" : true
}},
{"GridSparseHighlight" : {
    "@factory" : "RendererGrid2Sparse",
    "Fx" : "GridSparseHighlight",
    "TextureSamplers" : { "g_TileMap" : "nearest_clamp" },
    "ShaderVars" : ["GlobalPerFrame", "GlobalFlashing","GlobalOverworld"],
    "DepthTest" : true
}},
....
// A renderable. They can use local shadervar blocks
{ "griddense" : { 
    "@factory" : "RenderableTileGrid2Widget",
    "Renderer" : "OverworldDense",
    "ShaderVars" : {
        "@factory" : "ShaderVarsSeparate",
        "ShaderVars" : [
            {"Name" : "g_Color", "@factory" : "ShaderVarColor", "Values" : [[255,255,255,255]]}
        ]
    }
}},

Note: the reason I’m using an additional ShaderVars abstraction is that in the future I want to consider uniform buffer objects for many shader variable blocks, as they’re more efficient. But of course, this will only happen when the slowdowns begin, which is not now.

So, that’s it for this time. I’m also currently toying with introducing framebuffer objects into the system (so that renderers and renderables can be configured via script to render to an offscreen surface), so that we can have more flexible render paths. Also coming up: an autotiling implementation using all of the above.

Automatic pixel art from photos

Disclaimer: Properly authored pixel art is awesome. Automated pixel art is fast food: great when you don’t have enough money (to hire) or time (to author). And does the trick when you’re starving.

I’m not an artist, I love pixel art, and frequently I want something here and now. So, how do I get copious amounts of pixel art without bugging a pixel artist or becoming one myself? Software, of course. The style I’m after is a retro 90s look: slightly pixelated and with a limited, painterly color palette. A few examples:

Old game art, fantastic colours, painterly look:

Pixel art, great mood and selection of colours

 

Great tile design (sprites are sometimes too “cute” for me unfortunately) and great colours. I bought them as I love them! 🙂

So, while I don’t hope to automatically generate stuff of quality like the above from photos, I made a tool to convert landscape photos to a pixel-art style. There are two components to the process:

  • Palettization (Mapping full colour range to a limited palette)
  • Pixelation (Downscaling the image to look a bit retro)

My approach is quite simple, and is as follows:

  • Load the source image
  • Select a color difference function (I used code from here)
  • Convert image pixels to color space used by the difference function
  • Select a palette. I got some from here. Additionally, I got all the unique colors in Oryx tileset and made a palette out of them too (the largest palette: about 1000 colors)
  • Convert palette pixels to color space used by the difference function
  • For each pixel, select the palette entry that minimizes the color difference between itself and the source pixel.
  • Downscale the image by a factor: for each N×N block of pixels that corresponds to a 1×1 pixel in the downscaled image, pick the palette color that appears most often.
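A compact sketch of the two passes (nearest-palette mapping plus majority-vote downscale). I’m using plain Euclidean RGB distance for brevity, where the tool uses the CIE difference functions:

#include <cstdint>
#include <map>
#include <vector>

struct RGB { uint8_t r, g, b; };

int Dist2(RGB a, RGB b)
{
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

RGB Nearest(RGB c, const std::vector<RGB>& palette)
{
    RGB best = palette[0];
    int bestD = Dist2(c, best);
    for (RGB p : palette)
    {
        int d = Dist2(c, p);
        if (d < bestD) { bestD = d; best = p; }
    }
    return best;
}

// Downscale by n: each nxn block becomes the palette color with the most votes.
std::vector<RGB> Pixelate(const std::vector<RGB>& img, int w, int h, int n,
                          const std::vector<RGB>& palette)
{
    std::vector<RGB> out;
    for (int by = 0; by + n <= h; by += n)
        for (int bx = 0; bx + n <= w; bx += n)
        {
            std::map<uint32_t, int> votes;
            for (int y = by; y < by + n; ++y)
                for (int x = bx; x < bx + n; ++x)
                {
                    RGB q = Nearest(img[y * w + x], palette);
                    votes[uint32_t(q.r) << 16 | uint32_t(q.g) << 8 | q.b]++;
                }
            auto top = votes.begin();
            for (auto it = votes.begin(); it != votes.end(); ++it)
                if (it->second > top->second) top = it;
            out.push_back({ uint8_t(top->first >> 16), uint8_t(top->first >> 8),
                            uint8_t(top->first) });
        }
    return out;
}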

And that’s it! So, I tried the above on a few images (found via Google; none of them is mine), and I got… very mixed results. Below I’ll show the original image and a few good/bad/quirky results.

Mountain/River

Cie94 w/ Oryx, downscaled 2x. Good

Cie2000 w/ famicube, downscaled 2x. Not good

Cie94, GraphicArts w/ psygnosia, downscaled 2x. Lo-spec but good!

Atlantis

Cie94 GraphicArts w/ Oryx, downscaled 2x, good

Cie94 GraphicArts w/ famicube, downscaled 2x, good. Sharks are a bit of a problem as the sea water bleeds in

Cie1976 w/ Endesga-16, downscaled 2x, bad.

Euclidean distance w/ Oryx, downscaled 2x, bad

River

Cie2000 w/ oryx, downscaled just 1x, it’s way too realistic.

Cie94 GraphicArts w/ Oryx, downscaled 3x, a bit better, but still a bit realistic

Cie2000 GraphicArts w/ Endesga-32, downscaled 3x. Not as realistic, but a bit worse quality.

Castle

Cie1976 w/ oryx, downscaled 1x. A bit too realistic

Cie1976 w/ famicube, downscaled 1x. A bit too damaged and noisy.

Mexico

Cie2000 w/ oryx, downscaled 1x. A bit too realistic. Additional downscale would destroy the geometrical details.

Cie2000 w/ famicube, not good.

Cie94 Textiles w/ psygnosia, quite bad.

Underwater ruins

Cie2000 w/ oryx, downscaled 2x. This looks great! Good for a change.

Cie2000 w/ famicube, not so great.

Cie2000 w/ psygnosia, not great either. the water is gray and the shark is bluish. Well … no.

Sahara

Cie2000 w/ oryx, doable

Cie2000 w/ endesga, quite bad, but at least is good in making the JPG artifacts very very visible.

Cie2000 w/ psygnosia, not that bad actually! Even if quite lo-spec.

Yucatan

Cie94 Textiles, w/ aap64, downscaled 3x. A bit too damaged, but I like it

Cie2000 w/ oryx, downscaled 3x. It’s good, but a bit too realistic

 

So, the experiment was a failure, but I learned a few things from it:

  • Most important: the visual appeal of the results greatly depends on the colours used in the original. A grayish-brown image won’t magically become colourful just because the target palette is, and a simple color distance doesn’t solve the issue. We need a more sophisticated color transfer.
  • Distinguish between surface texture and geometric silhouettes: surface texture colours need to be somewhat flattened, while silhouettes need to be preserved
    • could use a bilateral filter, plus edge detection
  • Consider dithering. It can reduce color error, but do we want that? It certainly helps with the blotches/banding.
  • Using a palette with lots of colours doesn’t mean we should strive to use all of them. The color distance metric tries to preserve the original colours, which reads as realistic. We don’t want that.
  • Pick the brains of pixel artists for their approach (Duh)
  • Use high-quality images, with minimal JPG compression artifacts (Duh, but I was too lazy for this one)
  • Use Photoshop/GIMP/etc.: the more sophisticated the algorithm gets, the more tedious it is to write/update a custom tool to do it.

Front-end: Camera

We currently have overworld maps that can be visualized with a color per pixel, depending on the pixel’s biome data. In the actual game, each of these pixels will be a tile, onto which we’ll map textures, and on which we can even put geometry. There are many ways to view such maps, for example various configurations of perspective and orthographic projections.

So far I’ve been using a top-down view for ease of implementation. As I wanted to experiment with several visualizations, I changed the implementation to support a fully configurable camera, as well as tile highlighting and edge scrolling as before. Here are some examples of an overworld map, visualized with a few cameras (an orthographic top-down one and 3 perspective ones, zoomed in and out):

The art that I’m going to use will probably be just 2D sprites. The beauty of 2D sprites is that they can be used in both 2D and 3D views; in the latter, as billboards. The challenge is to make them look good when integrated into a 3D environment. Here are some examples of a basic, naive integration of some DCSS tiles into the map:

So, these are the tiles rendered in the top-down view – nothing special so far. Below, we try billboard rendering, before and after, using a side camera:

When using such cameras, it’s clear that sprites look much better as billboards rather than flat mapped onto the terrain. Some more billboard examples using more cameras (1 isometric and 3 perspective):

(Note: Any sprite overlaps are due to my laziness; the positions for the billboards are generated randomly and therefore sometimes overlap)

I think it’s safe to say that it works well, but the sprites don’t integrate perfectly. I’m not an artist, so I need some informed opinions on how to improve the visual result just by processing the sprites and data that I have (e.g. using a specific shared color palette for both overworld and sprites, processing the sprite boundary, adding some fake environment lighting, etc.). Also, DCSS sprites occasionally have integrated shadows, which funnily enough works for many of the views, but they’re baked into the sprite, so not really controllable.

Here is a video of a perspective/ortho cameras using billboard version, to showcase the 3D look.

While the 3D effect works, the visuals can be massively improved. For example:

  • Actual textures for the overworld, including rivers and sea (tileset hunting plus procedural should do it for now)
  • Vegetation and mountain sprites on the tiles (I need tilesets for that)
  • better sprite integration with the map (shadows? silhouettes? env lighting?)
  • Optionally, a basic elevation visualization, with lighting (mutually exclusive with mountain sprites, but this is easy to do)

Of course I’ll need an artist for sprites and textures, so at the moment I have to make do with my mad (bad?) pcg skills. And for even later, there’s also animated fog of war, particle systems and other bells and whistles, because graphics programming is fun 🙂

Generating overworld resources

Cities and civilizations need resources to survive and thrive, and our case is no different. So, before placing any cities, we need to generate the resources that the world uses. I’ve decided to split the resources into 3 groups: food, basic materials and rare materials. The first two are found in varying quantities pretty much everywhere, and civilizations can use them immediately. Rare materials, on the other hand, are not easily found: they need to be discovered first, and they also need to be mined. On the plus side, there will be enough incentive to explore, discover and mine such materials (wealth, influence, advanced structures and items, etc.).

Tile resources

From a macro point of view, each tile of the overworld has the following resource information:

  • Food: used to feed population. Obtained from sources such as vegetation and wildlife. Value in [0,255].
  • Basic materials: used for buildings and construction of everyday items. Obtained from the environment; encompasses materials such as stone/leather/wood/iron. Value in [0,255].
  • Rare materials: special, difficult-to-find/mine materials, used for magic and/or construction of advanced buildings/items. Examples include silver, gold, mithril, crystal, gems, etc. A value of [0,1] per rare material type.

So, each tile resource can be represented with a 32-bit value: 8 bits for food, 8 bits for basic materials and 16 bits for rare materials (for a maximum of 16 rare materials). Several rare materials can co-exist at the same tile.

Rare material traits

A rare material, with regards to generation of a map that contains them, has the following main traits:

  • BiomeSuitableRange: This is a collection of ranges per biome parameter, e.g. temperature, humidity, elevation, water type, etc. So, for example, some materials can be spawned only in high altitude areas of extreme cold, etc.
  • Chance: This is the chance of a rare material spawning at a suitable tile. So, the effective spawning chance is chance multiplied by the probability of a tile being suitable.

Tile resources map generation

In order to generate this “tile resources” map, we need to have calculated the biome map first.

The first step in the process is to calculate all the candidate tiles per rare resource type. At this stage, we also calculate the food and materials per tile, as a function of the biome. I’m currently using an extremely naive mapping of wildlife density to food and vegetation to materials, but that should change later on.

We then shuffle the candidate list and pick the first N points, where N = max(chance * totalCandidateNum, min(1, totalCandidateNum)). So, if we have no candidates, we won’t generate any, and if we have at least 1 candidate, we generate at least one point (sketched below). And that’s it, really! Pretty simple, but it does the job. Here’s an example of a rare material’s distribution; there are only tens of them on the whole map, so it could be quite a coveted material to be able to mine and get access to.
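A sketch of that selection formula (the Point type and function name are mine):

#include <algorithm>
#include <random>
#include <vector>

struct Point { int x, y; };

std::vector<Point> PickSpawnPoints(std::vector<Point> candidates, float chance,
                                   std::mt19937& rng)
{
    std::shuffle(candidates.begin(), candidates.end(), rng);
    const size_t total = candidates.size();
    // N = max(chance * total, min(1, total)): at least one point if any candidate exists.
    const size_t n = std::max<size_t>(size_t(chance * total),
                                      std::min<size_t>(1, total));
    candidates.resize(n);
    return candidates;
}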

Overworld map generation

My goal is to generate an overworld map where each tile covers an area of about a hundred square km (on normal terrain, a regular unit would need a day to cross a regular tile). The overworld needs to contain islands, continents and biomes. The output of this process is a 2D “image”, with 32 bits of data per pixel (like an RGBA PNG file) that completely describe what the environment of a tile is like. I’m going for plausible rather than realistic: I want to be able to create maps that are fun to play. Below, I’m going to go through the various steps of the process. All passes except landmass labeling and river generation run on the GPU, as the calculations are typically parallel. The whole process takes about 60 milliseconds for 512×512 maps, so we can tinker with all sorts of parameters and see the results in real time.

Continent mask

The first step is the creation of the seed continents. These are not necessarily the final continents, but they help construct the base for the big landmasses. The continents start off as a small set of scaled and rotated ellipses. Everything about these ellipses is randomized: number, scale, rotation, eccentricity.

The next step is to distort the boundary of the ellipses using Perlin noise. Effectively, we warp the point we’re on before testing whether it’s inside or outside one of the ellipses. There are two parameters for this: warp frequency (how much the warp can differ between 2 adjacent pixels) and warp magnitude (how far the warped point can get from the original). Some examples of increasing frequency:





For the rest of the post, let’s choose the one before last. At the end of this stage, we have a map that stores whether we’re inside or outside big, continent-like landmasses.
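Here’s a sketch of the warped inside/outside test. I’m assuming a 2D noise function noise2(x, y) returning values in [-1,1] (Perlin/simplex) and an Ellipse helper; both are stand-ins, not the actual generator code:

#include <cmath>
#include <vector>

float noise2(float x, float y); // assumed: 2D Perlin/simplex noise in [-1,1]

struct Ellipse
{
    float cx, cy, rx, ry, rot; // randomized center, radii, rotation
    bool Contains(float x, float y) const
    {
        float dx = x - cx, dy = y - cy;
        float c = std::cos(-rot), s = std::sin(-rot);
        float lx = (dx * c - dy * s) / rx;
        float ly = (dx * s + dy * c) / ry;
        return lx * lx + ly * ly <= 1.0f;
    }
};

bool InsideContinent(float x, float y, const std::vector<Ellipse>& ellipses,
                     float warpFreq, float warpMag)
{
    // Domain warping: displace the sample point before the ellipse tests.
    float wx = x + warpMag * noise2(x * warpFreq, y * warpFreq);
    float wy = y + warpMag * noise2(x * warpFreq + 1000.0f, y * warpFreq);
    for (const Ellipse& e : ellipses)
        if (e.Contains(wx, wy))
            return true;
    return false;
}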

Continent mask distance field

This step calculates a limited distance field around the coastline of the continents, which will be useful for the actual heightmap generation. We calculate distances from the coastline (d = 0) up to a number of pixels away from it (e.g. d = 32), and we map the values 0-1 to this clamped distance range.

Heightmap

This step calculates an 8-bit heightmap with values in [-1,1], positive numbers representing land. We don’t care about it looking too realistic, as the heightmap will only be used implicitly, as an input to other parts of the generator.

Landmass mask

This step creates the final landmasses. We’re just using the heightmap to generate this, comparing the height values against 0.

Landmass distance field

This step does the exact same process as the continent mask distance field, but on the landmass mask.

Landmass labeling

This step does a flood fill over the heightmap, detects landmasses, and classifies them by size (rocks, islets, islands and continents), given user-defined area thresholds. There can be a maximum of 63 continents given the current bit budget, but of course that’s flexible. The continents are also uniquely labeled at this step (all tiles that belong to continent 2 store the value 2 somewhere — see the Biome data section below). Additionally, bodies of water completely enclosed by a landmass are marked as part of it, so that they can correctly be identified as lakes later on.

Rivers

This step generates the rivers in the overworld. Effectively, given some parameters such as minimum river proximity to each other and min/max river length, we generate rivers. This is done by sampling random points on the map and testing whether they are appropriate starting locations (e.g. on or by a mountain). If a point satisfies the conditions, we attempt to generate a path, with branching; the path follows the heights downward until it reaches a lake or the sea, hits the maximum length, or can’t go any further for any reason. Below are two examples with different density. Roughly, a single (branchless) river walk looks like this:
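(A sketch using the heightmap, where positive values are land, with a greedy downhill step; branching and the proximity checks are omitted.)

#include <vector>

std::vector<int> TraceRiver(int start, const std::vector<float>& height,
                            int w, int h, int maxLen)
{
    std::vector<int> path { start };
    int cur = start;
    while ((int)path.size() < maxLen && height[cur] > 0.0f) // stop at sea/lake level
    {
        int x = cur % w, y = cur / w;
        int best = -1;
        float bestH = height[cur];
        const int dx[4] = { -1, 1, 0, 0 }, dy[4] = { 0, 0, -1, 1 };
        for (int k = 0; k < 4; ++k)
        {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h)
                continue;
            int n = ny * w + nx;
            if (height[n] < bestH) { bestH = height[n]; best = n; }
        }
        if (best < 0)
            break; // local minimum: can't go any further
        path.push_back(best);
        cur = best;
    }
    return path;
}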

Humidity

This step generates the humidity of each tile, taking into account the outline, heights and freshwater. The base map is calculated with Perlin noise, then adjusted based on whether a tile is water or land: areas in and near water are more humid. It is also affected by the freshwater mask, which gets heavily blurred and added as extra humidity; this guarantees that there almost never are rivers in the desert, or swamps without a body of water nearby.

Temperature

This step generates the temperature of each tile, also taking into account the outline, heights and freshwater. The base map is calculated with Perlin noise, then adjusted based on whether a tile is water or land: on land, we sample from a heavily blurred heightmap and reduce the temperature based on that regional average height. This lowers temperatures in regions with a lot of high mountains. Additionally, the regional existence of water reduces temperatures a bit.

Biome data

At this point, we’re almost done! This step samples all maps and packs them into a 32-bit output map. These 32 bits encode the biome detail in a coarse way.

Here’s the breakdown (a bitfield sketch follows the list):

  • Temperature: 3 bits
  • Humidity: 3 bits
  • Elevation: 2 bits // height or depth, depending on water type
  • Water type:  2 bits // none, river, lake, sea
  • IsCoast: 1 bit
  • Vegetation density: 3 bits
  • Wildlife density: 3 bits
  • Continent ID: 6 bits
  • Landmass size: 2 bits
  • Biome type: 4 bits // one of the 16 predefined biomes
  • Padding: 3 bits
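The 32-bit layout above maps naturally to C bitfields; a sketch (the field order is my choice, and bitfield packing is implementation-defined, though mainstream compilers pack this into 4 bytes):

#include <cstdint>

struct BiomeTexel
{
    uint32_t temperature  : 3;
    uint32_t humidity     : 3;
    uint32_t elevation    : 2; // height or depth, depending on water type
    uint32_t waterType    : 2; // none, river, lake, sea
    uint32_t isCoast      : 1;
    uint32_t vegetation   : 3;
    uint32_t wildlife     : 3;
    uint32_t continentId  : 6; // up to 63 continents
    uint32_t landmassSize : 2; // rock, islet, island, continent
    uint32_t biomeType    : 4; // one of the 16 predefined biomes
    uint32_t padding      : 3;
};
static_assert(sizeof(BiomeTexel) == 4, "expected a 32-bit texel");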

For many of the above (temperature, humidity, elevation), we quantize the (typically 8-bit) data that we already have to the bits above. The biome type is calculated from the rest of the values (temperature, humidity, etc), and is one of the following:

Sea Coast, Shallow Water, Sea, Deep Sea, Abyssal Sea, Tundra, Alpine, Desert, Boreal Forest, Temperate Rainforest, Tropical Rainforest, Temperate Deciduous Forest, Tropical Seasonal Forest, Temperate Grassland, Savannah, Wetland

Some of the values are calculated in this step:

  • WaterType: calculated based on whether it’s a river tile, the landmass ID and the height.
  • IsCoast: calculated based on whether we’re on land, sampling all neighbours for any sea tile.
  • Vegetation density: More perlin noise, adjusted by humidity, temperature and height
  • Wildlife density: More perlin noise, adjusted by humidity, temperature, height, freshwater and vegetation

Here’s a visualization of the vegetation density:

… and the wildlife density:

Depending on the biome type we can distribute flora, fauna, resources, civilisations, etc.

Here’s a video of the GUI tool in action!

Other examples here:

Closing notes

The format might be adjusted in the future to use those padding bits for some extra information, for example the freshwater direction in river tiles (2 bits). There is also a dynamic magic map which specifies, in areas of high magic, special happenings such as rifts, summons, portals, etc. Additionally, there’s tile resource generation, which will be covered next time.

More messaging and shader parameterisation

Last time I gave a brief description of how messaging (and my dirt-simple implementation of it) can help with decoupling. But of course that was just scratching the surface. So, in this post, a bit more information on how the whole system is put together.

Messaging changes

Messages can now also store an explicit message handler. In terms of the example I used last time, the new message would be as follows:

class cEntity;

struct cEntityCreated : public cMessage
{
    explicit cEntityCreated( const cEntity& zEntity, const cMessageHandler * handler = nullptr)
    :cMessage(msType,handler),mEntity(zEntity){}

    const cEntity& mEntity;

    static const int msType = 2;
};

So, a slight change allows cases where we’d like to target a message at a particular handler. This is useful when we want to directly affect something from another part of the code that we don’t want coupling with, without introducing abstraction layers. Example:

My test rendering app needs to modify a renderable directly, by setting a bunch of tiles. One option is to introduce a new message, TilesChangedInRenderable(tiles, renderable), but then we’d have both a TilesChanged(tiles) message AND a TilesChangedInRenderable(tiles, renderable). To avoid doing the same for classes other than renderables, and since the Renderable is a MessageHandler anyway, I made the above adjustment: we can always optionally provide an explicit handler. If one is provided, the message is only handled by message propagators (e.g. a System) and the handler in question; otherwise it is handled by everyone registered to listen for that message type (see the usage sketch below).
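A usage sketch of the two dispatch modes (the Dispatch entry point and cRenderable are hypothetical names; cEntityCreated is the message shown above):

cEntity entity;
cRenderable renderable; // some cMessageHandler subclass

// Broadcast: handled by everyone registered for cEntityCreated::msType.
Dispatch(cEntityCreated(entity));

// Targeted: handled only by propagators (e.g. a System) and `renderable`.
Dispatch(cEntityCreated(entity, &renderable));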

Shader parameters

Disclaimer: rendering is always in flux – I’m trying to get something generic, extensible and easily editable working together, and it’s no easy feat.

Summary of rendering so far:

  • The currently running application state renders its root widget
  • Each widget contains body and margin renderables (2 different)
  • Each widget can contain a modal widget, or if it’s a container widget, other widgets
  • Some widgets add more renderables: e.g. textbox also has a text renderable
  • Renderables are pretty much rendering configurations, and store a reference to a renderer and to their widget owner
  • Renderers use shaders and contain rendering logic
  • A renderer renders a single renderable type, a renderable can be rendered by several renderer types

Before, the configuration was via explicit parameters in an inheritance chain. While explicit, it’s a PAIN to add parameters, as it’s all compile-time. So I ditched that approach for a far more generic one. Now every renderable stores, among other things:

  • A list of references to textures
  • A list of dynamic textures, along with a message updater for each
  • A list of texture buffers, along with a message updater for each
  • A reference to a blending configuration
  • A list of shader variables, organized as:
    • a vector of (name, value) pairs, for every common shader type (int, float, int2, float4, etc)
    • a vector of (name, texture_buffer_index)
    • a vector of (name, texture_index)
    • a vector of (name, dynamic_texture_index)

So far, this is looking flexible and I like it. Of course it’s far from optimal, but it is optimal for prototyping, and that’s what matters now. For performance, variables could be organized in uniform buffer objects of varying frequency of updates, etc, but that’s far down the line.

Above there’s a screen from the modification of the A* visualizer to operate on graphs — only minimal changes were needed to the existing infrastructure:

  • There is a new renderer instance of the type GridSparseSelectionRenderer — it’s used for rendering lines.
  • There are a few renderables: for the node points, the start point, the goal points (horribly inefficient, of course: I might as well draw all points at once and assign per-instance colors, but that’s not the point here), the edges, and the edges that are part of the output path.
{ "gs_nodes" : { 
    "@factory" : "RenderableTileGridWidgetSelection",
    "Renderer" : "GridSparse",
    "TextureBuffers" : [
        {
            "first" : {"format" : "rg16i", "usage" : "DynamicDraw", "Flyweight" : false, "max_elements": 2000}, // let memory be initialized at first update
            "second" : "TileSelectionChangedToTextureBuffer"
        }
    ],
    "ShaderParams" : {
        "g_Color" : {"type" : "color", "value" : [0,0,255,100]},
        "g_Tiles" : {"type" : "texture_buffer", "value" : 0}
    }
}},
{ "gs_edges" : { 
    "@factory" : "RenderableTileGridWidgetSelection",
    "Renderer" : "GridSparseLine",
    "TextureBuffers" : [
        {
            "first" : {"format" : "rg16i", "usage" : "DynamicDraw", "element_size" : 2, "Flyweight" : false, "max_elements": 2000}, 
            "second" : "TileSelectionChangedToTextureBuffer"
        }
    ],
    "ShaderParams" : {
        "g_Color" : {"type" : "color", "value" : [128,128,128,255]},
        "g_LineThickness" : {"type" : "float", "value" : 1.0},
        "g_LinePoints" : {"type" : "texture_buffer", "value" : 0}
    }
}},
{ "gs_edges_path" : { 
    "@factory" : "RenderableTileGridWidgetSelection",
    "Renderer" : "GridSparseLine",
    "TextureBuffers" : [
        {
            "first" : {"format" : "rg16i", "usage" : "DynamicDraw", "element_size" : 2, "Flyweight" : false, "max_elements": 2000}, 
            "second" : "TileSelectionChangedToTextureBuffer"
        }
    ],
    "ShaderParams" : {
        "g_Color" : {"type" : "color", "value" : [128,255,128,255]},
        "g_LineThickness" : {"type" : "float", "value" : 2.0},
        "g_LinePoints" : {"type" : "texture_buffer", "value" : 0}
    }
}},
{ "gs_start" : { 
    "@inherit" : "gs_flashing",
    "ShaderParams" : {
        "g_Color" : {"type" : "color", "value" : [255,0,0,255]}
    }
}},
{ "gs_goals" : { 
    "@inherit" : "gs_flashing",
    "ShaderParams" : {
        "g_Color" : {"type" : "color", "value" : [0,255,0,255]}
    }
}},