Porting to Unity VII: Sprite rendering

It’s fun to run character simulations and generate results; it’s even more fun when you can actually see the characters. These past two weeks I’ve been writing the sprite rendering system, which doesn’t use GameObjects/MonoBehaviours, Unity spritesheets/texture atlases, etc. It’s all homemade. So, how is it done?


For sprite rendering at this scale, instancing is the only realistic option, plain and simple. We have a very simple geometry (a quad), and we want to render N instances of it, each with its own parameters. The main parameters, for sprites, are:

  • Rect location in texture atlas
  • Number of animation frames (stored sequentially in atlas)
  • Current location in the world
  • Previous location in the world (moving sprites only)
  • Time at previous location in the world
    (moving sprites only)
  • Movement speed (moving sprites only)

From the above one can see that moving sprites need more data; in fact, it turns out to be twice as much. We can also reason that the list of moving sprites needs to be updated far more often, due to the changing positions. Therefore, it makes sense to have two separate entity sprite lists, static and dynamic, each of which contains instance data that will be rendered with Unity’s DrawMeshInstancedIndirect function.

Entity sprite list maintenance

  • Entity enters map: add its default/idle animation to the static list
  • Entity exits map: remove animation from static and dynamic lists
  • Entity teleports: remove animation from dynamic list,
    add its default/idle animation to the static list
  • Entity moves to adjacent location: remove animation from static list, add its moving animation to the dynamic list
  • Every several frames, we go through the dynamic list and check if an animation has finished. If it has, we remove it from the list and add the entity’s default/idle animation to the static list

For the above, when we add an animation, we also write its instance data to a CPU buffer. When we render, the CPU buffer contents are copied to a GPU ComputeBuffer (if anything has changed), which is then sampled in the shader.
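As a sketch (the struct layout, names and buffer management details here are assumptions, not the actual code), the render step looks roughly like this, with the dynamic list handled analogously:

```csharp
using UnityEngine;

// Per-instance data for static sprites; must match the struct declared in
// the shader. Field names and layout here are illustrative.
struct StaticSpriteData
{
    public Vector4 atlasRect;   // rect location in the texture atlas
    public int frameCount;      // animation frames, stored sequentially
    public Vector2 worldPos;    // current location in the world
}

public class SpriteRenderSystem
{
    Mesh quad;                     // the shared quad geometry
    Material spriteMaterial;
    ComputeBuffer instanceBuffer;  // GPU-side copy of the instance list
    ComputeBuffer argsBuffer;      // indirect args: index count, instance count, ...
    StaticSpriteData[] cpuBuffer;  // CPU-side instance list
    int count;
    bool dirty;                    // set whenever the list changes

    public void Render()
    {
        if (dirty)                 // copy CPU contents to the GPU only on change
        {
            instanceBuffer.SetData(cpuBuffer, 0, 0, count);
            dirty = false;
        }
        spriteMaterial.SetBuffer("_Instances", instanceBuffer);
        Graphics.DrawMeshInstancedIndirect(
            quad, 0, spriteMaterial,
            new Bounds(Vector3.zero, Vector3.one * 10000f), argsBuffer);
    }
}
```

The shader then indexes `_Instances` with the instance ID to fetch its atlas rect, frame and position.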

Here’s an example where, every second, we schedule a movement for 10,000 entities. It runs pretty well, except for the last bullet point above, which I’ve disabled for this video. The video also shows an additional GUI sprite rendering layer with highlighted tiles (yellow squares); the hovered tile is shown via a greenish sprite.

Porting to Unity VI: RPG elements, Time

This week I ported another bulk of data, namely RPG-system related stuff such as attributes, skills, skill categories, skill mastery levels and level-up strategies. The data all became ScriptableObjects (as they’re constant assets), and all except the level-up strategies were enum-ified, as it’s quite likely that the code will refer to particular skills/attributes/etc (e.g. if skills[dexterity] > 10, do something).

I’ve added a much-needed global object with all the game configuration parameters (e.g. max character levels, skill points per level, etc). In C++ it was pretty much a dictionary loaded from json to avoid recompiles; here I have it as a ScriptableObject with fixed fields, so access is faster, compilation is fast, and the data are interchangeable.

Another addition is a very simplified form of date/time. I opted not to use C#’s DateTime or TimeSpan, as they are designed for real-world time and therefore do not allow much flexibility. What I’m using instead is a simple long value wrapped in a struct (called TimeUnit), representing microseconds. The struct stores loads of constants and helpers for printing and manipulation, so that in the end a TimeUnit is just an integral type with benefits. 64 bits of microseconds cover a range of roughly 292,000 years, which should be plenty for the game’s time system resolution.
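A minimal sketch of the TimeUnit idea; the constants and the exact helper set here are illustrative, not the real code:

```csharp
// A long of microseconds wrapped in a struct: an integral type with benefits.
public readonly struct TimeUnit
{
    public readonly long Micros;
    TimeUnit(long micros) { Micros = micros; }

    public const long MicrosPerSecond = 1_000_000;
    public const long MicrosPerDay = 86_400 * MicrosPerSecond;

    public static TimeUnit FromSeconds(long s) => new TimeUnit(s * MicrosPerSecond);
    public static TimeUnit FromDays(long d)    => new TimeUnit(d * MicrosPerDay);

    public static TimeUnit operator +(TimeUnit a, TimeUnit b) => new TimeUnit(a.Micros + b.Micros);
    public static bool operator <(TimeUnit a, TimeUnit b) => a.Micros < b.Micros;
    public static bool operator >(TimeUnit a, TimeUnit b) => a.Micros > b.Micros;

    // printing helper: seconds with millisecond precision
    public override string ToString() => $"{Micros / (double)MicrosPerSecond:0.###}s";
}
```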

Porting to Unity V: Danger!

Since last time, I’ve tackled (or not) a few issues.

NO gameobjects/monobehaviours in the game database / ECS.

First and most importantly, I’ve made the decision to represent entities with a custom, very lightweight class that does not derive from any Unity type. The reason is that I don’t want to interact with Unity more than necessary, and especially not derive from classes that carry a lot of baggage. I’m happy to manage entity lifetimes myself and bite the bullet in terms of inspector “niceties”, rather than build on GameObjects and then, once they are tangled deep in the game code, realize that it’s actually a terrible idea and gets too slow. So that’s that.

Entities are represented now with custom classes, and to avoid potential infinite serialization depth that Unity is not happy about, I use weak references. So, instead of:

city entity -> city component -> owned mines (entities) -> mine entity -> membership component -> owner entity -> city entity -> …

… which causes an infinite loop, I use the following:

city entity -> city component -> owned mines (entity references)

so the chain is broken there. The references are just IDs that are looked up in a pool. So that’s that. In the meantime, I’m trying to make the inspector show the referenced entity’s contents in place of the raw reference, but currently I’m not doing a good job. Revisit later.
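A minimal sketch of the reference-by-ID idea; Entity and EntityPool here are stand-ins for the real classes:

```csharp
using System.Collections.Generic;

// A "weak reference" is just an ID resolved through a pool,
// so serialization sees a plain int and no cycles.
public class Entity
{
    public int Id;
    // components, owned entity references, etc.
}

public struct EntityRef
{
    public int Id;                                   // serializes as a plain int
    public Entity Resolve(EntityPool pool) => pool.Get(Id);
}

public class EntityPool
{
    readonly Dictionary<int, Entity> entities = new Dictionary<int, Entity>();
    public void Add(Entity e) => entities[e.Id] = e;
    public Entity Get(int id) => entities.TryGetValue(id, out var e) ? e : null;
}
```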

Danger maps

This is simply the danger map calculation (what level of encounters one might find at a certain point on the overworld map), which involves distance field calculations to cities and routes and some Poisson sampling, both of which I’m computing in C++ via a native plugin. So, performance remains nice, which is nice. Here it is, alongside the biome/routes map. Red is dangerous, blue is safe.


Implementing visualizers in Unity is a breeze, and I love being able to hover over things and see data, data, data, whether as pixels or as values in the inspector. I captured a video to show what this looks like. I love it 🙂


Last but not least, I ported the RPG-related datafiles (skills, attributes, skill masteries, etc). So, next work is going to be adventure location generation, and maybe character generation and levelling.

Porting to Unity IV: Pathfinding

Fairly straightforward work this week: porting pathfinding code from C++ to C#. The first two ports were of the basic low-level pathfinding functionality, namely A* on a grid (SearchGrid) and on a graph (SearchGraph). This is the code shown in this post.
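For reference, grid A* can be sketched as follows; this is a compact version with a Manhattan heuristic, 4-connectivity and linear open-list scans, not the tuned ported implementation:

```csharp
using System;
using System.Collections.Generic;

public static class SearchGrid
{
    public static List<(int x, int y)> FindPath(
        bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var open = new List<(int x, int y)> { start };
        var g = new Dictionary<(int, int), int> { [start] = 0 };   // cost so far
        var parent = new Dictionary<(int, int), (int, int)>();
        int H((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);

        while (open.Count > 0)
        {
            int best = 0;   // pick the open node with the lowest f = g + h
            for (int i = 1; i < open.Count; ++i)
                if (g[open[i]] + H(open[i]) < g[open[best]] + H(open[best])) best = i;
            var cur = open[best];
            open.RemoveAt(best);

            if (cur == goal)   // reconstruct the path by walking parents back
            {
                var path = new List<(int x, int y)> { cur };
                while (parent.TryGetValue(cur, out var p)) { cur = p; path.Add(cur); }
                path.Reverse();
                return path;
            }

            foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                var n = (x: cur.x + dx, y: cur.y + dy);
                if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h || !walkable[n.x, n.y])
                    continue;
                int ng = g[cur] + 1;
                if (!g.TryGetValue(n, out var old) || ng < old)
                {
                    g[n] = ng;
                    parent[n] = cur;
                    if (!open.Contains(n)) open.Add(n);
                }
            }
        }
        return null;   // no path exists
    }
}
```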

The other ports were of the higher-level wrappers that utilize both of the above. There’s a “basic” wrapper and an “overworld” wrapper. The basic wrapper serves as a reference to evaluate other wrappers; it uses a dense grid, so it takes the longest but returns the best results.

The overworld wrapper is the one in this post.

The performance currently, without any sort of profiling, is about 1ms per overworld path in the Unity Editor, where I’m working all the time. Compared to 0.22ms for Release/C++ and 3.37ms for Debug/C++, that’s not bad at all!

Porting to Unity III: Caching 2D Arrays

So far, I’ve ported the biome map, resource map, city basics, faction generation, and a few more things. They all pale in comparison to the effort I’ve put into a seemingly simple process:

“Ok, we generated a large 2d array of data after a lot of calculations, now how do we save and load it from disk?”

A lot of issues and pitfalls resulted from this question and the associated work, which has lasted almost 2 weeks. So:

How do we represent Array2D?

Question number one. Enchanted by the simplicity of SomeType[,], I started using that. There are a few issues:

  • Serializers have issues with multidimensional arrays; there’s a zoo of serialization options out there, each with its own little problems and idiosyncrasies. Ugh
  • Indexing is [y,x], like Matlab (and, I presume, other tools). I can understand why, but for my computer graphics brain and muscle memory this is a no-no (I’ve always been using [x,y] and [width,height]) and could be a cause for a billion bugs.
  • It cannot be cast to a 1D array, so operations applicable to 1D arrays are not applicable here. This would mean rewriting code for the 2D case (e.g. max element in array, max value in array, etc etc)

So, I decided to nip it in the bud early and use my own class, which is a simple wrapper over a 1D array and thus plays along with everything. Hooray. For access it uses [x,y] or [Vector2Int].
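A minimal sketch of such a wrapper (the real class also has a [Vector2Int] indexer and more helpers):

```csharp
// Flat 1D storage with [x,y] access: serializer-friendly, and the raw
// array can be reused for generic 1D operations (max element, etc).
public class Array2D<T>
{
    readonly T[] data;
    public readonly int Width, Height;

    public Array2D(int width, int height)
    {
        Width = width; Height = height;
        data = new T[width * height];
    }

    public T this[int x, int y]      // [x,y] indexing, row-major
    {
        get => data[y * Width + x];
        set => data[y * Width + x] = value;
    }

    public T[] Raw => data;          // expose the 1D array for generic operations
}
```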

Array2D for structs and classes

Yet another potential gotcha, with structs vs classes. Everybody keeps hammering “use structs for immutable types”, and for a good reason. With my simple Array2D wrapper, it’s impossible to do the following with a struct Foo:

var foo = new Array2D<Foo>(2,2);
foo[0,1].someField = 32; // Forget about it.

What we’ve done here with foo[0,1] is return a copy of the element and set .someField on that copy, which is immediately destroyed. We can’t set just someField; we need to replace the entire struct, because we can’t return references to structs in Unity’s C#. So, the best alternative is to disallow this altogether, otherwise hello insidious bugs. We can add an extension method for the array (e.g. At(int x, int y)) that restricts the type to be a class and returns a reference.
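A sketch of the class-only accessor; Array2D here is a minimal stub of the wrapper described earlier, and the generic constraint turns the struct case into a compile error instead of a silent bug:

```csharp
public class Array2D<T>   // minimal stub of the flat-array wrapper
{
    readonly T[] data;
    public readonly int Width, Height;
    public Array2D(int w, int h) { Width = w; Height = h; data = new T[w * h]; }
    public T this[int x, int y]
    {
        get => data[y * Width + x];
        set => data[y * Width + x] = value;
    }
}

public class Foo { public int someField; }   // a class this time, not a struct

public static class Array2DExtensions
{
    // Returns the reference itself, so in-place mutation works;
    // Array2D<SomeStruct>.At(...) simply does not compile.
    public static T At<T>(this Array2D<T> arr, int x, int y) where T : class
        => arr[x, y];
}
```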

How do we serialize Array2D?

That’s easy, right? Use the Unity facilities, add a few [Serializable] attributes, and we’re good to go. Well, not so easy. What-a-waste. Using the standard serialization facilities, we serialize every object individually, and that contains a crapload of redundancy: a simple 512×512 map of uint32 takes 12 megabytes instead of 1. Lovely. The “Biome” map class that uses enums (it used to be bitfields) now takes 60 megabytes. No thanks.

So I went on a journey to find a nice way to do cheap serialization for at least these use cases, without using some XYZ serialization library that will break with the next Unity update, go unsupported, or have other weird limitations. Therefore I wrote some code to serialize classes into a stream of bytes, which was a pain on its own. Error-prone? Yes, but at least testable. Fast? Oh yes. Space-efficient? Oh yes. Using the binary serializer, I went from 60MB down to 1MB, and serialize/deserialize performance improved by 30x. So, back to reasonable-land again. The only “issue” is that I have to implement an interface for every new type that can be used in a cached Array2D, so no big deal really.
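A hedged sketch of the interface-based binary approach; the interface, type and method names here are illustrative:

```csharp
using System.IO;

// Each type that can live in a cached Array2D implements this.
public interface IBinarySerializable
{
    void Write(BinaryWriter w);
    void Read(BinaryReader r);
}

public class BiomeCell : IBinarySerializable   // hypothetical example type
{
    public uint flags;
    public void Write(BinaryWriter w) => w.Write(flags);
    public void Read(BinaryReader r)  => flags = r.ReadUInt32();
}

public static class BinaryCache
{
    // 512 * 512 cells of 4 bytes each serialize to ~1MB of payload,
    // with no per-object overhead.
    public static void Save<T>(T[] items, BinaryWriter w) where T : IBinarySerializable
    {
        w.Write(items.Length);
        foreach (var item in items) item.Write(w);
    }

    public static void Load<T>(T[] items, BinaryReader r) where T : IBinarySerializable
    {
        int n = r.ReadInt32();
        for (int i = 0; i < n; ++i) items[i].Read(r);
    }
}
```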

Random: System or Unity

So this is the last thorn. I started with Unity’s Random, as it’s perfectly usable, but there’s only a single global instance, and that’s not future-proof: I’d like a bit more control (maybe multiple RNGs, alternate algorithms, etc). So, I started extending System.Random with convenience functions. So far so good, but I ended up in a several-hour bughunt where I’d create a copy of the RNG, replace the original during a caching operation, and then keep using the (by now stale) copy, leading to different behaviour with seemingly identical parameters. Takeaway: don’t make copies of the RNG; keep references to it.


Next in the port party are the pathfinding, movement and territory systems, plus some visualizations to make sure everything is working as expected.

Porting to Unity II : ECS basic form

Immediately after the biome and resource map generation, next is the city placement. We’re getting near the first “interaction” with the entity system, as at the end of this step we should have city entities. Therefore, it becomes apparent that there needs to be basic ECS machinery in place.

ECS using ScriptableObject

Doubt No 1: Custom machinery for efficient component access.

In the C++ version of the entity system, each component type had its own storage vector, and the entity system did the following to access a component:

  • Use the entity ID to look up its component-index data, then use the required component ID to check whether the entity has a valid index for that component; if it does, use the retrieved index to look up the component in its storage

So this involves a few lookups/jumps; I wouldn’t call it too optimized or cache-friendly.
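Transliterated to C# for illustration (the names and layout are assumptions), the lookup chain looks roughly like this:

```csharp
// Three dependent lookups: entity -> component index -> component storage.
public class EntitySystem
{
    public const int InvalidIndex = -1;

    // per-entity array of component indices, one slot per component type
    public int[][] componentIndices;       // [entityId][componentTypeId]
    // one storage "vector" per component type
    public object[][] componentStorage;    // [componentTypeId][index]

    public T GetComponent<T>(int entityId, int componentTypeId) where T : class
    {
        int idx = componentIndices[entityId][componentTypeId];  // lookups 1 and 2
        if (idx == InvalidIndex) return null;                   // entity lacks it
        return (T)componentStorage[componentTypeId][idx];       // lookup 3
    }
}
```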

Doubt No 2:  Entities adjust their components at runtime.

Typical ECS allows for runtime composition of components, using the aforementioned machinery. Adding a component is as simple as allocating it and storing the index in the “component index” entry of the entity. The question is, when do we need that? The answer depends on how granular the components are, and how dynamic some of these component types can be. From my theoretical design so far, adding components dynamically is a solution to a future problem, while it creates other problems now: problems such as opportunities for nonsensical component combinations (a city having a character sheet, an adventure location having a movement component, etc).

Components as ScriptableObjects

Reading about Unity, one reads a lot about “the tyranny of MonoBehaviour” and how heavyweight MonoBehaviours are. At the same time, the consensus seems to be “use the fancy new ScriptableObjects”. So, wanting to go the suggested route, and because components should just store data, ScriptableObjects become a quite reasonable solution (so far) to that issue.

Entities as ScriptableObjects, interfaces and components

C# is an interface-happy language, so I decided to go that route. Here’s the current line of thought:

  • The base Entity class now is a ScriptableObject
  • The entity-has-such-and-such-components composition is implemented via interfaces
  • There are subclasses like CityEntity, DungeonEntity, CreatureEntity, etc. These subclasses implement the interfaces they need

The Entity types and interfaces code is generated from a json file and looks like:

namespace cmp // namespace for components
{
    public class Location : ScriptableObject
    {
        public Vector2Int tile;
        public int mapId;
    }
    // more components ...
}

// interfaces for accessing components
interface ICity { cmp.City City(); }
interface IAdvLoc { cmp.AdvLoc AdvLoc(); }
interface ICharSheet { cmp.CharSheet CharSheet(); }
interface IController { cmp.Controller Controller(); }
interface IMembership { cmp.Membership Membership(); }
interface ILocation { cmp.Location Location(); }
interface IMovement { cmp.Movement Movement(); }

public class Entity : ScriptableObject {}

public class CityEntity : Entity, ICity, ILocation
{
    private cmp.City city;
    public cmp.City City() { return city; }
    private cmp.Location location;
    public cmp.Location Location() { return location; }
}
Now we can check whether an entity called someEntity has a location component simply via “someEntity is ILocation”. The above code was autogenerated from Python using a small dictionary:

entity_data = {
    "Creature" : ["Movement", "Location", "CharSheet", "Membership", "Controller"],
    "City" : ["City", "Location"],
    "AdvLoc" : ["AdvLoc", "Location"],
    "Level" : ["Location"],
}
To wrap up the ECS basics we still need to define the systems and where they live, as well as message handling, but that’s for next time. Also, in the future I’ll possibly need to sort out access so that it’s generally read-only, but that might be tricky.

Porting to Unity

Over the holidays, I’ve been thinking about the party system and its complications, and tried to move on with more code. I caught myself: before doing any fun stuff, I still have to write boilerplate code for class reflection (printing out structs, reading from json, serializing for save files, etc). This is very cumbersome and not automated, unless I only write class definitions through code generators. I’ve actually written 3 code generators so far: for message definitions, action commands and components. So, the lack of reflection in C++ ends up hurting quite a bit, and I like data-driven development, which is painful without reflection.

So, given that I recently started using Unity again for work (I’d had some brief encounters before), I thought maybe Unity might solve the unfun issues; of course, I’m not fooling myself that it won’t add other issues. So, I made a list of my woes with the current development “environment”:

  • Lack of reflection: the reason I’ve been implementing (over years now) resource loading, serialization, streaming operators, etc for every single class I use. Currently using:
    • JsonCPP for loading configuration from json
    • Cereal for save/load functionality
    • Streaming operators for string formatting
  • Profiling: the Visual Studio profiler takes a while; I’d have to implement engine hooks myself for something faster and more bespoke. Graphics debugging is not great; apitrace is a nice tool, but still.
  • OpenGL is tricky to wrap. Very tricky. I foolishly didn’t use a wrapper, so minus points for foresight.
  • Cross-platform and web is tricky. And emscripten, as magic as it is, is not exactly trivial to integrate.
  • Compile speed (in my case) is not great. C++ is supposed to be fast to compile, but I think with all the boilerplate and the suboptimal modularization of the code, it has become slow.

Unity is not supposed to be a panacea, but I think it’s going to offer a fresh perspective on how to build a game, same idea but different language, different approach for coding and data management, etc.

C++ with Unity, and a bit of woe

Since I quite like C++, I thought I’d try to use it within Unity, in the form of native plugins. I ran a few experiments successfully, so the question becomes: where to draw the line?

  • Nothing in C++: port everything to C#
  • A few things in C++: esp. algorithms with simple inputs/outputs that run complex/expensive calculations.
  • Most things in C++: This is an approach encountered here.

“Most things in C++” became a no-go quickly when I downloaded the latest github repo, ran the code, and it crashed. Since this is an experimental project and I don’t know much about C#, I think this would do more harm than good.

“Few things in C++” is the current optimistic scenario. If I encounter some code that fits the criteria, I’ll write the interop to C++. To test it, I ported part of the biome map generation code to utilize native plugins: C# runs some GPU passes, then somewhere in the middle we pass some pointers and config info to C++ that runs a CPU pass, returns the results and C# happily continues.

“Nothing in C++” is a real possibility. I’ve spent so-much-time dealing with resource systems, serialization, GUI bindings for all types, etc, and all these come for free with Unity’s inspector, C#’s reflection and Unity’s asset system. So, I need something refreshing at this point, and that’s game-related algorithms and content, rather than engine architecture.

The negative bit: I love bitfields, and neither C# nor Unity makes my life easy with them, so all my careful bit-packing goes to hell. C# doesn’t support bitfields, and Unity generates junk shader code for WebGL when I use bitwise operators. This is a major pain, as many of my shaders use bitwise operators and I can’t afford to go without them. So, problem pending.
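So for now, on the C# side, the packing has to be done by hand with shifts and masks. A sketch, assuming a hypothetical packed tile format:

```csharp
// Manual bit-packing in lieu of bitfields.
// Layout (illustrative): bits 0-7 biome, bits 8-15 resource, bit 16 hasRoute.
public static class TileBits
{
    public static uint Pack(uint biome, uint resource, bool hasRoute)
        => (biome & 0xFFu) | ((resource & 0xFFu) << 8) | (hasRoute ? 1u << 16 : 0u);

    public static uint Biome(uint packed)    => packed & 0xFFu;
    public static uint Resource(uint packed) => (packed >> 8) & 0xFFu;
    public static bool HasRoute(uint packed) => ((packed >> 16) & 1u) != 0;
}
```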

Closing with a port of the biome generator, using a native plugin for a few passes, I must confess that besides the learning curve and some of Unity’s idiosyncrasies, it’s been generally a breeze.


Up to now I’ve been using spdlog for logging. I had decided to have a logger per system, plus a general logger for core library code, plus a general logger for game library code, plus a logger for json loading. Each spdlog logger supports multiple sinks (outputs). One can quickly imagine how things get convoluted/complicated.

I like the idea of log “channels” (i.e. categories); that was the reason for the loggers per system, etc. But I realized that my approach was bloated.

I decided to ditch spdlog for a simple approach, that only uses fmtlib:

  • We have log levels as usual: trace, debug, info, warn, error, fatal, off
  • There are 2 outputs: file and gui. Both are configurable
    • File output is parameterized on filename and most verbose log level
    • Gui output is parameterized on history size (max line num) and default log level
  • File output stores all logs, gui output shows only the selected channels (one or many), which helps wading through the logs
  • Start with a fixed channel list (frequently used), but allow extensions. E.g. core library channels:
    • app: application init/update related logs
    • gfx: graphics subsystem
    • gui: gui subsystem
    • load: json loading subsystem
    • core: everything else
  • Allow runtime channel creation & use

Here’s an example view of the gui console log, using imgui as usual:

Time for Action: Time and Action Management + Fast-Forward

Handling time is coupled with handling actions: The whole point of time management is handling the order of actions. I’ve done some … pre-work in the past here, so many of the concepts still apply. So:

Actions and commands

  • An EntityCommand is the basic “unit” of actions: it’s an instruction for an entity to do something, e.g. “move left in a dungeon”, “teleport there”, “do damage to entity X”, “move north in the overworld”
    • Commands happen instantly, they do not know anything about time/duration.
    • Implementation-wise, commands are stateless functors, that take a bunch of parameters and do some work. In the future, the implementation could be moved into Lua to avoid recompiles and have dynamically editable behavior
    • Commands have two functions: Execute() and OnInterrupt(). If a command that is scheduled to play gets interrupted, we call OnInterrupt instead.
  • An EntityActionConfig stores a handle to a command plus timing and interrupt information: execution/recovery durations and interrupt strength/difficulty class
    • An action happens like this:
      • Wait for execution duration (during this stage, the action can be interrupted)
      • Execute EntityCommand immediately
      • Wait for recovery duration
    • An action can interrupt another action if the interruption strength is greater than the interruption difficulty class. In this case, the execution stage of the target is cancelled and replaced with an “interrupt recovery” duration, which at the moment is half the execution duration, starting from the time of interruption.
    • EntityActionConfigs are set up in json, and are constant throughout the application.
  • An EntityActionData structure stores a handle to an EntityActionConfig plus parameters for the command to be executed.
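The data model above can be sketched like so; the field names and the plain-long durations (microseconds) are my shorthand, not the actual definitions:

```csharp
// Commands are stateless functors; configs are constant (from json);
// EntityActionData pairs a config with its per-use parameters.
public abstract class EntityCommand
{
    public abstract void Execute(object[] args);        // happens instantly
    public virtual void OnInterrupt(object[] args) { }  // called instead, if interrupted
}

public class EntityActionConfig
{
    public EntityCommand command;
    public long executeDurationMicros;    // interruptible stage
    public long recoveryDurationMicros;
    public int interruptStrength;
    public int interruptDifficultyClass;

    // an action interrupts another if its strength beats the target's difficulty class
    public bool CanInterrupt(EntityActionConfig target)
        => interruptStrength > target.interruptDifficultyClass;
}

public class EntityActionData
{
    public EntityActionConfig config;     // handle to the constant config
    public object[] commandParams;        // parameters for the command
}
```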

Time system

At this point, we move on to the TimeSystem, which handles the execution of actions. The TimeSystem stores a set of actions, ordered by execution time. Each entry in the set contains:

  • The entity whose turn it is
  • The time that the entity plays
  • The stage of the entity’s action (just started, execution, recovery, interrupt recovery)
  • An EntityActionData structure, storing what needs to be done with what parameters
  • A reference to the previous entry in the set, of the same entity (e.g. a “recovery” entry would store the “execute” entry)

When an AI entity plays its turn, there are two different things that can happen:

  • We don’t have an action scheduled yet, so we run the AI to figure out what to do next. The AI system is responsible for filling out the EntityActionData structure (what action to execute, with which parameters). A player character would cause this structure to be filled in the same way, via GUI/keys. When we fill in the data, we schedule the execution stage, which will happen after the “execute duration” if it’s not interrupted
  • We have an action scheduled, so we just execute the command and schedule the next turn after the “recovery duration”. If the command fails (e.g. trying to hit an entity that is no longer there), we should not pay the full recovery duration. At the moment the cost is half the recovery duration, but maybe for a failed command the cost should be zero. This is still work in progress and needs real examples (e.g. player tries to move into a wall, etc)
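The time-ordered set and its entries could be sketched as follows; using a SortedSet keyed on (time, entityId) is an assumption about one workable data structure, not necessarily the actual one:

```csharp
using System.Collections.Generic;

public enum ActionStage { JustStarted, Execution, Recovery, InterruptRecovery }

public class TimeEntry
{
    public int entityId;       // whose turn it is
    public long time;          // when the entity plays (microseconds)
    public ActionStage stage;
    public object actionData;  // the EntityActionData described above
    public TimeEntry previous; // e.g. a recovery entry keeps its execute entry
}

public class TimeSystem
{
    // ordered by time, with entityId as a tie-breaker so entries stay unique
    readonly SortedSet<TimeEntry> entries = new SortedSet<TimeEntry>(
        Comparer<TimeEntry>.Create((a, b) =>
            a.time != b.time ? a.time.CompareTo(b.time)
                             : a.entityId.CompareTo(b.entityId)));

    public void Schedule(TimeEntry e) => entries.Add(e);

    public TimeEntry PopNext()    // earliest scheduled turn
    {
        var e = entries.Min;
        if (e != null) entries.Remove(e);
        return e;
    }
}
```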


Fast-forward

Fast-forward refers to the coarse simulation that happens for entities on a different level than the player (or whatever we deem the “active” level). Fast-forward will not be used for overworld entities: the time intervals in the overworld are much larger than in dungeons, so we won’t have performance issues simulating 1000 entities where one action takes a day, whereas in a dungeon a single move could take several seconds.

After a lot of thought and a few test implementations, I’ve decided to keep a single list in the time system for all the game’s active entities. A reasonable question is: what happens to creatures in a level when the player leaves it? We clearly can’t afford to simulate all levels ever generated. On the other hand, it’s not nice to just “freeze” or “reset” the level (though that could be fine for other games). At this point, I’ve thought of (and designed the code to support) the following process: when an AI entity plays its turn while on an inactive level (but not the overworld), it plans a “fast forward” action. Such actions are coarse simulations like “wander around the level”, “go pick up a fight”, “sleep”, “sentry”; each would take hours. So, all entities always play, but the frequency of play is drastically lower for entities in the overworld or in inactive dungeon levels. Maybe we can have even coarser simulations that each lasts days or months, using the same principles.

What’s important is what happens when a level with fast-forwarding entities gets activated. In this case, we interrupt the entities’ actions and execute the OnInterrupt() command, which could, for example, place the entity randomly, or run the same simulation with a different duration (e.g. a “normal day cycle” action scheduled for 3 months that gets interrupted 1 month in gets executed for 1 month).

Several aspects of the fast-forward system will be tested pretty soon, as the simulation will happen in the overworld while the NPC AI delves into dungeons. As NPCs enter the dungeons, the active level will be the overworld and the NPCs will be on inactive levels, therefore playing fast-forward actions like “Clear dungeon level”, “Delve to next level” and “Flee dungeon”.

Next steps

Next time will be the AI revamp, in addition to spawning world events and implementing/writing some EntityCommands and EntityActionConfigs.

Towards NPC simulation

So, now that simulations run acceptably, we need to move on from simulations to (well, simulated) reality. There’s still a big list of todos:

  • Implement turn system (I have an older implementation, so it’s not completely from scratch, maybe)
  • World occasionally generates adventure locations (dungeons)
    • Nearby cities could spread rumours about them
  • Basic city-related AI
    • City occasionally spawns quests (think town hall bulletin, paid work)
    • Factions occasionally spawn quests (guild advancement)
    • Inns occasionally spawn rumours (quests with great rewards that might or might not be true – still thinking about this)
  • Basic NPC AI
    • Actions in a city…
      • Buy rations
      • [?] Rest/heal. But maybe that will be automatic
      • Check city/faction/inn quests.
      • Meet up people at an inn and form parties
    • Actions in a dungeon…
      • Progress with an encounter
      • Flee
    • Actions in the wilderness…
      • Move towards destination (be it a city, a dungeon or an NPC/party)
        • (hmm maybe I need to consider party vs party fight simulation)
    • Actions anywhere…
      • Set destination (dungeon, city, party, etc)
    • Decision making
      • Estimate quest difficulty (expected survival. Also try to see what skills are required and compare to own skills)
      • Estimate quest importance (based on personality and goals. Rogues like gold, Paladins like righteous stuff, etc)
      • Estimate if enough rations for a quest can be purchased
  • Ability to track particular NPCs/parties (in terms of location in world map and logging)


The turn/time system will be first on the list, as actions and AI all utilize the concept of time.