Messaging

Prelude: the reason this post exists, while I should be testing A* solving on graphs and rendering, is the coupling between rendering and logic. In the logic, I had to manually and explicitly set the data that the renderable – an object that holds a description of a render configuration, e.g. textures, buffers, parameters – would use. I disliked that part, so I wanted proper decoupling, via messages/events.

In the original version of the engine, I was using entityx, which provided an Entity-Component-System implementation along with some event handling. I ended up replacing the whole thing, as it was no longer suitable – everything except the event handling. Alas, the event handling needs to go now too, since it was unfortunately implemented too rigidly for my taste. An example follows:

  • System (being an event receiver) subscribes to an event type (e.g. CollisionEvent)
  • System implements a void receive( CollisionEvent ) that is mapped as the receiver function
  • Entities can emit an event, and the system handles it directly (a sketch of this pattern follows)
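For reference, the old pattern looked roughly like this (reconstructed from memory of entityx's README, so the exact signatures may differ between versions):

#include <entityx/entityx.h>

struct CollisionEvent
{
    entityx::Entity mLeft, mRight;
};

class CollisionSystem : public entityx::System<CollisionSystem>,
                        public entityx::Receiver<CollisionSystem>
{
public:
    void configure(entityx::EventManager& events)
    {
        events.subscribe<CollisionEvent>(*this); // maps receive() for this event type
    }

    void receive(const CollisionEvent& event)
    {
        // handle the collision directly here
    }

    void update(entityx::EntityManager&, entityx::EventManager&, entityx::TimeDelta) override {}
};

// Elsewhere, an emitter would do something like: events.emit<CollisionEvent>(left, right);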

Now that looks fine, but then I ran into scenarios like the following. The game logic says “Hey, I finished with some pathfinding result, in case anybody’s interested”. Event sent. Now we should have receivers that can handle it. For example, our rendering system could listen to that event. The rendering system then needs to know which renderables would be interested in this event, so it can update some render data with it. That’s the point where I realized I’d have to make serious changes to the event handling code to manage this sort of freedom adequately. So, as the library wasn’t that big, I stopped using it and introduced my own solution, which is lightweight and tailored to my needs.

Messages

Messages are simple structs whose members are strictly references, generated from a Python script. For example:

"EntityCreated" : [
    ('cEntity', 'Entity', 'class')
],

generates the following:

class cEntity;

struct cEntityCreated : public cMessage
{
    explicit cEntityCreated( const cEntity& zEntity)
    :cMessage(msType),mEntity(zEntity){}

    const cEntity& mEntity;

    static const int msType = 2;
};

All messages derive from a base “cMessage” class.

There’s a global function to emit messages, which passes each message to all systems. Each system stores a list of message handlers per message type – so, every time a system receives a message, it grabs the appropriate list of message handlers and forwards the message to every one of them. This means the message handlers need to be able to handle a generic message.
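A minimal sketch of what this dispatch could look like – the names here (cSystem, AddHandler, gSystems) are mine for illustration, not the engine’s actual code:

#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

// Minimal stand-in for the engine's base class, storing the runtime type
struct cMessage
{
    explicit cMessage(int zType) : mType(zType) {}
    int mType;
};

using tMessageHandler = std::function<void(const cMessage&)>;

class cSystem
{
public:
    // Register a handler for a given message type
    void AddHandler(int zMsgType, tMessageHandler zHandler)
    {
        mHandlers[zMsgType].push_back(std::move(zHandler));
    }

    // Forward an incoming message to all handlers registered for its type
    void OnMessage(const cMessage& zMsg)
    {
        auto it = mHandlers.find(zMsg.mType);
        if (it == mHandlers.end())
            return;
        for (auto& handler : it->second)
            handler(zMsg); // handlers take the generic base message
    }

private:
    std::unordered_map<int, std::vector<tMessageHandler>> mHandlers;
};

// Global emit: pass the message to every system (the registry is hypothetical)
std::vector<cSystem*> gSystems;

void Emit(const cMessage& zMsg)
{
    for (cSystem* system : gSystems)
        system->OnMessage(zMsg);
}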

Example: a renderable stores, among other things, a list of texture buffers and dynamic textures, which can optionally be used to update GPU data useful for rendering. For instance, we might have a path that we want to render, represented as a set of integer points (x,y) stored in a texture buffer. Another example: we might want to visualize the results of a grid search by putting the gscore values in a dynamic texture.

But we don’t want to hard-code anything, so we specify the format in the configuration file. What I’m aiming for would look like this in JSON:

"texture_buffers" : [
    { 
        "config" : {
            "format" : "rg16i",
            "memory" : 8192,
            "usage" : "DynamicDraw"
        },
        "updater" : { "@inherit" : "MessageUpdaterSearchGridPathToTbo" } 
    }
]

In the above partial definition of a renderable, we say that it will have a single texture buffer with the specified format and a maximum memory of 8 KB, and that it will update its contents automatically using “MessageUpdaterSearchGridPathToTbo”. So, what’s the latter? It’s a very simple polymorphic interface that defines a single function:

template<class Source, class Target>
class cTransformer
{
public:
    virtual ~cTransformer() = default;
    virtual void Transform(Target& obj, const Source& msg) const = 0;
};

Using the type family machinery for easily specifying factories in JSON, we can define classes as follows:

//! SearchGridPath:Path to Tbo
class cMessageUpdaterSearchGridPathToTbo : public cTransformer<msg::cSearchGridResult, gfx::cTextureBuffer>
{
public:
    void Transform(gfx::cTextureBuffer& obj, const msg::cSearchGridResult& msg) const override;
};

//! SearchGridPath:Visited to Tbo
class cMessageUpdaterSearchGridVisited : public cTransformer<msg::cSearchGridResult, gfx::cTextureBuffer>
{
public:
    void Transform(gfx::cTextureBuffer& obj, const msg::cSearchGridResult& msg) const override;
};
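To give an idea of how such a transformer could be hooked up as a message handler, here’s a rough sketch continuing the hypothetical names from the dispatch sketch above (cRenderable, mTextureBuffer and the registration function are illustrative, not the engine’s actual code):

// Hypothetical wiring: when a cSearchGridResult message arrives, feed it
// through the JSON-selected transformer to update this renderable's buffer.
void cRenderable::RegisterUpdater(
    cSystem& zSystem,
    const cTransformer<msg::cSearchGridResult, gfx::cTextureBuffer>& zUpdater)
{
    zSystem.AddHandler(msg::cSearchGridResult::msType,
        [this, &zUpdater](const cMessage& zMsg)
        {
            // Safe downcast: the handler list is keyed on msType
            const auto& result = static_cast<const msg::cSearchGridResult&>(zMsg);
            zUpdater.Transform(mTextureBuffer, result);
        });
}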

So, what do we achieve in the end? Proper decoupling! The C++ code does not specify what will be rendered – it just sends messages like “my grid search was completed” or “my graph search was completed”. The game code doesn’t specify how anything reacts to a message – that’s done via the updater interfaces, which are assigned in JSON. In JSON, renderables specify who’s listening to what, and what actions are taken upon receipt of a message. So, JSON acts like a bunch of cables, while the game logic, the rendering, and the updater specifications are all independent black boxes, knowing nothing of each other. I expect to have this working by next week hopefully, even though things are looking quite busy.

Pathfinding and visualization


The core libraries are now done (or so I thought), so I’m slowly refactoring the game libraries, one component at a time. But since nothing escapes my refactoring extravaganza, I discovered another victim in the core libraries, ready to be refactored mercilessly: the grid/graph pathfinding. My original idea was to make the pathfinders objects that keep some state and run one iteration at a time, as I envisioned that I might want to interrupt an A* calculation halfway through. Well, thinking about it, that’s nonsense, so … refactor away.

Now the pathfinding routine is just a function with 3 arguments (a sketch follows the list):

  • A config struct which, as the name implies, provides configuration/setup for the search execution: start point, goals, heuristics, search space etc.
  • A state struct which maintains the algorithm state: fscore/gscore maps, the priority queue, etc.
  • An output struct which stores the output from the search, such as the path, path costs (optional), visited nodes (optional), etc.
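A sketch of this shape, with hypothetical names and plain node indices standing in for the actual grid/graph types:

#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

using tNode = int; // a grid cell or graph node index, for illustration

struct cSearchConfig
{
    tNode mStart = 0;
    std::vector<tNode> mGoals;
    std::function<float(tNode, tNode)> mHeuristic; // e.g. Manhattan distance
    // ... plus a reference to the search space (grid or graph)
};

struct cSearchState
{
    std::unordered_map<tNode, float> mGScore;
    std::unordered_map<tNode, float> mFScore;
    std::priority_queue<std::pair<float, tNode>,
                        std::vector<std::pair<float, tNode>>,
                        std::greater<>> mOpenList; // min-heap on fscore
};

struct cSearchOutput
{
    std::vector<tNode> mPath;
    std::vector<float> mPathCosts; // optional
    std::vector<tNode> mVisited;   // optional
};

void FindPath(const cSearchConfig& zConfig, cSearchState& zState, cSearchOutput& zOutput);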

To aid debugging, I’ve added a conditionally compiled part that records state and output at every search iteration. After the semantic separation into these structures, recording the state is a breeze: I just keep a vector of states/outputs per iteration, to see their “history” and how they evolve.
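The recording could look roughly like this (the macro and names are hypothetical), reusing the structs sketched above:

#ifdef PATHFINDER_DEBUG
// Snapshots of state/output, one entry per search iteration
struct cSearchHistory
{
    std::vector<cSearchState>  mStates;
    std::vector<cSearchOutput> mOutputs;
};
#endif

// Inside the search loop:
//   #ifdef PATHFINDER_DEBUG
//       zHistory.mStates.push_back(zState);   // copy the full state this iteration
//       zHistory.mOutputs.push_back(zOutput); // and the output so far
//   #endif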

So, with the nicely modularized grid pathfinder function and the newly refactored rendering system, we can implement an application that visualizes pathfinding (inspired by Amit’s super awesome A* page, among others). I’ve made an application that uses ImGui and my widgets (the tilegrid one, to be precise), so that I can dynamically modify weights on the map, as well as start and goal points, heuristics, 4- or 8-connectivity, etc. Additionally, several visualization options are provided as overlays: visited cells, gscore, fscore, the calculated path, start/end points, etc. These are all layers: gscore, fscore and weights exist for all cells, so they are rendered using a dense tile renderer, while start, goals and visited cells are rendered using a sparse tile renderer. The tilegrid widget uses a group renderer that renders all the specified layers. Way too much text and way too little action – here’s a video capture of the visualizer:

[Video: capture of the pathfinding visualizer]

Next thing to do is make sure the graph pathfinder works well, and then continue the refactoring quest, which is totally worth it.