Porting to Unity IV: Pathfinding

Fairly straightforward work this week, porting pathfinding code from C++ to C#. The first two ports were of the basic low-level pathfinding functionality, namely A* on a grid (SearchGrid) and on a graph (SearchGraph); this is the code shown in this post.

The other ports were of the higher-level wrappers that utilize both of the above. There's a "basic" wrapper and an "overworld" wrapper. The basic wrapper is used as a reference to evaluate other wrappers: it uses a dense grid, so it takes the longest but returns the best results.

The overworld wrapper is the one in this post.

The performance currently, without any sort of profiling, is about 1ms per overworld path in the Unity Editor, where I'm working all the time. Compared to 0.22ms for Release/C++ and 3.37ms for Debug/C++, that's not bad at all!
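For reference, here's what grid A* boils down to. This is not the ported C# (SearchGrid), just a minimal Python sketch of 4-connected A* with a Manhattan heuristic; all names are invented:

```python
import heapq
import itertools

def astar_grid(passable, start, goal):
    """Minimal 4-connected grid A*. `passable` is a 2D list indexed as passable[y][x];
    start/goal are (x, y) tuples. Returns the path as a list of cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()            # tie-breaker so heap tuples never compare cells
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from = {}                     # expanded cell -> parent cell
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                   # already expanded with an equal-or-better cost
        came_from[cur] = parent
        if cur == goal:                # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(passable) and 0 <= nx < len(passable[0])
                    and passable[ny][nx] and (nx, ny) not in came_from):
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), next(tie), g + 1, (nx, ny), cur))
    return None                        # frontier exhausted: unreachable
```

The higher-level wrappers would then sit on top of a routine like this, differing only in the cost function and the density of the search space.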

Porting to Unity III: Caching 2D Arrays

So far, I've ported the biome map, resource map, city basics, faction generation, and a few more things. They all pale in comparison to the effort I've put into a seemingly simple process:

“Ok, we generated a large 2d array of data after a lot of calculations, now how do we save and load it from disk?”

A lot of issues and pitfalls resulted from this question and the associated work, lasting almost two weeks. So:

How do we represent Array2D?

Question number one. Enchanted by the simplicity of SomeType[,], I started using that. There are a few issues:

• Serializers have issues with multidimensional arrays; there's a zoo of serialization options out there, each with their own little problems and idiosyncrasies. Ugh.
• Indexing is [y,x] like Matlab (and I presume other things). I can understand why, but for my computer graphics brain and muscle-memory, this is a no-no (I’ve been always using [x,y] and [width,height] ) and could be a cause for a billion bugs.
• Cannot cast to a 1D array, so operations applicable to 1D arrays are not applicable here. This would mean re-writing code for the 2D case (e.g. max element in array, max value in array, etc)

So, I decided to nip this in the bud and use my own class: a simple wrapper over a 1D array, which can thus play along with everything. Hooray. For access it uses [x,y] or [Vector2Int].
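To illustrate the idea (the real class is C#; this Python sketch just shows the flat-storage layout and the [x,y] access convention, with invented names):

```python
class Array2D:
    """Flat-array-backed 2D grid with [x, y] access (illustrative sketch)."""
    def __init__(self, width, height, fill=0):
        self.width, self.height = width, height
        self.data = [fill] * (width * height)   # plain 1D storage

    def __getitem__(self, xy):
        x, y = xy
        return self.data[y * self.width + x]    # row-major: index = y*width + x

    def __setitem__(self, xy, value):
        x, y = xy
        self.data[y * self.width + x] = value

a = Array2D(4, 3)
a[2, 1] = 7
peak = max(a.data)   # 1D operations stay available via the flat backing store
```

Because the backing store is a plain 1D array, anything written for 1D arrays (max, sorting, bulk copies, serialization) applies directly.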

Array2D for structs and classes

Yet another potential source of bugs: structs vs classes. Everybody keeps hammering "use structs for immutable types", and for good reason. With my simple Array2D wrapper, the following is impossible with a struct Foo:

var foo = new Array2D<Foo>(2,2); foo[0,1].someField = 32; // Forget about it.

What we've done here with foo[0,1] is return a copy of the element and set .someField on that copy, which is immediately destroyed. We can't set just someField; we need to replace the entire struct, because we can't return references to structs in Unity's C#. So the best alternative is to disable this altogether, otherwise hello insidious bugs. We can add an extension method for the array (e.g. at(int x, int y)) that restricts the type to be a class and returns a reference.

How do we serialize Array2D?

That's easy, right? Use Unity facilities, add a few [Serializable] attributes and we're good to go. Well, not so easy. What-a-waste. Using the standard serialization facilities, we serialize every object individually, and that contains a crapload of redundancies: a simple 512×512 map of uint32 takes 12 megabytes instead of 1. Lovely. The "Biome" map class that uses enums (it used to be bitfields) now takes 60 megabytes. No thanks.

I went on a journey to find a nice way to do cheap serialization for at least these use-cases, without using some XYZ serialization library that will break with the next Unity update, go unsupported, or have some other weird limitation. So I wrote some code to serialize classes into a stream of bytes, which was a pain on its own. Error prone? Yes, but at least testable. Fast? Oh yes. Space-efficient? Oh yes. With the binary serializer I went from 60MB down to 1MB, and serialize/deserialize performance improved by 30x. Back to reasonable-land again. The only "issue" is that I have to implement an interface for every new type that can be used in a cached Array2D, so no big deal really.
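To make the space argument concrete, here's a minimal sketch of the header-plus-raw-payload idea. The actual serializer is C# and interface-based; this Python format is invented purely for illustration:

```python
import io
from array import array

def save_grid_u32(stream, width, height, values):
    """Write a tiny dimensions header, then the raw uint32 payload."""
    array('I', [width, height]).tofile if False else None  # (see below: tobytes used for streams)
    stream.write(array('I', [width, height]).tobytes())
    stream.write(array('I', values).tobytes())

def load_grid_u32(stream):
    """Read the header, then exactly width*height uint32 values."""
    item = array('I').itemsize                 # typically 4 bytes for 'I'
    header = array('I')
    header.frombytes(stream.read(2 * item))
    width, height = header
    payload = array('I')
    payload.frombytes(stream.read(item * width * height))
    return width, height, list(payload)

buf = io.BytesIO()
save_grid_u32(buf, 512, 512, [0] * (512 * 512))
# 2-int header + 512*512 uint32 payload: ~1 MB where 'I' is 4 bytes,
# versus the tens of MB from per-object serialization
```

The point is simply that a dense array of primitives should cost its raw size plus a constant header, nothing more.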

Random: System or Unity

This is the last thorn. I started with Unity's random as it's usable, but there's only a single global instance, which isn't future-proof: I'd like a bit more control (maybe multiple RNGs, alternate algorithms, etc). So, I started extending System.Random with convenience functions. So far so good, but I ended up in a several-hour bughunt: I was creating a copy of the RNG, replacing the original during a caching operation, and then using the (by now unique) copy, leading to different behaviour with seemingly the same parameters. Takeaway: don't make copies of the RNG; keep references to it.
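The bug is easy to reproduce in any language. A small Python sketch of why RNG copies bite (using random.Random to stand in for System.Random):

```python
import copy
import random

rng = random.Random(1234)
snapshot = copy.deepcopy(rng)   # a copy carries its own private internal state

a = [rng.randint(0, 99) for _ in range(3)]        # advances the original
b = [snapshot.randint(0, 99) for _ in range(3)]   # advances only the copy

# Same seed, same calls, so the two sequences match so far. But from here on
# the objects diverge silently: anything drawn from `rng` is never reflected
# in `snapshot`, so swapping one for the other replays or skips numbers.
```

Keeping a single shared reference (or explicitly round-tripping the state with getstate/setstate when caching) avoids the divergence entirely.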

Next TODO

Next in the port party are the Pathfinding, Movement and Territory systems, plus some visualizations to make sure everything is working as expected.

Porting to Unity II : ECS basic form

Immediately after the biome and resource maps generation, next is the city placement. We’re getting near the first “interaction” with the entity system, as at the end of this step, we should have city entities. Therefore, it becomes apparent that there needs to be a basic ECS machinery in place.

ECS using ScriptableObject

Doubt No 1: Custom machinery for efficient component access.

In the C++ version of the entity system, each component type had their own storage vector, and the entity system did the following to access a component:

• Use the entity ID to look up its component-index data, use the required component ID to check whether the entity has a valid index for that component, and if it does, use the retrieved index to look up the component storage

This does involve a few lookups/jumps, so I wouldn't call it too optimized or cache-friendly.

Doubt No 2:  Entities adjust their components at runtime.

Typical ECS allows for runtime composition of components, using the aforementioned machinery. Adding a component is as simple as allocating a component and storing the index in the "component index" entry of the entity. Question is, when do we need that? Answer is, it depends on how granular the components are, and how dynamic some of these component types can be. From my theoretical design so far, adding components dynamically is a solution to a future problem, while it creates other problems now: opportunities for nonsensical component combinations (a city having a character sheet, an adventure location having a movement component, etc).

Components as ScriptableObjects

Reading about Unity one reads a lot about “the tyranny of MonoBehaviour” and how heavyweight MonoBehaviours are. At the same time, it looks like the consensus says “Use the fancy new scriptable objects”. Therefore, wanting to go the suggested route, and because components should just store data, ScriptableObjects become a quite reasonable solution (so far) to that issue.

Entities as ScriptableObjects, interfaces and components

C# is an interface-happy language, so I decided to go that route. Here’s the current line of thought:

• The base Entity class now is a ScriptableObject
• The entity-has-such-and-such-components composition is implemented via interfaces
• There are subclasses like CityEntity, DungeonEntity, CreatureEntity, etc. These subclasses implement the interfaces they need

The Entity types and interfaces code is generated from a json file and looks like:

namespace cmp // namespace for components
{
    public class Location : ScriptableObject
    {
        public Vector2Int tile;
        public int mapId;
    }
    // more components ...
}

// interfaces for accessing components
interface ICity { cmp.City City(); }
interface ICharSheet { cmp.CharSheet CharSheet(); }
interface IController { cmp.Controller Controller(); }
interface IMembership { cmp.Membership Membership(); }
interface ILocation { cmp.Location Location(); }
interface IMovement { cmp.Movement Movement(); }

public class Entity : ScriptableObject {}

public class CityEntity : Entity, ICity, ILocation
{
    private cmp.City city;
    public cmp.City City() { return city; }
    private cmp.Location location;
    public cmp.Location Location() { return location; }
}

Now we can check whether an entity called "someEntity" has a location component simply with "someEntity is ILocation". The above code was autogenerated from Python using a small dictionary:

entity_data = {
    "Creature" : ["Movement", "Location", "CharSheet", "Membership", "Controller"],
    "City" : ["City", "Location"],
    "Level" : ["Location"]
}
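A generator in this spirit could look like the following. This is a reconstruction for illustration, not the actual generator script, though it emits code in the same shape as the snippet above:

```python
entity_data = {
    "Creature": ["Movement", "Location", "CharSheet", "Membership", "Controller"],
    "City": ["City", "Location"],
    "Level": ["Location"],
}

def gen_entities(entity_data):
    """Emit component-access interfaces plus one Entity subclass per entry."""
    # one interface per distinct component, sorted for stable output
    comps = sorted({c for comps in entity_data.values() for c in comps})
    lines = [f"interface I{c} {{ cmp.{c} {c}(); }}" for c in comps]
    lines.append("")
    for name, comps in entity_data.items():
        ifaces = ", ".join(f"I{c}" for c in comps)
        lines.append(f"public class {name}Entity : Entity, {ifaces}")
        lines.append("{")
        for c in comps:
            field = c[0].lower() + c[1:]     # City -> city, Location -> location
            lines.append(f"    private cmp.{c} {field};")
            lines.append(f"    public cmp.{c} {c}() {{ return {field}; }}")
        lines.append("}")
    return "\n".join(lines)

print(gen_entities(entity_data))
```

Adding a new entity type is then a one-line change to the dictionary, and the interfaces and boilerplate accessors regenerate themselves.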

To wrap up with the ECS basics we still need to define the systems and where they live, as well as message handling, but that's for next time. Also, in the future I'll possibly need to sort out access so that it's generally read-only, but that might be tricky.

Porting to Unity

Over the holidays, I've been thinking about the party system and its complications, and tried to move on with more code. I caught myself: before doing any fun stuff, I still have to write boilerplate code for class reflection (printing out structs, reading from json, serializing for save files, etc). This is very cumbersome and not automated, unless I only write class definitions via code generators. I've actually written three code generators so far: for message definitions, action commands and components. So, the lack of reflection in C++ ends up hurting quite a bit, and I like data-driven development, which is painful without reflection.

So, given that I recently started with Unity again for work (I'd had some brief encounters before), I thought maybe Unity might solve the unfun issues; of course I'm not fooling myself, it will add other issues of its own. So, I made a list of my woes with the current development "environment":

• Lack of reflection: the reason for implementing (over years now) resource loading, serialization, streaming operators for string formatting, etc, for every single class that I use
• Profiling: Visual studio profiler takes a while, I’d have to implement engine hooks myself for something faster & more bespoke. Graphics debugging is not great; apitrace is a nice tool, but still.
• OpenGL is tricky to wrap. Very tricky. I foolishly didn’t use a wrapper, so minus points for foresight.
• Cross-platform and web is tricky. And emscripten, as magic as it is, is not exactly trivial to integrate.
• Compile speed (in my case) is not great. C++ was never famed for fast compiles, and with all the boilerplate and the suboptimal modularization of my code, it has become slow.

Unity is not supposed to be a panacea, but I think it’s going to offer a fresh perspective on how to build a game, same idea but different language, different approach for coding and data management, etc.

C++ with Unity, and a bit of woe

Since I quite like C++, I thought I'd try to use it within Unity, in the form of native plugins. I ran a few successful experiments, so the question becomes: where to draw the line?

• Nothing in C++: port everything to C#
• A few things in C++: esp. algorithms with simple inputs/outputs that run complex/expensive calculations.
• Most things in C++: This is an approach encountered here.

"Most things in C++" became a no-go quickly: I downloaded the latest GitHub repo of that approach, ran the code, and it crashed. Since this is an experimental project and I don't know much about C#, I think this would do more harm than good.

“Few things in C++” is the current optimistic scenario. If I encounter some code that fits the criteria, I’ll write the interop to C++. To test it, I ported part of the biome map generation code to utilize native plugins: C# runs some GPU passes, then somewhere in the middle we pass some pointers and config info to C++ that runs a CPU pass, returns the results and C# happily continues.

“Nothing in C++” is a probability. I’ve spent so-much-time dealing with Resource systems, serialization, GUI bindings for all types etc, and all these are for free with Unity’s inspector, C#’s reflection and Unity’s asset system. So, I need something refreshing at this point, and that’s game-related algorithms and content, rather than engine architecture.

The negative bit: I love bitfields, and neither C# nor Unity makes my life easy with them, so all my careful bit-packing goes to hell. C# doesn't support bitfields, and Unity generates junk shader code for WebGL when using bitwise operators. This is a major pain, as many of my shaders use bitwise operators and I can't afford to go without. So, problem pending.

Closing with a port of the biome generator, using a native plugin for a few passes, I must confess that besides the learning curve and some of Unity’s idiosyncrasies, it’s been generally a breeze.

Logging

Up to now I’ve been using spdlog for logging. I had decided that I would have a logger per system, plus a general logger for core library code, plus a general logger for game library code, plus a logger for json loading. Each logger from spdlog supports multiple sinks (outputs). One can quickly imagine that things get convoluted/complicated.

I like the idea of log “channels” (ie categories). That was the reason for the existence of loggers per system, etc. I realized that my approach was bloated.

I decided to ditch spdlog for a simple approach, that only uses fmtlib:

• We have log levels as usual: trace, debug, info, warn, error, fatal, off
• There are 2 outputs: file and gui. Both are configurable
• File output is parameterized on filename and most verbose log level
• Gui output is parameterized on history size (max line num) and default log level
• File output stores all logs, gui output shows only the selected channels (one or many), which helps wading through the logs
• Start with a fixed channel list (frequently used), but allow extensions. E.g. core library channels:
• app: application init/update related logs
• gfx: graphics subsystem
• gui: gui subsystem
• core: everything else
• Allow runtime channel creation & use
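The channel/sink rules above can be sketched as follows. The real implementation is C++ on top of fmtlib; this Python sketch only illustrates the filtering behaviour, and all names are invented:

```python
LEVELS = ["trace", "debug", "info", "warn", "error", "fatal", "off"]

class Log:
    """One log call, two sinks: file gets everything verbose enough,
    gui additionally filters by selected channels and keeps a bounded history."""
    def __init__(self, file_level="trace", gui_level="info", gui_history=100):
        self.file_lines = []                  # stands in for the file sink
        self.gui_lines = []                   # bounded buffer for the gui console
        self.file_level = LEVELS.index(file_level)
        self.gui_level = LEVELS.index(gui_level)
        self.gui_history = gui_history
        self.gui_channels = {"app", "gfx", "gui", "core"}   # currently selected

    def log(self, level, channel, msg):
        line = f"[{level}] [{channel}] {msg}"
        if LEVELS.index(level) >= self.file_level:
            self.file_lines.append(line)
        if LEVELS.index(level) >= self.gui_level and channel in self.gui_channels:
            self.gui_lines.append(line)
            del self.gui_lines[:-self.gui_history]   # trim to max line num

log = Log()
log.log("debug", "gfx", "created framebuffer")   # file sink only (below gui level)
log.log("warn", "core", "missing resource")      # both sinks
```

Runtime channel creation falls out for free here: an unknown channel simply doesn't show in the gui until it's added to the selected set.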

Here’s an example view of the gui console log, using imgui as usual:

Time for Action: Time and Action Management + Fast-Forward

Handling time is coupled with handling actions: The whole point of time management is handling the order of actions. I’ve done some … pre-work in the past here, so many of the concepts still apply. So:

Actions and commands

• A EntityCommand is the basic “unit” of actions: it’s an instruction for an entity to do something, e.g. “move left in a dungeon”, “teleport there”, “Do damage to X entity”, “move north in overworld”
• Commands happen instantly, they do not know anything about time/duration.
• Implementation-wise, commands are stateless functors, that take a bunch of parameters and do some work. In the future, the implementation could be moved into Lua to avoid recompiles and have dynamically editable behavior
• Commands have two functions: Execute() and OnInterrupt(). If a command that is scheduled to play gets interrupted, we call OnInterrupt instead.
• An EntityActionConfig stores a handle to a command plus timing and interrupt information: execution/recovery durations and interrupt strength/difficulty class
• An action happens like this:
• Wait for execution duration (during this stage, the action can be interrupted)
• Execute EntityCommand immediately
• Wait for recovery duration
• An action can interrupt another action if the interruption strength is greater than the interruption difficulty class. In this case, the execution stage of the target is cancelled and replaced with an “interrupt recovery” duration, which at the moment is half the execution duration, starting from the time of interruption.
• EntityActionConfigs are set up in json, and are constant throughout the application.
• An EntityActionData structure stores a handle to an EntityActionConfig plus parameters for the command to be executed.
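The interrupt rule can be sketched as a tiny function; this is illustrative Python with invented names, not the engine code:

```python
def try_interrupt(strength, difficulty_class, execute_duration, now):
    """Attempt to interrupt an action during its execution stage.
    On success, the remaining execution is replaced by an 'interrupt recovery'
    stage of half the execution duration, measured from the interruption time.
    Returns the resulting stage and the time at which the entity plays next."""
    if strength > difficulty_class:
        return ("interrupt_recovery", now + execute_duration / 2)
    return ("execution", execute_duration)   # unaffected: executes as scheduled
```

So an action with a 10-second execution stage, interrupted 4 seconds in, pays a 5-second interrupt recovery and is back at time 9.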

Time system

At this point, we move onto the TimeSystem, which handles execution of actions. The TimeSystem stores a set of actions, ordered by execution time. Each entry in the set contains:

• The entity whose turn it is
• The time that the entity plays
• The stage of the entity’s action (just started, execution, recovery, interrupt recovery)
• An EntityActionData structure, storing what needs to be done with what parameters
• A reference to the previous entry in the set, of the same entity (e.g. a “recovery” entry would store the “execute” entry)

When an AI entity plays its turn, there are two different things that can happen:

• We don’t have an action scheduled yet, so we run AI to figure out what to do next. The AI system is responsible for filling out the EntityActionData structure (what action to execute, and which parameters). A player character, using GUI/keys would cause this structure to be filled in the same way. When we fill in the data, we schedule the execution stage which will happen after the “execute duration” if it’s not interrupted
• We have an action scheduled, so we just execute the command and schedule the next turn after the "recovery duration". If the command fails (e.g. trying to hit an entity that is no longer there), we should not pay the full recovery duration. At the moment the cost is half the recovery duration, but maybe for a failed command the cost should be zero. This is still work in progress and needs real examples (e.g. player tries to move into a wall, etc)
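The two cases above can be sketched with a priority queue keyed on execution time. Illustrative Python, not the actual TimeSystem; names are invented and the action cycle is simplified to two stages:

```python
import heapq
import itertools

class TimeSystem:
    """Sketch of an action queue ordered by execution time."""
    def __init__(self):
        self._tie = itertools.count()   # keeps heap entries totally ordered
        self._queue = []                # (time, tie, entity, stage, action_data)

    def schedule(self, time, entity, stage, action_data=None):
        heapq.heappush(self._queue, (time, next(self._tie), entity, stage, action_data))

    def advance(self, run_ai, execute):
        """Pop the next turn; either plan via AI or execute and pay recovery."""
        time, _, entity, stage, action_data = heapq.heappop(self._queue)
        if stage == "idle":
            # no action scheduled yet: AI (or the player's GUI) fills in the
            # action data, then execution happens after the execute duration
            action = run_ai(entity)
            self.schedule(time + action["execute_duration"], entity, "execute", action)
        else:  # "execute"
            ok = execute(entity, action_data)
            recovery = action_data["recovery_duration"]
            if not ok:
                recovery /= 2           # failed command: pay a reduced recovery
            self.schedule(time + recovery, entity, "idle")
        return time, entity, stage
```

With one such queue for all active entities, overworld creatures whose actions last a day and dungeon creatures whose moves last seconds coexist naturally: each simply reappears at its own next execution time.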

Fast-Forward

Fast-forward refers to the coarse simulation that happens for entities that are on a different level to the player (or whatever we deem the "active" level). Fast-forward will not be used for overworld entities: time intervals in the overworld are much larger than in dungeons, so there's no performance issue simulating 1000 entities whose actions each take a day, whereas in a dungeon a single move could take several seconds.

After a lot of thought and a few test implementations, I've decided to keep a single list in the time system for all the game's active entities. A reasonable question is: what happens to creatures in a level when the player leaves it? We clearly can't afford to simulate every level ever generated. On the other hand, it's not nice to just "freeze" or "reset" the level (though that could be fine for other games). At this point, I've thought of (and designed the code to support) the following process: when an AI entity plays its turn and it is on an inactive level (but not the overworld), it plans a "fast forward" action. Such actions are coarse simulation like "wander around the level", "go pick up a fight", "sleep", "sentry": these actions take hours each. So, all entities always play, but the frequency of play is drastically lower for entities in the overworld or on inactive dungeon levels. Maybe we can have even coarser simulations that each last days or months, using the same principles.

What's important is what happens when a level with fast-forwarding entities gets activated. In this case, we interrupt the entities' actions and execute the OnInterrupt() command, which could place the entity randomly, or run the same simulation with a different duration (e.g. a "normal day cycle" action planned for 3 months that gets interrupted 1 month in gets executed for 1 month).
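The proportional-duration rule is simple arithmetic; a hypothetical helper:

```python
def fast_forward_elapsed(start_time, planned_duration, now):
    """On interruption, run the coarse action for the elapsed portion only,
    e.g. a 3-month action interrupted 1 month in runs for 1 month."""
    return min(now - start_time, planned_duration)
```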

Several aspects of the fast-forward system will be tested pretty soon, as the simulation will happen in the overworld and the NPC AI will be delving in dungeons. As they will enter the dungeons, the active level will be the overworld and the NPCs will be in inactive levels, therefore playing fast-forward actions like “Clear dungeon level”, “Delve to next level” and “Flee dungeon”.

Next steps

Next up is an AI revamp, in addition to spawning world events and implementing/writing some EntityCommands and EntityActionConfigs.

Towards NPC simulation

So, now that the simulations run acceptably, we need to move on from simulations to (well, simulated) reality. There's still a big list of todos:

• Implement turn system (I have an older implementation, so it’s not completely from scratch, maybe)
• World occasionally generates adventure locations (dungeons)
• Basic city-related AI
• City occasionally spawns quests (think town hall bulletin, paid work)
• Factions occasionally spawn quests (guild advancement)
• Inns occasionally spawn rumours (quests with great rewards that might or might not be true – still thinking about this)
• Basic NPC AI
• Actions in a city…
• [?] Rest/heal. But maybe that will be automatic
• Check city/faction/inn quests.
• Meet up people at an inn and form parties
• Actions in a dungeon…
• Progress with an encounter
• Flee
• Actions in the wilderness…
• Move towards destination (be it a city, a dungeon or an NPC/party)
• (hmm maybe I need to consider party vs party fight simulation)
• Actions anywhere…
• Set destination (dungeon, city, party, etc)
• Decision making
• Estimate quest difficulty (expected survival. Also try to see what skills are required and compare to own skills)
• Estimate quest importance (based on personality and goals. Rogues like gold, Paladins like righteous stuff, etc)
• Estimate if enough rations for a quest can be purchased
• Ability to track particular NPCs/parties (in terms of location in world map and logging)

The turn/time system will be first on the list, as actions and AI all utilize the concept of time.

NPC Party Dungeon Delving and Wilderness Encounter Simulation

As a reasonable follow-up to single NPC dungeon delving simulation, now we test parties of adventurers.

Strength in Unity, Complementary

Parties are stronger than individual NPCs, but without being overpowered. The increase in power is not linear: 5 heroes working together are not 5x as effective as a single hero, but general survival rate is greatly improved. So, here are a few similarities/differences between a single NPC and a party, with regards to mechanics in the coarse simulation layer that I'm currently developing:

• Healing is applied to all members equally
• Damage is reduced based on the number $N$ of party members by $\frac{N-1}{2N}$
• N=1: 0%, N=2: 25%, N=3: 33%, etc.
• XP are divided among party members
• For skill checks: use the max value among members
• For skill category checks: for each member calculate the average of the category’s skills, and use the max among members

With these in mind, it’s clear that it’s beneficial to have parties with complementary skills.
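A quick sketch of these party mechanics (illustrative Python, invented names):

```python
def party_damage(base_damage, n_members):
    """Per-member damage after the (N-1)/(2N) reduction."""
    return base_damage * (1 - (n_members - 1) / (2 * n_members))

def party_skill_check_value(member_skills, skill):
    """Skill checks use the best individual value in the party."""
    return max(m[skill] for m in member_skills)

def party_category_check_value(member_skills, category_skills):
    """Category checks: each member averages over the category; best member counts."""
    return max(sum(m[s] for s in category_skills) / len(category_skills)
               for m in member_skills)

party_damage(100, 1)   # 100.0: no reduction alone
party_damage(100, 2)   # 75.0: 25% reduction
party_damage(100, 3)   # about 66.7: 33% reduction
```

The max-over-members rule in both check types is what rewards complementary builds: a party only needs one strong lockpicker, one strong fighter, and so on.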

Single-Delve Tests Revisited

Here are a few examples that show survival rate for a few party size configurations (1, 2, 6) when we put a party (of any level) against a dungeon of a similar level.

Here are a few examples that show survival rate when we put a lvl 1 party against a series of dungeons of a similar level to the party (dungeons progressively get stronger as party levels up). CR mod is difference of character level to dungeon level, so a crmod of -5 means a level 25 party will tackle a level 20 dungeon.

Wilderness Encounters

Adventurers typically start their adventures at cities. They travel and travel, following well-established routes as much as possible and passing through other cities, until finally they enter full-on wilderness to get to a dungeon to clear. So, with some quick and not-too-inaccurate math: if $X$ is the total distance to the dungeon and $Y$ is the average city-to-city distance, then the distance off roads will be $\min(Y/2, X)$. The split between off-road and on-route is important, as the threat on route is significantly smaller.
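As a sketch of that estimate (hypothetical helper name):

```python
def off_road_distance(total_distance, avg_city_spacing):
    """Expected distance travelled off established routes: min(Y/2, X),
    where X is total distance and Y is the average city-to-city spacing."""
    return min(avg_city_spacing / 2, total_distance)
```

So with cities 30 tiles apart on average, even a 100-tile journey only has about 15 tiles of dangerous off-road travel.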

The tests assume a ±20% challenge rating compared to the character level, and they utilize the dungeon delving simulation code from the previous post, but with 1-3 encounters only.

These examples show the benefits of being in a party. They also demonstrate that survival chances drop with larger distances (though the drop decays sharply), and with higher character levels, as more dangerous encounters get spawned.

Side-effect plot viewer

The generated graphs use several parameter sets, e.g. party size, retreat threshold, etc. It becomes difficult to navigate through the graphs when you want to flip between two arbitrary parameter values. So, instead of researching it further, I wrote a dead simple script that allows "interactive" graph rendering in the cheatiest way: it parses special parameter-encoding filenames, e.g. "wildernesssurvival_retreat2_party5.png", and listens for keypresses. When e.g. 'r' is pressed, it loads 'wildernesssurvival_retreat3_party5.png'; if 'p' is then pressed, it loads 'wildernesssurvival_retreat3_party6.png'. This is actually a lifesaver! And it works with all graphs. Here's the code in all its glory:

import glob
import sys

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def run(img_format, params):

    param_ranges = []       # for each param, a sorted list of its values
    param_cur_indices = []

    def current_image():
        # build the filename from the currently selected parameter values
        values = [param_ranges[i][param_cur_indices[i]] for i in range(len(params))]
        return mpimg.imread(img_format.format(*values))

    def press(event):
        sys.stdout.flush()
        for k in range(len(params)):
            if event.key.lower() == params[k]:
                # lowercase key steps forward, uppercase steps backward
                off = 1 if event.key == params[k] else len(param_ranges[k]) - 1
                param_cur_indices[k] = (param_cur_indices[k] + off) % len(param_ranges[k])
        ax.imshow(current_image())
        fig.canvas.draw()

    all_files = glob.glob(img_format.replace("{}", "*"))
    img_format_const_parts = img_format.split('{}')
    for i in range(len(params)):
        # to extract the number for the i-th param, locate the end of the
        # constant part before it and the start of the constant part after it
        values = []
        for fname in all_files:
            fname = fname.replace('\\', '/')
            idx_start = fname.find(img_format_const_parts[i]) + len(img_format_const_parts[i])
            idx_end = fname.find(img_format_const_parts[i + 1], idx_start)
            values.append(int(fname[idx_start:idx_end]))
        param_ranges.append(sorted(set(values)))

    param_cur_indices[:] = [0] * len(params)

    plt.close('all')
    plt.ioff()

    fig, ax = plt.subplots()
    plt.axis('off')
    fig.canvas.mpl_connect('key_press_event', press)
    ax.imshow(current_image())
    plt.show(block=True)

if __name__ == '__main__':
    img_format = 'C:/Users/Babis/Documents/repos/aot/build/apps/aot_v0/wildernesssurvival_retreat{}_party{}.png'
    params = ['r', 'p']
    run(img_format, params)

Single NPC Dungeon Delving Simulation

The previous week was about simulating the level progression of NPC adventurers. The logical continuation is to pit the generated characters against dungeons, and run simulations for the outcome.

Dungeons: A Series of Challenges

The Tomb Of Horrors.

The Haunted Graveyard.

The Forgotten Crypt.

The Lair of the Werewolf King.

All these are locations where adventures take place, and typically adventurers slay lots of monsters and acquire treasure and artifacts. Since we’re still at a high level of abstraction, instead of creating dungeons and placing monsters, we can simulate the outcome in a simpler way, as a series of challenges:

Adventurer walks in, faces a skill test (e.g. how good is the two-handed skill) or a skill category test (e.g. how good are the combat skills on average), takes damage based on the test result (a scalar rather than a bool) and heals a bit. If the test passes, the adventurer gains XP and proceeds to the next challenge.

A dungeon is configured for such coarse simulations as follows:

• Number of challenges: How many challenges should adventurers succeed in to complete the dungeon.
• Challenge rating:  The difficulty of the dungeon, in terms of character level.
• Skill Challenge Pool: The skills that can be tested against, if the challenge is skill-based.
• Skill Category Challenge Pool: The skill categories that can be tested against, if the challenge is category-based.
• Skill Challenge Chance: The chance of encountering a skill-based challenge rather than a category-based one.

The two challenge pools (skill and skill category) contain subsets of skills/categories, each with a specified DC (difficulty class) modifier as compared to the average for the CR (challenge rating) of the dungeon.  So for example a dungeon could have particularly hard lockpicking tests, or very easy combat.

Adventurers can have their personal “retreat threshold” (aka bravery), so some will flee if their health is below 20%, others when it’s below 5%, others never.

The simulation goes as follows in pseudocode:

for each encounter:
    calculate challenge rating  # progressively harder
    calculate test mastery level base

    test_type = weighted select skill or category
    if test_type == category:
        sample category  # from the list of categories that we can test for this dungeon
        run skill check against adventurer's average skill level in the category
    elif test_type == skill:
        sample skill  # from the list of skills that we can test for this dungeon
        run skill check against adventurer's skill level
    calculate success and apply damage

    if adventurer is dead:
        return status::Death
    if health below retreat threshold:
        return status::Retreat
    if success:
        gain XP, proceed to next encounter
    else:
        flag encounter for retry

return status::Success

For simulation purposes, mana acts as a "mana shield": when mana is available it can be used to block damage at half effectiveness, e.g. at 100 damage with 48 mana left, 24 damage is absorbed, mana goes to zero, and the adventurer takes 76 damage.
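The mana shield rule as a sketch (illustrative Python, invented names):

```python
def apply_damage(damage, health, mana):
    """Mana shields at half effectiveness: 1 point of mana absorbs 0.5 damage."""
    absorbed = min(mana / 2, damage)   # cap at the incoming damage
    mana -= absorbed * 2               # spend 2 mana per point of damage blocked
    health -= damage - absorbed
    return health, mana

apply_damage(100, 200, 48)   # -> (124.0, 0.0): 24 absorbed, 76 taken
```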

I developed two tests to see the simulation in action: single-delve and lifetime-delve.

Single-delve tests

These tests take single adventurers and put them against a single dungeon. Run enough tests at all potential character levels, and we can get an idea of survivability rates at different levels. All characters are generated using the level-up strategies from the previous post.  Below are a few graphs that show the success/retreat/death per adventurer level by varying the general cautiousness of adventurers (CR mod), the number of challenges of the dungeon and their retreat threshold.

Here is a GIF with all graphs, to avoid flooding the page, as there are many many combinations (first retreat value varies, then challenges, then CR mod):

Lifetime-delve tests

These tests take single adventurers starting from level 1 and put them continuously against dungeons until they die or reach level 30. The adventurers pick a dungeon level relative to their own, using a CR modifier (-5 is easier dungeons, up to 0, as anything above is suicide given the previous graphs). Here is a GIF again with all graphs, much less data this time, so easier to follow (retreat varies first, then challenges):

Next time, party time

Clearly the survivability rates are not great, especially at higher levels. So, naturally, parties can and will form, as there is strength in unity. The party simulation will not be too complicated, and should give a reasonable boost to survivability, especially at higher levels.

Finally, there's another wild idea. These simulation results can be exported to JSON, so that when the AI has to make choices about which dungeons to tackle, it can use the graph results. The more the AI knows about a dungeon (CR, number of encounters, etc), the more accurate its survivability estimate will be, utilizing rumors, dungeon lore skills and so on. So it can make a more informed decision.

Another fun idea is to try to use something like tracery (or a home-brewed adaptation) to generate “adventure stories”.

After writing down the extensive catalog of attributes and skills, it’s only natural that we have to create several characters to test things out. Characters (adventurers in this case) can be grouped in terms of their general capabilities and function; in many RPG games this would be a character class. In Age of Transcendence, there are no character classes; NPCs and players develop their skills as they see fit. In this model, “classes” are just suggestions on how a character may develop. Below, I call the classes “archetypes” and the development suggestions “level up strategies”.

An archetype is a configuration to build an infinite set of similar (in some aspects) adventurers. The parameters at the moment are:

• Race list
• Age range
• Alignment list
• Starting level range
• Level-up strategy list

When creating a character from an archetype, we choose a race from the race list, sample an age from the range, sample alignment from the list, sample a level from the range and choose a level up strategy. The interesting and complicated bit is the level-up strategy.
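The sampling above can be sketched as follows; the archetype fields and all values are hypothetical stand-ins, not the game’s actual data:

```python
import random

# A hypothetical thief archetype with the five parameters listed above.
THIEF = {
    "races": ["human", "halfling", "elf"],
    "age_range": (16, 30),
    "alignments": ["neutral", "chaotic"],
    "level_range": (1, 3),
    "strategies": ["classic thief", "burglar"],
}

def create_character(archetype, rng=None):
    """Sample one concrete adventurer from an archetype, as described above."""
    rng = rng or random.Random()
    return {
        "race": rng.choice(archetype["races"]),
        "age": rng.randint(*archetype["age_range"]),
        "alignment": rng.choice(archetype["alignments"]),
        "level": rng.randint(*archetype["level_range"]),
        "strategy": rng.choice(archetype["strategies"]),
    }

adventurer = create_character(THIEF, random.Random(7))
```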

Level-Up Strategy

The level-up strategy is the configuration that the game logic uses to develop characters differently through the levels. For example, a level-up strategy for a fighter would focus on mostly improving strength, and focusing on skills such as body-building, heavy armor and weapon masteries, while one for a thief would focus on agility, daggers and/or bows and stealth skills.

The approach that I’m using is a mix of coarse and fine granularity weights, and it consists of:

• Attribute improvement weights: a weight value per attribute, so that when we want to allocate attribute points, we do weighted sampling.
• Well-roundedness: a scalar specifying how balanced a character will be. A balanced character will improve many attributes and skills, while an unbalanced one will be more of a savant type, focusing heavily on a few skills and ignoring most others.
• Skill focus: A list of tuples (skill name, target mastery level, allow surpassing target mastery level).
• Skill category weights: A list of weights, one per skill category.

When we level up, we first allocate the unspent attribute points based on their respective weights.
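A minimal sketch of that weighted allocation, using the fighter attribute weights from the spreadsheet at the end of the post (the function itself is illustrative, not the game’s code):

```python
import random

def allocate_attribute_points(points, weights, rng=None):
    """Spend each unspent attribute point via weighted sampling over the
    strategy's attribute improvement weights."""
    rng = rng or random.Random()
    spent = {attr: 0 for attr in weights}
    attrs, w = list(weights), list(weights.values())
    for _ in range(points):
        spent[rng.choices(attrs, weights=w)[0]] += 1
    return spent

# Fighter column from the draft spreadsheet below.
fighter = {"STR": 4, "AGI": 2.5, "INT": 1, "PER": 1.5, "CHA": 1}
spent = allocate_attribute_points(10, fighter, random.Random(3))
```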

Immediately after, we allocate a percentage of the unallocated skill points (based on well-roundedness: savants spend more here) to improve skills in the focus list, until we reach the target mastery level. When we reach it, we either never touch the skill again, or, if surpassing the target mastery level is allowed, we still consider it for advancement as explained below.

After we allocate focus skills, we have a remainder of skill points. These are allocated to the rest of the skills, excluding any focus skills that have reached their target mastery and can’t improve further. The weight for each of these remainder skills is the product of a) the skill’s category weight, b) the inverse of the number of skills in the category, and c) the distance of the skill’s value from the value required for the maximum mastery we can achieve with the current attributes. (Note: (b) looks odd, but it’s useful: if we say that Offence (with about 10 skills) is as important as Adventuring (3 skills), a skill in Adventuring becomes about 3 times more likely to get a point.)
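Interpreting (b) as spreading a category’s weight across its skills, which matches the Offence/Adventuring example, the remainder weight can be sketched like this (names and numbers are illustrative):

```python
def remainder_weight(category_weight, skills_in_category, value, max_reachable):
    """Weight of a non-focus skill: category weight, spread over the number
    of skills in the category, times the headroom left before the best
    mastery the current attributes allow."""
    headroom = max(0, max_reachable - value)
    return category_weight / skills_in_category * headroom

# Equal category weights: an Adventuring skill (3 skills in its category)
# ends up ~3.3x more likely to get a point than an Offence skill (10 skills).
adv = remainder_weight(2, 3, 10, 30)
off = remainder_weight(2, 10, 10, 30)
```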

There’s an extra important consideration. Some skills form subcategories, such as all “Weapon style” skills forming the “Weapon Style” subcategory, all “Melee weapon mastery” ones, and so on. In these cases it’s more typical that a character develops one or two of them further than the others, rather than developing the whole selection equally. For this reason, skills in a subcategory get their own round of weighted sampling, so that points concentrate on one or two of them.

Finally, well-roundedness is used every time we do weighted sampling, by replacing each weight with w = pow(w, 2 - well_roundedness). It’s also used for the skill subcategories in the same way.
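In code, the transform looks like this (a tiny sketch, not the actual implementation):

```python
def sharpen(weights, well_roundedness):
    """Apply w -> w ** (2 - well_roundedness) before weighted sampling.
    At 1.0 the weights are unchanged; at 0.0 they are squared, which
    concentrates sampling on the highest-weighted entries (savant types)."""
    return [w ** (2.0 - well_roundedness) for w in weights]

balanced = sharpen([4.0, 1.0], 1.0)  # unchanged: [4.0, 1.0]
savant = sharpen([4.0, 1.0], 0.0)    # squared: [16.0, 1.0], favourite now 16x as likely
```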

Below are some example level-up progression results, plotted with matplotlib. The title shows some info (yes, some of it is bonkers, like neutral paladins). The Y axis is the skill value, with grid lines at mastery levels (30 is Master, 50 is Grandmaster). The X axis shows attributes (first 5) and then skills. Darker colors show values at earlier levels, lighter colors at later levels, as explained in the legend. The images are large, so you might need to open them in a separate window or zoom in.

Still here? Well, there are videos of these progressions too 🙂

Here still? Here’s my spreadsheet with the draft level-up strategy configuration. Well-roundedness seems to have an inverse effect, so I’d say that the graphs were helpful in noticing that 🙂

Attribute weights, skill category weights and well-roundedness per archetype:

| | FighterX | FighterS | Fighter | Thief | Mage | Bard | Ranger | Paladin |
|---|---|---|---|---|---|---|---|---|
| STR | 4 | 4 | 4 | 1 | 1 | 1 | 2 | 3 |
| AGI | 2.5 | 2.5 | 2.5 | 3 | 1.5 | 2 | 3 | 1 |
| INT | 1 | 1 | 1 | 2 | 4 | 2 | 2 | 1 |
| PER | 1.5 | 1.5 | 1.5 | 3 | 2.5 | 2 | 2 | 2 |
| CHA | 1 | 1 | 1 | 1 | 1 | 3 | 1 | 3 |
| [total] | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
| Body/Mind | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Offense | 6 | 6 | 6 | 3 | 1 | 2 | 3 | 5 |
| Defense | 6 | 6 | 6 | 2 | 3 | 2 | 3 | 5 |
| Stealth | 0 | 0 | 0 | 5 | 0 | 2 | 3 | 0 |
| Lore | 2 | 2 | 2 | 2 | 5 | 5 | 3 | 1 |
| Perception | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 3 |
| Crafts | 2 | 2 | 2 | 0 | 3 | 0 | 1 | 1 |
| Magic | 0 | 0 | 0 | 0 | 5 | 1 | 0 | 2 |
| Social | 1 | 1 | 1 | 2 | 1 | 5 | 1 | 2 |
| Adventure | 2 | 2 | 2 | 2 | 1 | 1 | 3 | 1 |
| [total] | 25 | 25 | 25 | 25 | 25 | 25 | 25 | 25 |
| Well rounded | 0 | 1 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |

The skill focus entries lost their column alignment in the export, so each line below lists a skill and the target mastery levels assigned across the archetypes:

• Athletics: Expert+, Expert+, Expert+
• Fortitude: Expert+, Expert+, Expert+, Adept+
• Reflexes: Expert+, Adept+
• Willpower: Expert+
• Concentration: Expert+
• Body building: Expert+, Expert+, Expert+, Adept+
• Meditation: Novice, Novice, Novice, Expert+
• Weapon style [two-handed]: Grandmaster, Master, Master
• Weapon style [one-handed]: Grandmaster
• Weapon style [dual-wielding]: Expert+
• Weapon style [ranged]: Master
• Melee weapon mastery [blunt]: (none)
• Melee weapon mastery [slashing]: Master
• Melee weapon mastery [daggers]: Master
• Melee weapon mastery [polearms]: Expert+
• Ranged weapon mastery [bows/crossbows]: Master
• Ranged weapon mastery [slings/blowpipes]: (none)
• Ranged weapon mastery [thrown]: (none)
• Armor [light]: Master, Expert
• Armor: (none)
• Armor [heavy]: Master, Master, Master, Master
• Shield mastery: Master
• Sleight of hand: Expert+
• Hide: Master
• Lockpicking: Grandmaster
• Move silently: Master
• Item lore: Expert+
• Creature lore: Expert+, Expert
• History and legends: Expert+, Adept
• Dungeon lore: Expert+, Adept
• Arcane lore: Expert+
• Literacy: Expert+, Adept
• Detect traps: Expert
• Spot: Expert
• Listen: Expert
• Sixth sense: (none)
• Disarm traps: Expert
• Repair: (none)
• Cooking: (none)
• Make weapons: (none)
• Make armor: (none)
• Make accessories and utility: (none)
• Enchant item: Expert+
• Alchemy: Expert+
• Wand mastery: Expert+
• Staff mastery: Expert+
• Magic school mastery [command]: (none)
• Magic school mastery [alteration]: (none)
• Magic school mastery [divination]: (none)
• Magic school mastery [creation]: (none)
• Magic school mastery [destruction]: Master
• Leadership: Expert, Master
• Persuasion: Master, Master
• Haggling: (none)
• Renown: Master
• Perform: Grandmaster
• Scouting: Master
• Survival: Grandmaster
• Luck: (none)

(Formatting (bold, colors) is not copied over, unfortunately; I’ll update this if I find out how)