Weekly Update – February 4, 2023

Architecture and code design for procedural history generation is 75% complete. History generation is a feature that seemed simple at an abstract level but has proven very difficult at a concrete level. It wasn’t part of the original vision; I considered it early on and chose to exclude it because I didn’t think it was necessary. My opinion changed only recently, when I realized that dungeon stocking needed to be more contextual. Now history generation is a must-have, though I’m still uncertain how well it will work in practice.

Next week, I’ll start coding history generation.

Weekly Update – January 20, 2023

  • Map generation optimization. I knew as I wrote Map Generator 2.0 that some of the code was horribly slow and wasteful and would need to be optimized later. Map generation had ballooned to 10-15 seconds and 0.5 GB of memory. It’s now down to 3-5 seconds and 200 MB of memory, and there’s much more room for improvement. The main optimization techniques were converting LINQ statements to for loops and reducing the use of temporary lists. One example involved how connections between rooms are stored. I started with one list that stored all original connections between rooms. Then I created a new list for connections that formed loops and another new list for connections that joined sections. I used separate lists instead of the original list because I needed to do different things with the items in each list, and it was more expedient to create new lists (though a little voice inside my head was telling me to slow down and do it the right way). I added a fourth list when I realized I needed to track each connection in every room that used it (as opposed to only the room that originated the connection). Because it was sometimes necessary to get all of the connections, I created a property that combined all four lists into one new list. Yikes. The allocations… The solution was to combine the lists into one and add an attribute indicating the type of connection (a sketch follows this list). This caused far more rework, and far more troubleshooting of issues introduced by the rework, than I anticipated. At least the rework made the code simpler and easier to understand, which is always beneficial.
  • Movement optimization. Enabling actor actions to be displayed simultaneously exposed a problem: the movement code took so long to run that actors instantly moved to the next cell rather than moving incrementally over multiple frames. Linear interpolation is used to calculate how far an actor moves each frame, with the actor’s movement speed and the elapsed time since the last update as inputs (also sketched after this list). I ran the Unity profiler and identified the main causes: dynamic lighting and excessive Unity log calls. The log calls are easy enough to deal with; they won’t be in production releases. Dynamic lighting, which uses the Smart Lighting 2D asset, is a dilemma. I want to keep it in the game, but I’m not sure how much I can optimize it. Temporarily disabling the lighting and logging fixed movement between two cells, but there was still an issue when moving across multiple cells: actors momentarily stopped at each new cell before moving again. I had seen this before and knew the source: the state logic in the Update method caused some frames to skip movement. For example, an Update call would determine that all actions had finished in the previous Update and would advance the turn state, but movement wouldn’t resume until the next Update. With nested state logic (there are turn phases, action phases, and action step phases), several frames passed before movement resumed. This was resolved by modifying the state logic to process state changes in the same update when applicable. For example, when an action step finishes, the logic now starts the next action step in the same update.
  • Displaying actor actions simultaneously. I reverted the changes I made last week to present actor actions simultaneously. It became clear that an enormous amount of rework was needed to separate action logic and presentation. Fortunately, a much simpler solution had been right in front of me the whole time: asynchronous actions. Instead of waiting for each action to finish being presented, I simply start each action at the same time. I didn’t consider this initially because one actor’s actions can affect another; I believed that all actors’ actions had to be resolved before any of them could be presented. For example, if the player hits an enemy and that enemy dies, the enemy shouldn’t be able to move. I still had to make some modifications to get this working, such as checking that an actor is still alive before presenting its action, and reserving the cells actors are moving toward before they arrive (so that other actors don’t attempt to move into the same cell).
  • Pathfinding improvement. Over time, I’ve changed my mind on how actors interact with other actors and objects that are diagonally adjacent. I may change my mind again, but what’s certain is that there needs to be a way to allow interactions with adjacent objects in ordinal or cardinal directions, depending on the object. Currently, a melee attack can be performed from an adjacent diagonal cell, but opening a door cannot. Until this week, the latter wasn’t possible because of a limitation in the pathfinding code: since actors can move diagonally and the pathfinding code finds the shortest route, paths ended at a cell diagonal to the cell containing the object being interacted with. The fix was to change the path destination based on the interaction range of the object. An object with a range of 1 can only be interacted with if the actor is adjacent to it in a cardinal direction.
  • Better debugging statements. It just occurred to me that I’ve written a lot of bad debugging statements. I typically add debugging statements while troubleshooting a particular issue. They make sense in the context of that issue, but not on their own. Without context, they do more harm than good because they increase cognitive load, which is already high from being in troubleshooting mode. I improved these statements by adding more relevant state information to them. In some cases I also rearranged the statement so that its subject (actor, item, etc.) comes first, which makes it easier to skim the debug log.
  • Inspector improvements for ScriptableObjects using Odin. To reap the full benefit of Odin, I added Odin attributes to all classes inheriting from ScriptableObject. These objects are now easier to view and edit in the Unity Inspector.
  • Duplicate door bug fix. Doors recently stopped opening when they were clicked. Actually, most doors didn’t open, but a few did. I reviewed the pertinent code but couldn’t find a problem. I started a game and right-clicked on a door to open the Inspect Panel, which shows everything in the cell. Nothing appeared to be out of the ordinary, and the door opened when I clicked it. Then I clicked another door. This one didn’t open. I opened the Inspect Panel and found the problem: there were two doors in the cell. It turns out that the recent change to track connections between rooms in both connected rooms caused most doors to be added twice. The fix was trivial; I just had to skip door creation for the duplicate connections.
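
Two quick sketches of the optimizations above. First, the consolidated connection storage from the map generation bullet; the names here (Connection, ConnectionType, Room) are illustrative, not the actual classes in the project:

    using System.Collections.Generic;

    public enum ConnectionType { Original, Loop, SectionJoin }

    public class Connection
    {
        public Room From;
        public Room To;
        public ConnectionType Type;
    }

    public class Room
    {
        // One list instead of four; filter by type when a specific kind is needed.
        public readonly List<Connection> Connections = new List<Connection>();

        // Fills a caller-provided list rather than allocating a new temporary list.
        public void GetConnections(ConnectionType type, List<Connection> results)
        {
            for (int i = 0; i < Connections.Count; i++)
            {
                if (Connections[i].Type == type)
                    results.Add(Connections[i]);
            }
        }
    }

Second, the per-frame movement interpolation from the movement bullet, in its simplest form; the field names and units are assumptions:

    using UnityEngine;

    public class ActorMover : MonoBehaviour
    {
        public float MoveSpeed = 4f;    // assumed unit: cells per second
        public Vector3 Destination;     // world position of the destination cell

        private void Update()
        {
            // Move a small increment each frame instead of snapping to the destination,
            // scaled by movement speed and the time elapsed since the last frame.
            transform.position = Vector3.MoveTowards(
                transform.position, Destination, MoveSpeed * Time.deltaTime);
        }
    }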

Next week, I’ll further optimize map generation. Possibly, I’ll start coding the procedural history generation, which I’ve been slowly designing over the past month.

Weekly Update – December 30, 2022

At the close of 2022, three of the four Map Generation 2.0 objectives are complete:

  1. Structuring – layout of walls and floors in rooms, corridors, caverns, and other shapes (done)
  2. Sections – map partitioned into separate areas with discrete structures, themes, and content (done)
  3. Data-driven stocking – replace the existing hardcoded dungeon stocking with a data-driven implementation (done)
  4. Node pattern-based stocking – identify the best locations on the map to place specific types of content using node patterns on the map graph (in progress)

There’s also an aspirational fifth objective, which is to use generated background stories for each map to select, place, and customize content.

I’ve been working on map generation exclusively for the past two months. It’s been fun and challenging but I’m starting to feel burned out on it. I need to complete objective 4 and switch over to something else.

Accomplishments this week:

  • New Room Type Map Elements: Barracks, Bedchamber, Bone Pile, Corpse
  • New Objects: Beds (Plain, Dirty, Fancy), Prison Door, Blood Fountain
  • Data-driven Map Elements. Map Elements, the procedural generation objects used to stock the dungeon, are now data-driven. Previously, a class was created for each Map Element. For the most part, Map Elements define which objects go in a room and where those objects are placed, so a separate class was needed for each room type. Now, new room types can be created from the Unity Inspector (see the sketch after this list). With this new capability, I was able to quickly recreate the class-based Map Elements and add some new ones. Odin has been an incredible tool for this.
  • Expanded object placement capabilities. Objects can now be placed in grids with randomized element sizes, rows, and columns. Objects can also be placed in clusters, which is useful for objects that typically appear next to each other, such as barrels. Parameters have been added to some existing placement patterns for more flexibility; for example, corners can now be offset relative to the edge or the center of the room. Placed objects can now be grouped, with conditions for placing groups (such as minimum room size). Groups can be configured to place all of the objects they contain or a single, randomly selected object. These improvements provide many more ways to populate rooms in the dungeon.
  • Section-based structure. Map sections can now have distinct structures. For example, catacombs sections have longer corridors and smaller rooms.   
  • Map generation performance improvement. The recent additions and changes to map generation really slowed it down. A quick and impactful fix was adding some caching. There’s more work to be done here, but the current performance is tolerable again.
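
To make the data-driven Map Elements concrete, here’s a rough sketch of what a room-type asset might look like. All names and fields are hypothetical; the real assets support more (groups, conditions, offsets, and so on, as described above), plus Odin attributes layered on top:

    using System.Collections.Generic;
    using UnityEngine;

    public enum PlacementPattern { Corners, Edges, Grid, Cluster, Random }

    [System.Serializable]
    public class ObjectPlacement
    {
        public GameObject ObjectPrefab;      // bed, bookcase, barrel, etc.
        public PlacementPattern Pattern;
        public int MinCount = 1;
        public int MaxCount = 1;
    }

    // One asset per room type, created and edited in the Unity Inspector.
    [CreateAssetMenu(menuName = "Map Elements/Room Type")]
    public class RoomTypeMapElement : ScriptableObject
    {
        public string RoomName;
        public int MinRoomSize;
        public List<ObjectPlacement> Placements = new List<ObjectPlacement>();
    }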

Next week, I’ll start on the node pattern-based dungeon stocking. I believe the design work is done; I’ve cataloged many patterns using the graphs generated during testing. Some pattern recognition already exists, as it was implemented before I had the big picture. For instance, sequences of rooms and required rooms are identified during generation.

Weekly Update – June 3, 2022

In my relentless pursuit of increased software development productivity, I started the week off pondering what was slowing me down the most. I kept coming back to aspects of object-oriented programming – encapsulation, abstraction, inheritance/composition, polymorphism. OOP has always been a double-edged sword for me, providing both solutions and problems. Certainly some of my issues are the result of my shortcomings as a developer, but I believe there are inherent shortcomings in OOP as well. A frequent challenge is determining where things belong, and a frequent source of bugs is putting things in the wrong place. I began questioning whether data and functionality belonged together in the same class (I was quite deep into the rabbit hole at this point) and whether I could reduce complexity by separating the two. I also considered making data and functionality, once separated, completely public (I know, OOP heresy) and using either immutable or versioned data. I googled these ideas to see what already existed and found something very close: Data-Oriented Programming (DOP). Now, it would be impractical to go back and rewrite 2+ years of code using a DOP paradigm. But I’m going to experiment with it for some of the new code I’m writing (see the AI example below).
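
To illustrate the kind of separation I have in mind, here’s a minimal sketch: plain, public data in one place and free-standing functions that operate on it in another. The names and fields are made up for the example:

    // Data: a plain, public container of state with no behavior attached.
    public sealed class ActorData
    {
        public int Health;
        public int MaxHealth;
        public int X;
        public int Y;
    }

    // Functionality: free-standing functions that read and transform the data.
    public static class ActorLogic
    {
        public static ActorData ApplyDamage(ActorData actor, int amount)
        {
            // Returning a new copy instead of mutating is the immutable/versioned variant.
            return new ActorData
            {
                Health = System.Math.Max(0, actor.Health - amount),
                MaxHealth = actor.MaxHealth,
                X = actor.X,
                Y = actor.Y
            };
        }
    }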

  • AI Overhaul part 2. I thought I was done with AI rework after last week, but I put even more time into it this week. To make the new composition-based AI configurable in the Unity editor, I added AIType classes (implementing the Type Object pattern) that inherit from ScriptableObject. I also made the pluggable components of AIType, such as the observation and action deciders, ScriptableObjects. The legacy AI classes were gutted and consolidated. AI state data was moved into a separate generic data structure (see below), and AI functionality was moved into the AIType classes. I added general AI behaviors such as offense and flee, and mapped actions to the behaviors. This simplifies the action decider code because only the behavior has to be specified; the behavior class returns all of the applicable actions to the action decider. With these improvements, I can assemble AIs in the Unity editor, provided that the pluggable components have been written. I may need to move to data-driven behavior trees if the AI logic becomes too complicated, but for now I’ll stick with conditional statements.
  • Generic Data Structure. To support my data-oriented programming experiment, I created a class to act as a general-purpose data container (sketched below). It’s essentially a map data structure, but it contains three dictionaries to store values of different types (bools, ints, and objects). It’s not sophisticated, but it works. I’m now using it to store AI state data, which varies by AI type. The syntax for accessing data within the structure is more cumbersome than individually defined variables, but that drawback is outweighed by flexibility and ease of serialization/deserialization. I also like that the syntax makes it obvious which variables are part of the state.
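
A minimal sketch of what that container looks like; the method names are approximations rather than the exact API in my code:

    using System.Collections.Generic;

    public class DataStore
    {
        private readonly Dictionary<string, bool> bools = new Dictionary<string, bool>();
        private readonly Dictionary<string, int> ints = new Dictionary<string, int>();
        private readonly Dictionary<string, object> objects = new Dictionary<string, object>();

        public bool GetBool(string key) => bools.TryGetValue(key, out var value) && value;
        public void SetBool(string key, bool value) => bools[key] = value;

        public int GetInt(string key) => ints.TryGetValue(key, out var value) ? value : 0;
        public void SetInt(string key, int value) => ints[key] = value;

        public T Get<T>(string key) where T : class =>
            objects.TryGetValue(key, out var value) ? value as T : null;
        public void Set(string key, object value) => objects[key] = value;
    }

    // Usage (hypothetical keys): the syntax is clunkier than fields, but it's obvious
    // that these values are part of the AI state, and the whole thing serializes easily.
    //   aiState.SetBool("IsFleeing", true);
    //   int turns = aiState.GetInt("TurnsSinceLastSawPlayer");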

Next week’s goals are the same as last week’s goals: add the vampire and 1-2 more enemies to test the new AI, and add a few new abilities.

Weekly Update – May 28, 2022

I started to add a new enemy this week: the vampire. This revealed a problem with my AI framework. Vampires have the same move and attack behavior as a normal enemy, but they have some additional behaviors as well. For instance, they can change into a bat and will use that ability when their health is low to temporarily flee and regenerate health. Prior to vampires, the AI framework worked fine. Each time I needed to give an actor a different behavior, I’d simply add a new AI class. I had a single enemy AI class, a neutral NPC class, and a few classes for actors that do something but aren’t sentient, like fire and gas. This worked because the logic for each class was completely different. The vampire AI broke the pattern because it needed some of the standard enemy behaviors plus its own unique behaviors. I spent a couple of days thinking about how to handle this. The solution came to me when I identified the pieces of the AI class that needed to change with each enemy: choosing which actors to track, choosing which observations to react to, and choosing an action from a list of potential actions. I defined interfaces for each of these (sketched below) and created standard enemy and vampire implementations. I extracted shared logic, such as determining all of the potential attacks an enemy has, into new classes so that the logic could be reused. I reduced the enemy AI class to the logic that was applicable to all enemies, which was mainly state management. I can now easily add new enemy behaviors without having to replicate code.
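
Roughly, the three interfaces look something like this. The names and signatures are approximations for illustration, and the supporting types are placeholders:

    using System.Collections.Generic;

    // Placeholder types so the sketch stands on its own.
    public class Actor { public int Health; public int MaxHealth; }
    public class Observation { }
    public abstract class GameAction { }

    // Chooses which actors this AI keeps track of.
    public interface ITargetTracker
    {
        IEnumerable<Actor> ChooseActorsToTrack(Actor self, IEnumerable<Actor> visibleActors);
    }

    // Chooses which observations the AI reacts to.
    public interface IObservationDecider
    {
        bool ShouldReactTo(Actor self, Observation observation);
    }

    // Chooses one action from the list of potential actions.
    public interface IActionDecider
    {
        GameAction ChooseAction(Actor self, IReadOnlyList<GameAction> potentialActions);
    }

    // The vampire can reuse the standard enemy tracker and observation decider, and swap in
    // its own action decider, e.g. one that picks the bat-form ability when health is low.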

Rework can be a discouraging exercise, especially this far into a project. It doesn’t add anything to the game from a player standpoint. It doesn’t concretely move the project closer to the finish line, though there’s an expectation that it will save time in the long run. It can feed self-doubt (if I had written good code the first time around, I wouldn’t have needed to rework it). There’s a risk of over-engineering or building capabilities that you’ll never need. In this case, I almost scrapped my entire AI framework and considered implementing it using a Unity asset, Opsive Behavior Designer. I actually bought the asset and read the documentation. It seems like a great tool that provides a visual designer for AI behavior trees. It also supports utility AI within a behavior tree, which is essentially what my AI framework currently does in code. However, I decided to rework my existing code instead because it would take less time.

With the AI rework filling up the week, the vampire wasn’t completed. I should be able to finish adding it easily next week. I’ll also add one or two more enemies to test out the reworked framework, and I’ll do the same with abilities, which recently went through a similar rework to support new types.

Weekly Update – May 14, 2022

Momentum is picking up after a couple of slow weeks puzzling out the abilities architecture. With the architecture determined, I was able to implement the ability that triggered the architecture exercise in the first place, Heavy Strike. This ability is a melee attack that does more damage, uses a different animation and sound effect, and shakes the screen more. I wanted to be able to reuse the existing melee attack action for this ability. The solution was to add more parameters to the melee attack action and set the parameters from the Heavy Strike ability, which is a Unity ScriptableObject.
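
As a rough illustration of that setup, here’s the general shape; the field names are a guess at a generic version, not the actual class:

    using UnityEngine;

    // One asset per melee-style ability; the shared melee attack action reads these values.
    [CreateAssetMenu(menuName = "Abilities/Melee Ability")]
    public class MeleeAbility : ScriptableObject
    {
        public string DisplayName = "Heavy Strike";
        public float DamageMultiplier = 2f;
        public AnimationClip AttackAnimation;
        public AudioClip AttackSound;
        public float ScreenShakeIntensity = 1.5f;
    }

    // Hypothetical call site: the existing melee attack action stays generic and is
    // parameterized by whichever ability asset triggered it.
    //   meleeAttackAction.Execute(attacker, target, ability.DamageMultiplier,
    //       ability.AttackAnimation, ability.AttackSound, ability.ScreenShakeIntensity);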

After finishing Heavy Strike, I started working on another ability, Charge. This ability performs two actions in a sequence, moving the player and attacking a target. It’s one of many abilities that perform multiple actions. This presented a new dilemma – actions were designed to be executed once per actor per game turn. The rework required to enable multiple actions per actor per turn was significant. I found a better alternative: Action Steps. Action Steps are now the basic building blocks of actions. They enable a series of actions to be performed within a single game turn action and make creating new actions and abilities a lot easier. Creating the Action Steps involves extracting code from existing actions. This is in progress. So far, Action Steps have been implemented for selecting a cell and shooting a projectile.

To control the execution sequence of Action Steps, I introduced another new object, Action Phases. Action Phases define the sequence in which Action Steps are performed. Each Action Phase contains one or more Action Steps. Action Steps within the same Action Phase are performed concurrently. This allows Action Steps to be performed sequentially, in parallel, or with a combination of the two. Some actions, such as pushing and pulling, require parallel execution of Action Steps. 
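
Structurally, the relationship between actions, phases, and steps looks something like this. It’s a sketch; the actual execution and timing logic is more involved:

    using System.Collections.Generic;

    // The smallest building block: started once, then updated each frame until finished.
    public abstract class ActionStep
    {
        public bool IsFinished { get; protected set; }
        public abstract void Begin();
        public abstract void Tick();
    }

    // Steps in the same phase run concurrently; the phase finishes when all steps finish.
    public class ActionPhase
    {
        public readonly List<ActionStep> Steps = new List<ActionStep>();

        public bool IsFinished
        {
            get
            {
                for (int i = 0; i < Steps.Count; i++)
                    if (!Steps[i].IsFinished) return false;
                return true;
            }
        }
    }

    // Phases run sequentially. Charge, for example, would be a move phase followed by
    // a melee attack phase, all within a single game turn.
    public class GameAction
    {
        public readonly List<ActionPhase> Phases = new List<ActionPhase>();
    }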

Next week, I’ll build a few more Action Steps (moving and attacking), which will allow the Charge ability to be created. I should be able to quickly add some more abilities. I’ll also post the pixel artist ad in a few new places.

Weekly Update – May 7, 2022

It was an OK week. I had the same issue as the previous week – nothing got done during the weekdays.

  • Abilities architecture overthinking. I’m spending way too much time figuring out the architecture for abilities. I have a vision for a clean, highly-extensible, Unity editor-driven solution but I haven’t been able to get there. This weekend I’m forcing myself to finish this and move on. There are plenty of ways for me to accomplish this goal that aren’t perfect but are workable.
  • Saving/Loading working again. Saving/loading has been a pain point throughout development. Many changes break it, causing me to wonder if I’m going about it in the correct manner. Since I don’t test saving/loading often, there are always a few issues to fix when I do, and they’re usually not simple fixes. Testing saving/loading every time I make a change would be too cumbersome; maybe I’ll add automated testing of this feature in the future. Anyway, saving/loading is working again. Since I’m reworking the code less these days, I expect to have fewer saving/loading issues in the future.
  • Rework – cell selection support in actions. Many actions (melee attack, ranged attack, open, take, drink, etc.) require a target. In the main game context, the target is determined by where the player clicks. In other contexts, such as clicking an item in the hotbar, the target is determined by prompting the player to select a cell. Early on, I built the cell selection handling into the ranged attack action. I knew I’d have to extract it eventually, but I wanted to get ranged attacks working quickly and I wasn’t clear on all the other potential uses of cell selection. This week I moved the cell selection code into its own class and made it easy to use from any action (see the sketch after this list).
  • Fixed some non-fatal, recurring exceptions. I got tired of seeing the same errors in the logs on every playthrough. They didn’t actually break anything in the game, but they cluttered the log. I eliminated the most common ones.
  • Posted pixel artist ad on r/GameDevClassifieds. I still need to post on Pixel Joint and 1-2 other places.
  • Wrote the first half of a post on time-tracking. Last week I stated that I was going to post the findings from meticulously tracking my game dev time over the past four weeks. There are some interesting observations but I realized that I need a larger sample size for some of those observations to be meaningful. I’m going to collect another four weeks of data and reassess if I have enough for a post.
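
Referring back to the cell selection rework above, the extracted class is roughly shaped like this. It’s a simplified, hypothetical version using callbacks; the actual implementation differs in the details:

    using System;
    using UnityEngine;

    public class CellSelector : MonoBehaviour
    {
        private Action<Vector2Int> onSelected;
        private Action onCancelled;

        private void Awake() => enabled = false;   // idle until a selection is requested

        // Any action can request a cell and get a callback with the result.
        public void RequestCell(Action<Vector2Int> selected, Action cancelled = null)
        {
            onSelected = selected;
            onCancelled = cancelled;
            enabled = true;
        }

        private void Update()
        {
            if (Input.GetMouseButtonDown(0))
            {
                Vector3 world = Camera.main.ScreenToWorldPoint(Input.mousePosition);
                var cell = new Vector2Int(Mathf.FloorToInt(world.x), Mathf.FloorToInt(world.y));
                enabled = false;
                onSelected?.Invoke(cell);
            }
            else if (Input.GetKeyDown(KeyCode.Escape))
            {
                enabled = false;
                onCancelled?.Invoke();
            }
        }
    }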

Next week, there will definitely be some new abilities in the game.

Weekly Update – November 20, 2021

I put limited time into game dev this week, and a large portion of the time went into some difficult code design/architecture thinking rather than coding itself.

  • New Map Elements: Library, Alchemy Chamber, Armory. These are new room types that contain objects arranged in simple patterns (tables, bookcases, weapon racks) and applicable random assortments of items. I want to improve the variety of the object placement patterns. I sketched out some patterns for the library (below) to visualize the end result and work backwards to develop the generation logic. It got me wondering about the feasibility of an alternative approach: training a machine learning algorithm to generate patterns from a set of example patterns. I’m going to research this next week.
    [Image: Library patterns]
  • AI 2.0 Design. Last week’s addition of AI states and actor responses to game events has necessitated more rework than anticipated. The original AI was player-centric; other actors only cared about what the player was doing. The end goal was always to allow actors to respond to a variety of events, but I limited the initial implementation to player events for simplicity. I’m now modifying the design so that actors can potentially act on any event. This requires some optimization as well because, for each event, a check needs to be performed to determine whether each actor notices the event. It’s further complicated by the fact that each event may be detected by sight or by hearing (a sketch of the check follows).
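
The notice check boils down to something like the sketch below. The types and thresholds are hypothetical; the real version has to be cheap because it runs for every actor for every event:

    using UnityEngine;

    public class GameEvent
    {
        public Vector2Int Cell;     // where the event happened
        public float Loudness;      // how far the sound carries, in cells
        public bool IsVisible;      // whether the event can be seen at all
    }

    public class Actor
    {
        public Vector2Int Cell;
        public float SightRange = 8f;
    }

    public static class Perception
    {
        public static bool Notices(Actor observer, GameEvent evt)
        {
            float distance = Vector2Int.Distance(observer.Cell, evt.Cell);

            // Hearing: loud events are noticed within their radius, line of sight or not.
            if (distance <= evt.Loudness)
                return true;

            // Sight: visible events are noticed within sight range with a clear line of sight.
            return evt.IsVisible
                && distance <= observer.SightRange
                && HasLineOfSight(observer.Cell, evt.Cell);
        }

        private static bool HasLineOfSight(Vector2Int from, Vector2Int to)
        {
            // Placeholder; the real check walks the grid cells between the two points.
            return true;
        }
    }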

Next week, I’ll continue working on the AI 2.0 code.

Weekly Update – July 16, 2021

Title Screen

One Release 3 feature completed this week: the Title Screen! It will likely completely change after I bring an artist onboard, but it gets the job done.

[Image: Title Screen]

Finished Actor Refactoring

I dug myself into a deep hole last week by refactoring actors. Every unit test failed and there were a couple hundred compiler errors to fix. I’ve mostly climbed my way out of the hole since then, and I think the actor architecture is now solid enough to get to the finish line. I now have:

  • Actors
    • The main actor class is a plain C# class for all actors. It contains all actor state and is therefore the only class involved in actor saving and loading.
  • Actor Types
    • A GameObject prefab is defined for each Actor Type. These prefabs are loaded into memory when the game starts. They use composition in a limited manner, typically having only Transform, SpriteRenderer, and Animator components. When a new actor is created, the corresponding Actor Type GameObject is instantiated and associated with the actor. 
    • A ScriptableObject asset is defined for each actor type’s definition data. Composition is employed here as well, though it is not supported “out of the box” by Unity, at least not in the way I’m using it. The technique is to add a field to the ScriptableObject whose type is a parent class or interface, and create custom editors that let an inherited class (or implementing class, in the case of interfaces) be selected from a dropdown. Reflection is used to get all of the subclasses and populate the dropdown. When an actor is created in the game, Activator.CreateInstance is used to instantiate the selected class (see the sketch after this list). This allows me to define an actor’s AI and abilities, for example, in the editor instead of in code.
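
The reflection piece of that looks roughly like the sketch below; the custom editor code that actually draws the dropdown is omitted, and the class name is made up:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class SubclassUtility
    {
        // Finds all concrete subclasses (or implementations) of T so a custom editor
        // can list them in a dropdown.
        public static List<Type> GetSubclassesOf<T>()
        {
            return AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(assembly => assembly.GetTypes())
                .Where(type => typeof(T).IsAssignableFrom(type) && type.IsClass && !type.IsAbstract)
                .ToList();
        }

        // Instantiates whichever class was selected in the editor, e.g. a specific AI
        // or ability implementation, when the actor is created in the game.
        public static T CreateInstance<T>(Type selectedType) where T : class
        {
            return Activator.CreateInstance(selectedType) as T;
        }
    }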

This isn’t an elegant solution, but it addresses the things that were bothering me about the previous architecture, namely redundant type data in each actor instance, having to use MonoBehaviours or ScriptableObjects for composition but not being able to easily save/load component state data, inadequate information hiding, circular dependencies, and unclear division of responsibilities between the different classes comprising actors. The drawbacks of this solution are having to maintain two prefabs for each actor type and not doing composition the “Unity way” with MonoBehaviours.

All Unit Tests Passing, More Unit Tests Added

I’m repeating myself from previous posts, but the unit tests have been well worth the investment in time.

Next week, the plan is to finish the class selection and load game screens. There are still some things that are broken from refactoring and I need to fix those too.

Weekly Update – July 9, 2021

Feeling a bit overwhelmed this week… I had to do some major rework (again) instead of working on new features. I find myself battling with Unity again, specifically over where to put things – prefabs, components/MonoBehaviours, ScriptableObjects, plain vanilla classes. When I first started using Unity a couple of years ago, I tended to write code for everything, because that’s what I was familiar with. As I gained familiarity with Unity, I pushed myself to embrace it and fully leverage the editor capabilities. However, that produced a lot of constraints, and now I’m back to relying on code more (though not as much as in the beginning). Anyway, a lot happened this week due to having a couple of days off:

  • Finished the save system. I mentioned last week on reddit that I was struggling to determine the best way to code the save system in Unity. I ended up pulling all state data out of MonoBehaviours and into plain classes for each object type. Nested objects are supported as well (serializable fields have to be explicitly declared). All objects that need to be saved are nested in the Map class, so saving the game is as simple as serializing that class (see the sketch after this list). Loading is a tad more complicated because, after deserialization, game objects have to be instantiated.
  • A byproduct of the save system was changing actors and items to inherit from the same base class. There’s a lot of commonality between actors and items – they’re in-game objects, they can be damaged, they can have status effects, etc. I was handling this through composition, with attributes spread across multiple MonoBehaviours. Because MonoBehaviours can’t be serialized, capture/restore state code needed to be written for each component. It made more sense to move all state attributes into a single serializable class. This was a case where inheritance made more sense than composition.
  • Another byproduct was pulling health into the base class from a MonoBehaviour. Previously, the Damageable component made an actor or item damageable and tracked its health. However, every actor and item ended up needing this component, so it didn’t need to be optional, and moving health into the base class also supported the consolidation described in the previous bullet. This change broke a lot more than I expected, but fortunately the unit tests helped pinpoint the issues quickly.
  • Test map generator. The map generator can now generate a map with designated layout, actors, and items. This will allow a greater degree of automated testing.
  • New map generation capabilities. Map generation now fully supports different sets of parameters. These parameter sets can be statically predefined in the Unity editor as ScriptableObjects, or dynamically generated in the initial steps of the map generator, enabling additional layers of procedural generation.
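
For reference, a stripped-down sketch of the save structure described in the first bullet, assuming Unity’s JsonUtility (the actual serializer and fields differ):

    using System.Collections.Generic;
    using UnityEngine;

    [System.Serializable]
    public class ActorState
    {
        public string ActorTypeId;
        public int Health;
        public int CellX;
        public int CellY;
    }

    [System.Serializable]
    public class ItemState
    {
        public string ItemTypeId;
        public int CellX;
        public int CellY;
    }

    // Everything that needs to be saved hangs off the map, so one serialize call captures it.
    [System.Serializable]
    public class MapState
    {
        public List<ActorState> Actors = new List<ActorState>();
        public List<ItemState> Items = new List<ItemState>();
    }

    public static class SaveSystem
    {
        public static string Save(MapState map) => JsonUtility.ToJson(map);

        // Loading is the reverse, followed by instantiating GameObjects from the restored state.
        public static MapState Load(string json) => JsonUtility.FromJson<MapState>(json);
    }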

Next week, the goal is to finish the load game and select class screens, and start on the hotbar time permitting.