I put limited time into game dev this week, and a large portion of the time went into some difficult code design/architecture thinking rather than coding itself.
New Map Elements: Library, Alchemy Chamber, Armory. These are new room types that contain objects arranged in simple patterns (tables, bookcases, weapon racks) and appropriate random assortments of items. I want to improve the variety of the object placement patterns. I sketched out some patterns for the library (below) to visualize the end result and work backwards to develop the generation logic. It got me wondering about the feasibility of an alternative approach: training a machine learning algorithm to generate patterns from a set of example patterns. I'm going to research this next week.
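The template approach can be sketched in a few lines. This is a minimal Python illustration, not the game's actual C# code; the patterns and tile symbols here are hypothetical stand-ins for the library sketches mentioned above.

```python
import random

# Hypothetical library furniture patterns; '#' = bookcase, 'T' = table, '.' = floor.
PATTERNS = [
    ["#.#.#",
     ".....",
     "#.#.#"],
    ["#####",
     ".....",
     "..T..",
     ".....",
     "#####"],
]

def stamp_pattern(room_w, room_h, rng=random):
    """Pick a pattern that fits the room and stamp it into a floor grid."""
    grid = [["." for _ in range(room_w)] for _ in range(room_h)]
    fitting = [p for p in PATTERNS if len(p[0]) <= room_w and len(p) <= room_h]
    if not fitting:
        return grid  # room too small; leave it bare
    pattern = rng.choice(fitting)
    # Center the pattern in the room.
    ox = (room_w - len(pattern[0])) // 2
    oy = (room_h - len(pattern)) // 2
    for y, row in enumerate(pattern):
        for x, cell in enumerate(row):
            if cell != ".":
                grid[oy + y][ox + x] = cell
    return grid
```

Variety then comes from growing the pattern list and adding per-pattern randomization (mirroring, rotating, swapping which item spawns on each table).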
AI 2.0 Design. Last week's addition of AI states and actor responses to game events has necessitated more rework than anticipated. The original AI was player-centric; other actors only cared about what the player was doing. The end goal was always to allow actors to respond to a variety of events, but I limited the initial implementation to player events for simplicity. I'm now modifying the design so that actors can potentially act on any event. This requires some optimization as well because, for each event, a check needs to be performed to determine if the actor notices the event. It's further complicated by the fact that each event may be detected by sight or hearing.
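One way to keep the per-event perception check cheap is to gate both senses on distance before doing any expensive line-of-sight work. A minimal Python sketch of that idea (the field names and the `has_line_of_sight` callback are hypothetical, not the game's actual API):

```python
import math
from dataclasses import dataclass

@dataclass
class GameEvent:
    x: float
    y: float
    loudness: float  # hearing radius in tiles; 0 for silent events

@dataclass
class Actor:
    x: float
    y: float
    sight_range: float
    hearing_acuity: float  # multiplier on an event's loudness

def notices(actor, event, has_line_of_sight):
    """Cheap distance checks first; line-of-sight only when sight could apply."""
    dist = math.hypot(event.x - actor.x, event.y - actor.y)
    if dist <= event.loudness * actor.hearing_acuity:
        return True  # heard it; hearing ignores walls in this sketch
    if dist <= actor.sight_range and has_line_of_sight(actor, event):
        return True  # saw it
    return False
```

Broadcasting an event then becomes a loop over actors calling `notices`, with the distance tests filtering out most actors before any raycasting happens.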
Next week, I'll continue working on the AI 2.0 code.
I made great progress on improving map generation this week. I almost threw out the structure-first technique I’ve been using to generate levels in favor of a story-first approach. Basically, I wanted to create a procedurally generated backstory for each level and generate the structure and contents from that. That was way too difficult. Maybe in the next roguelike… Also, I wasn’t fully appreciating the advantages of structure-first generation, like efficient use of space.
Restored visual map generation and room graph generation. These features stopped working after the last major refactoring because I modified the startup process. Now that my attention is back on map generation, I needed to get them working again. It was harder than I anticipated because the startup logic is complicated. The game is initiated by events in the Title, Class Select, and Game scenes. Multiple Unity GameObjects perform initialization in the Game scene. Several parameters drive different paths through the initialization: whether visual map generation is on, whether a screenshot of the entire map is captured after generation completes, and whether a map is being generated or loaded. I didn't make real progress until I put the main methods and events down on paper. The logic filled the entire sheet, but being able to see all of it at a glance made the required fixes, and simplifications, obvious. The main reason I needed to do this was that the map image was disappearing before the screen capture completed. Unity's main screenshot method doesn't capture the screenshot immediately. I ended up using the CaptureScreenshotAsTexture method instead and moving the code to an earlier point in the initialization.
Started on Map Elements. “Map Element” is the term I’m using to describe the elements that the map is populated with after the structure is generated. These can be simple objects and enemies, events, room decorators, puzzles, etc. Each Map Element has its own mini-PCG for variation, and constraints defining where it can be placed. The first Map Element I created was “Challenge Reward.” This Map Element finds a two-room sequence and places a difficult enemy in one room and an item in the adjacent room.
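The "Challenge Reward" placement can be illustrated with a small Python sketch. The actual game is in C#, and the graph representation here (a plain adjacency dict) is an assumption for illustration:

```python
import random

def find_two_room_sequence(room_graph, rng=random):
    """room_graph: dict mapping room id -> list of adjacent room ids.
    Returns a random (enemy_room, reward_room) pair of adjacent rooms,
    or None if the graph has no edges."""
    pairs = [(a, b) for a, nbrs in room_graph.items() for b in nbrs]
    return rng.choice(pairs) if pairs else None

def place_challenge_reward(room_graph, rng=random):
    """Place a difficult enemy in one room and its reward in the adjacent room."""
    pair = find_two_room_sequence(room_graph, rng)
    if pair is None:
        return None
    enemy_room, reward_room = pair
    return {"enemy": enemy_room, "reward": reward_room}
```

A real Map Element would also filter the candidate pairs through its placement constraints (room size, distance from the start, no overlap with other elements) before choosing.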
Mandatory room identification. The level generation can now determine which rooms must be traversed to complete the level, regardless of the path taken. This is useful for placing dependent Map Elements, such as a locked door and key. To identify the mandatory rooms, first a depth-first search is done on the room graph to construct all possible paths from the starting room to the ending room. Then, the rooms that exist in each path are identified. In the end, the solution was straightforward, but I spent a lot of time getting to that solution.
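The two steps above, enumerating all paths with a depth-first search and then intersecting them, can be sketched in Python (the game's version is C#; the adjacency-dict representation is an assumption):

```python
def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of all simple paths from start to end.
    graph: dict mapping room id -> list of connected room ids."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt in graph[start]:
        if nxt not in path:  # simple paths only: never revisit a room
            yield from all_paths(graph, nxt, end, path)

def mandatory_rooms(graph, start, end):
    """A room is mandatory iff it appears on every start-to-end path."""
    paths = [set(p) for p in all_paths(graph, start, end)]
    return set.intersection(*paths) if paths else set()
```

On a diamond-shaped graph (two branches that rejoin), only the start, the rejoin point, and the end survive the intersection, which matches the intuition for where a key and locked door can safely go.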
Improved room graphs. Room graphs now show mandatory rooms and linear sequences of rooms starting or ending with a dead-end (good candidates for placing Map Elements).
Next week, I’m going to build some more Map Elements and inject more randomization into level structure (varying room counts, sizes, etc.).
I continued to work on combat this week, but shifted from core mechanics to “game feel.” After adding particle effects and sound effects, combat is much more satisfying. The game is getting closer to being fun. 🙂
This Week’s Achievements
Combat particle effects. I'd never worked with particle effects before. There's a learning curve with Unity's particle effects system, but being able to change settings in the editor and see the effect in real-time helped immensely. I also accelerated my learning by buying a particle effect package from the Unity Asset Store and studying how it worked. I created a few particle effects for when an actor/object is hit with a weapon. The target's physical material determines the particle effect. For example, when a rat is hit it will spray blood, while a skeleton archer will spray bones and bone fragments. Additionally, the size and number of particles vary based on the amount of damage inflicted.
Improved combat sound effects. I was annoyed with the combat sound effects I chose. They were so quiet and boring. I was going to have to either find better assets, increase the volumes of the assets I had, or use the Unity audio mixer to get better sound. I was also running into an issue where the direction the sound was coming from was wrong. But, that issue ended up being a blessing in disguise because it made me realize why I was having low volume issues: the audio listener was attached to the main camera, which was way up above the player. When I attached the listener to the player, all the volume issues went away (though I haven’t fixed the directional issue yet).
Added hooks for additional sound effects, including dying, taking an object, ambient sound, footsteps, and walking on different types of terrain. Footsteps are challenging. I lowered the volume and slightly randomized the pitch of each step to make them less prominent and repetitive.
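The footstep trick is small enough to show inline. This is a Python sketch of the idea (in Unity the equivalent is setting an AudioSource's volume and pitch before playing the clip); the specific numbers are illustrative, not the game's tuned values:

```python
import random

def footstep_params(base_volume=0.3, base_pitch=1.0, pitch_jitter=0.1, rng=random):
    """Return (volume, pitch) for one footstep. A small random pitch offset
    keeps a single repeated sample from sounding mechanical."""
    pitch = base_pitch + rng.uniform(-pitch_jitter, pitch_jitter)
    return base_volume, pitch
```

Swapping in a different `base_volume` per terrain type covers the terrain-specific footsteps as well.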
Right-clicking does something. Right-clicking on an object will display the Inspect Panel for that object. I should’ve done this a long time ago, but it was overlooked because the original target platform was mobile rather than PC.
Mapping the number keys to hotbar slots. Another basic feature I overlooked – allowing items in the hotbar to be used by pressing the corresponding number key.
Added probabilities to triggered game events. For example, when a pile of bones is destroyed, there’s a 50% chance that a ghost will spawn. This provides some unpredictability and more player choice.
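A probabilistic trigger table is simple to sketch. This Python version is illustrative only; the trigger names and probabilities besides the 50% ghost spawn mentioned above are made up:

```python
import random

# Hypothetical trigger table: event name -> (probability, result).
TRIGGERS = {
    "bone_pile_destroyed": (0.5, "spawn_ghost"),
    "vase_smashed": (0.25, "drop_coin"),
}

def resolve_trigger(event_name, rng=random):
    """Roll against the trigger's probability; return the result or None."""
    entry = TRIGGERS.get(event_name)
    if entry is None:
        return None
    probability, result = entry
    return result if rng.random() < probability else None
```

The player-choice angle comes from the probability being visible in the fiction: destroying the bone pile is a gamble the player can decline.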
Framework for player notifications in the Inspect Panel. When a player inspects an object, in addition to the description of the object, I want to communicate important gameplay information. For example, if the player inspects an object that is far away, I want to inform players that they need to be standing next to the object to interact with it. There’s now a framework for collecting notifications from various sources and prominently displaying them on the Inspect Panel.
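The collection step can be sketched as a list of notification sources polled at inspect time. This is a Python illustration with hypothetical names; the real framework is C# and the distance rule shown is just the out-of-reach example from above:

```python
def collect_notifications(obj, player, sources):
    """Run each notification source against the inspected object; keep
    non-None messages. Sources are callables (obj, player) -> str | None."""
    notes = []
    for source in sources:
        message = source(obj, player)
        if message:
            notes.append(message)
    return notes

# Example source (the out-of-reach rule, using Manhattan distance):
def out_of_reach(obj, player):
    if abs(obj["x"] - player["x"]) + abs(obj["y"] - player["y"]) > 1:
        return "You must stand next to this object to interact with it."
    return None
```

New gameplay rules then just register another source; the Inspect Panel never needs to know where a message came from.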
Fixed many combat bugs. There were a surprising number of things that didn’t work or caused crashes. The majority of these bugs were from the last major refactoring.
Next Week’s Goals
Next week, I'll continue working on "game feel" and small refinements that go a long way. I'm way off track on the milestone schedule, but I feel closer to being done than I would have been had I stuck with the planned milestones. The additional content I was previously working on wasn't making the game any better because the core loop was lacking.
Two Release 3 features completed this week: Class Selection Screen and Continue Game Screen.
Only three classes will be available when the game is installed. Additional classes can be unlocked for a total of 16. I haven’t determined how the additional classes will be unlocked yet.
An unlimited number of games can be saved and resumed at a later time. Games are saved automatically when a new game is started and on application exit. A saved game is deleted when it is loaded to prevent save scumming.
Aside from new features, I did some more cleanup from the big actor refactoring two weeks ago.
Next week, I’m starting on the hotbar and doing a lot of design work on spells and abilities.
One Release 3 feature completed this week: the Title Screen! It will likely completely change after I bring an artist onboard, but it gets the job done.
Finished Actor Refactoring
I dug myself into a deep hole last week by refactoring actors. Every unit test failed and there were a couple hundred compiler issues to fix. I’ve mostly climbed my way out of the hole since, and I think the actor architecture is solid enough now to get to the finish line. I now have:
A main actor class: a plain C# class shared by all actors. It contains all actor state and is therefore the only class involved in saving and loading actors.
A GameObject prefab is defined for each Actor Type. These prefabs are loaded into memory when the game starts. They use composition in a limited manner, typically having only Transform, SpriteRenderer, and Animator components. When a new actor is created, the corresponding Actor Type GameObject is instantiated and associated with the actor.
A ScriptableObject prefab is defined for each actor type’s definition data. Composition is employed here as well, though it is not supported “out of the box” by Unity, at least not in the way I’m using it. The technique is to add a field to the ScriptableObject that is a parent class or interface, and create custom editors to enable an inherited class (or implementing class in the case of interfaces) to be selected from a dropdown. Reflection is used to get all of the subclasses and populate the dropdown. When an actor is created in the game, Activator.CreateInstance is used to instantiate the class. This allows me to define an actor’s AI and abilities, for example, in the editor instead of in code.
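The reflection pattern is easier to see stripped of the Unity editor plumbing. Here's a language-agnostic Python sketch (the post's version uses C# reflection to enumerate subclasses and Activator.CreateInstance to instantiate the selected one; the AI class names below are hypothetical):

```python
class ActorAI:
    """Base class; the editor dropdown would list every subclass found below."""
    def act(self, actor):
        raise NotImplementedError

class WanderAI(ActorAI):
    def act(self, actor):
        return f"{actor} wanders"

class GuardAI(ActorAI):
    def act(self, actor):
        return f"{actor} stands guard"

def available_ai_types():
    """Reflection step: discover all subclasses to populate the dropdown."""
    return {cls.__name__: cls for cls in ActorAI.__subclasses__()}

def create_ai(type_name):
    """Instantiation step: the analogue of Activator.CreateInstance,
    turning the string stored in the definition data back into an object."""
    return available_ai_types()[type_name]()
```

The key property is that the definition data only stores a type name, so adding a new AI or ability never touches the editor code.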
This isn’t an elegant solution, but it addresses the things that were bothering me about the previous architecture, namely redundant type data in each actor instance, having to use MonoBehaviours or ScriptableObjects for composition but not being able to easily save/load component state data, inadequate information hiding, circular dependencies, and unclear division of responsibilities between the different classes comprising actors. The drawbacks of this solution are having to maintain two prefabs for each actor type and not doing composition the “Unity way” with MonoBehaviours.
All Unit Tests Passing, More Unit Tests Added
I’m repeating myself from previous posts, but the unit tests have been well worth the investment in time.
Next week, the plan is to finish the class selection and load game screens. There are still some things that are broken from refactoring and I need to fix those too.
Last week I started a major refactoring to make it easier to add new features, reduce the number of bugs introduced from poorly architected code, and decrease average troubleshooting time. I made massive progress on this effort this week thanks to having three days off from work.
When I started developing Legend over a year ago, I used some source code from a Unity tutorial, most of which has since been culled or replaced. The source code included a GameManager class. I wasn’t sure what the appropriate use of this class was in Unity, and over time it grew into a 2000-line monstrosity and a textbook example of how to misuse singletons. It contains game turn management, multiple finite state machines, movement and combat logic, public references to other classes such as the map class (making the classes globally accessible), and miscellaneous utility methods. I began dismantling the GameManager class this week. It’s been a painful process but well worth the effort because it’s greatly reducing cognitive load.
I added one new feature this week: when the map is fully revealed by a "reveal map" scroll, the revealed map locations are shown with a blue tint to distinguish where the player has and hasn't been. Here's an example:
I moved the legendrl.com website to a new host, SiteGround, because the website was extremely slow on the former host. The site is much faster now!
Next week I’ll complete the major refactoring and squeeze in a new feature or two.
Work on the new map generator is winding down, finally! I recently added a feature to visualize the graph structure of the generated maps. This has been a huge help in analyzing the map flows and determining where to place various elements. Each time a new map is generated, an image file of the graph is created using Graphviz and a screenshot of the map is taken. Here's an example of a generated map and the associated graph:
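Generating the graph image mostly amounts to emitting a Graphviz DOT description and handing it to the `dot` tool. A minimal Python sketch of the emitter (the adjacency-dict input and the highlighting of mandatory rooms are assumptions for illustration; the game's actual exporter is C#):

```python
def room_graph_to_dot(graph, mandatory=frozenset()):
    """Emit a Graphviz DOT description of an undirected room graph.
    graph: dict room id -> list of neighbors; mandatory rooms are shaded."""
    lines = ["graph map {"]
    for room in graph:
        style = " [style=filled, fillcolor=lightblue]" if room in mandatory else ""
        lines.append(f"  r{room}{style};")
    seen = set()
    for room, nbrs in graph.items():
        for nbr in nbrs:
            edge = frozenset((room, nbr))
            if edge not in seen:  # emit each undirected edge only once
                seen.add(edge)
                lines.append(f"  r{room} -- r{nbr};")
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be rendered with, for example, `dot -Tpng map.dot -o map.png`.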
Another feature of the new map generator is the ability to watch the map generate step-by-step. This feature has helped immensely in finding map generation bugs and improvements. I’ll put together a video for the next Sharing Saturday.
Next week, with the recently developed map visualization features, I’ll be fine-tuning the map generation parameters and experimenting with lock and key placement.