Quick Update

Added some terrain:

Also, I added a new discriminator that attempts to judge the traversable area of a generated structure, but it doesn’t really work yet. The generator ignores the terrain entirely, so I’ve basically placed the structure in the world by hand. I’m still trying to figure out how to give the generator some terrain awareness.
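One simple way to start giving the generator terrain awareness might be to score candidate placements by how flat the ground is under the structure’s footprint. This is a sketch of my own, not code from the repo; the `footprint_flatness` helper and the heightmap layout are assumptions:

```python
import numpy as np

def footprint_flatness(heightmap, x, z, width, depth):
    """Score how flat the terrain is under a structure footprint.

    Lower scores mean flatter ground; 0 means perfectly level.
    heightmap is a 2D array of terrain heights, (x, z) is the
    footprint's corner, and width/depth is its size in voxels.
    """
    patch = heightmap[x:x + width, z:z + depth]
    # Spread of heights under the footprint: max minus min.
    return int(patch.max() - patch.min())

# Toy 8x8 heightmap: flat at height 3 except for a bump.
hm = np.full((8, 8), 3)
hm[5:7, 5:7] = 6

print(footprint_flatness(hm, 0, 0, 4, 4))  # flat area -> 0
print(footprint_flatness(hm, 4, 4, 4, 4))  # bumpy area -> 3
```

A discriminator could then penalize placements (or structures) whose footprints land on uneven ground.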

The next steps are adding openings (windows/doors) to the generation schema and then I want to use a pathfinding algorithm to favor structures with proper doors and windows.
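As a rough sketch of what that pathfinding score could look like (hypothetical, not code from the repo): flood-fill a slice of the structure from outside, so that sealed boxes score lower than structures with real openings.

```python
from collections import deque

def reachable_interior(grid):
    """Flood-fill a 2D slice of a structure from its border.

    grid: list of strings, '#' = wall, '.' = air. Returns the number
    of air cells reachable from outside, a rough stand-in for 'this
    structure has a usable opening'.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    queue = deque()
    # Seed the search with every open cell on the border.
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) and grid[r][c] == '.':
                queue.append((r, c))
                seen.add((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '.' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)

sealed = ["......",
          ".####.",
          ".#..#.",
          ".#..#.",
          ".####.",
          "......"]
doored = ["......",
          ".####.",
          ".#..#.",
          ".#....",  # opening in the east wall
          ".####.",
          "......"]
print(reachable_interior(sealed) < reachable_interior(doored))  # True
```

The same breadth-first search extends to 3D by adding up/down neighbors, which would let the discriminator reward rooms that a player can actually walk into.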

Latest and Greatest

I’m still set on figuring out how to generate random POIs for voxel-based games like 7D2D or Minecraft. Here’s my latest research project, which will hopefully get me closer to my goal:

GitHub here: https://github.com/newcarrotgames/wirearchy

I’m using an extremely crude, GAN-ish style of procedural generation: something similar to an evolutionary algorithm builds structures, and then each structure is scored by a discriminator for usefulness. The generation code has been fairly simple, but the discriminator is proving to be a bit complicated. I tend to overcomplicate things on my own, so I’m also dealing with my own insecurities during this process… free therapy, right? Here are some shots of what I’ve been able to do so far:

Asking the network to generate large structures.
The discriminator used here favors structures with high resource cost (iron/stone > wood).
Just added terrain using simplex noise, but then I realized matching the POI to the terrain won’t be as easy as I thought.
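For reference, the resource-cost discriminator mentioned above can be approximated with something as simple as a weighted block count. The weights below are made up for illustration; the post only establishes that iron/stone should outscore wood:

```python
def resource_score(voxels):
    """Score a structure by the 'cost' of its materials.

    voxels is a nested list of block names (layers of rows of blocks).
    The weights are illustrative assumptions, not the project's values.
    """
    weights = {"iron": 5, "stone": 3, "wood": 1, "air": 0}
    flat = [b for layer in voxels for row in layer for b in row]
    return sum(weights.get(b, 0) for b in flat)

wood_hut   = [[["wood", "wood"], ["wood", "air"]]]
stone_keep = [[["stone", "stone"], ["iron", "air"]]]
print(resource_score(wood_hut))    # 3
print(resource_score(stone_keep))  # 11
```

A fitness function like this is cheap to evaluate, which matters when an evolutionary loop has to score thousands of candidate structures per generation.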

It “works”

The results are promising, although it’s hard to tell that at first:

Side view
Top-down view

So, despite the fact that it’s still an unrecognizable blob, it does show that this network is better suited for three-dimensional data. It also shows that I have quite a lot to do if I ever want this thing to produce useful output.

The corner “wall” feature is also puzzling, though I expected to see something like this, considering the training data I’ve used:

One of the cabin prefabs used for training.

Most of the training data I’ve used has the base you see there at the bottom, but judging by the generated model I think I’m messing up the orientation of the data as it goes through the system. To me it looks like the generated models are actually upside down and may need to be flipped over:
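If that’s the case, the fix may be as simple as flipping the vertical axis of the array. A minimal NumPy sketch, assuming axis 0 is “up” in the prefab format (which is my assumption, not a documented fact about the data):

```python
import numpy as np

# A tiny 3-level voxel stack: solid base layer at index 0, then
# sparser layers above (1 = block, 0 = air). Axis 0 is "up" here.
prefab = np.array([
    [[1, 1], [1, 1]],   # base
    [[1, 0], [0, 1]],   # walls
    [[0, 0], [0, 1]],   # roof corner
])

# If the generated model comes out upside down, reversing the
# vertical axis puts the base back where it belongs.
flipped = np.flip(prefab, axis=0)
print(flipped[2].tolist())  # the base layer is now at the far end: [[1, 1], [1, 1]]
```

The viewer below makes it easy to eyeball whether the flip (or a different axis order entirely) is the real culprit.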

Hopefully I can figure the rest of this out!

Quick Update 5/29/2020

My original assumption about generative adversarial networks was wrong, but the work I did put me on the right path, and now I have plenty of tools going forward. I’ve spent the past few weeks researching GAN systems and other machine learning techniques for content generation, and I finally realized that TensorFlow already has what I’ve been looking for.

The system I’ve created does a fair job considering how little I know about what’s going on under the hood, but it’s obvious that it’s not able to understand the spatial structure of the voxel data. That was expected, because the system I’m using was built to work with two-dimensional image data. Here’s an example of the voxel data produced by the current GAN using a slightly modified version of the handwriting-generator example:

Example output from current implementation: notice the divisions between the different “walls”

Just realized I need to fix the SSL certificate on this site… anyway, you can see that the GAN is generating 3D data, but there are clear divisions between the walls of the structure. It’s really just generating an image, and there’s no way for it to know that each “frame” in the image corresponds to a 3D feature. After I realized this, I started trying to design a network that could work with 3D data. From that research I learned that GANs use convolutional networks to generate content, and once I (sort of) understood what that means, I thought: maybe they’ve already thought of this. Of course, they have. Hopefully I can use the built-in 3D convolutional layers that Keras already provides. The code I’m working on now is in the GitHub repo if you want to check it out. When/if I get it to work, I’ll post the results.
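To see why the 3D case is different, here’s a minimal pure-NumPy sketch of what a single 3D convolution kernel computes. A real network would use `tf.keras.layers.Conv3D` with many learned kernels; this naive loop is just to illustrate that each output voxel sees its neighbors across slices, not just within one “frame”:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Naive valid-mode 3D convolution with one kernel, no padding.

    Each output voxel is the weighted sum of a full kd x kh x kw
    neighborhood, so information flows between slices -- exactly
    what a 2D image network cannot do.
    """
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z:z + kd, y:y + kh, x:x + kw]
                out[z, y, x] = np.sum(patch * kernel)
    return out

volume = np.zeros((3, 3, 3))
volume[1, 1, 1] = 1.0            # a single solid voxel in the center
kernel = np.ones((3, 3, 3))      # sums the whole 3x3x3 neighborhood
result = conv3d_single(volume, kernel)
print(result)  # [[[1.]]] -- the kernel saw across all three axes
```

Keras handles the equivalent (plus learned weights, channels, and striding) in one layer, which is why finding `Conv3D` felt like discovering the thing I’d been trying to invent.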

Update 4/24/2020

Just to update my millions of followers (that’s a joke): I’ve realized that the GAN wants all of its training data to be the same size, so the prefabs have to be resized before they can be used for training. This does allow for some tricks to increase the amount of training data, by scaling the same prefab slightly smaller or larger to produce a unique sample. Scaling a 3D array of voxels is not as easy as I thought it would be; the existing array-resize methods are not built for this specific task, or at least that’s what I’m telling myself so I can justify writing my own code to do it. My first attempts were failures, and I’ve found it difficult for a visual thinker like myself to comprehend the process, so I’ve made this WebGL prefab viewer so I can see exactly what my code is doing:

webgl prefab viewer

If you have the game installed, it will put the prefabs in the upper-right corner so you can check them out. The web app uses Flask on the backend so I can reuse all the Python code I have so far. Currently there are no optimizations, so the larger prefabs are slow to both load and view. I know how to fix it, but until it becomes a real blocker I’m not going to worry about it. OK, I’m off to work on my resizing code.
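For the record, the resizing I’m after could be sketched as a nearest-neighbor resample (my own illustration, not the repo’s code). Sampling the closest source voxel keeps block IDs intact instead of interpolating in-between materials, which is why generic image-resize routines aren’t a great fit:

```python
import numpy as np

def resize_voxels(voxels, new_shape):
    """Nearest-neighbor resize of a 3D voxel array.

    For each cell of the output grid, sample the source voxel at the
    matching relative position. Crude, but it never invents block IDs
    that weren't in the original prefab.
    """
    voxels = np.asarray(voxels)
    idx = [
        (np.arange(n) * voxels.shape[axis] // n)  # map output index -> source index
        for axis, n in enumerate(new_shape)
    ]
    return voxels[np.ix_(idx[0], idx[1], idx[2])]

cube = np.arange(8).reshape(2, 2, 2)       # 2x2x2 prefab with unique block IDs
bigger = resize_voxels(cube, (4, 4, 4))    # upscale to 4x4x4
print(bigger.shape)                        # (4, 4, 4)
print(bigger[0, 0, 0], bigger[3, 3, 3])    # corners preserved: 0 7
```

The same function downsamples too, since the index mapping works in both directions.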

7D2D GAN Project Update


That’s an in-game screenshot of the first prefab generated by the current code, and it’s OK to be confused. Let’s be honest: that’s not going to help you survive a horde anytime soon. What this really means is:

A scaled-up example of the training data. The actual images are only 28×28 pixels.

The next goalpost is to convert all the available game prefabs into training-data images like the one above, re-train, and see what we get. Most of this project’s challenges still remain, and to be honest, I’m sure it won’t generate anything that even resembles a finished product for a long time, and that’s not TensorFlow’s fault. However, I’m planning to take whatever reasonable action I can to make this a reality, which probably just means emailing someone who actually knows how it works. If that’s all you have to do, why the heck not?
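One plausible way to turn a prefab into a fixed-size training image (a sketch with assumed conventions, not the project’s actual pipeline) is a top-down occupancy projection centered on a 28×28 canvas, MNIST-style:

```python
import numpy as np

def voxels_to_image(voxels, size=28):
    """Flatten a voxel prefab into a fixed-size 2D training image.

    This is one guess at a workable encoding: a top-down max
    projection (any solid block in a column -> bright pixel),
    centered on a size x size canvas like an MNIST digit.
    Assumes axis 0 of the array is the height axis.
    """
    voxels = np.asarray(voxels)
    proj = (voxels.max(axis=0) > 0).astype(np.float32)
    h, w = proj.shape
    canvas = np.zeros((size, size), dtype=np.float32)
    top, left = (size - h) // 2, (size - w) // 2
    canvas[top:top + h, left:left + w] = proj
    return canvas

prefab = np.zeros((4, 6, 6))
prefab[0] = 1                      # solid 6x6 base layer
img = voxels_to_image(prefab)
print(img.shape, int(img.sum()))   # (28, 28) 36
```

A projection like this throws away the vertical structure, of course, which is exactly the limitation the later updates above run into.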

My super-long-term, warm-and-fuzzy goal is to create a website called thisprefabdoesnotexist.com that works like this one: https://www.thispersondoesnotexist.com/. Good luck to me!

I failed

I was hyper-focused on releasing CarCoder for a while, but after a work-sponsored hack week where my team built a mobile app that does real-time video object detection using TensorFlow, I had another idea that I could not ignore.

In between Rocket League binges, I’ve been playing another game called “7 Days To Die,” a first-person/base-building/survival/open-world/crafting/tower-defense/post-zombie-apocalypse game that I think is cool for the following reasons:

  • It uses marching cubes instead of basic cube rendering for voxel map data
  • The game world is randomly generated using a neat blend of actual random landscapes/biomes with these prefab buildings/structures called Points Of Interest (POIs)

Marching cubes is well established at this point; it’s a way of rendering a voxel-based environment in a more realistic manner than just drawing a bunch of cubes (think Minecraft).

The prefab POIs are cool because they add playable mini-quests to the random world, so it’s not just the typical sandbox experience (like Minecraft) where you have no real incentive to do much until nightfall. The only problem is that once you’ve played one, if you see it again somewhere else you already know everything about it. They are static elements that are always the same, no matter how many times they get regenerated into a new world.

One of the game’s randomly generated cities. Each building/sign is a separate prefab, which is then arranged by the world generator as shown above.

The idea I had was to change this: randomly generate POIs using machine learning so the player has an almost infinite amount of content. It’s a bold strategy, and it will fall short of the goal, but there’s so much that can be done easily with tools that exist right now that I can’t think of a good reason not to pursue this project. I’m also using it to build my overall software engineering design/architecture skills, so I can at least claim the work done here won’t be a complete waste ;).

Feedback: Part 1

I’ve already gotten some feedback that mirrors what I believe the game needs: the ability to remove single instructions. I also need to add a confirmation before you delete an entire program. Removing an instruction is awkward right now; my plan is to change the red X to a trash-can icon, so that dragging an instruction onto it deletes it. It worked this way in the past, but I got caught up trying to change it so that moving an instruction completely off the coding GUI strip removes it.

Breaking Sad

I was considering taking a break from coding on CarCoder to pursue some other hobby projects I’ve been meaning to work on, but the main reason for working on CarCoder was to see a project through from concept to a truly finished product. I read that you should start marketing a game as soon as you have something to show people, so that by the time of the actual release you already have an audience.

Now I need to research ways of marketing indie games and start getting some real feedback. When I start seeing themes in that feedback (like “the graphics are terrible” or “I don’t like the editor”), I need to develop plans to fix those issues now, before they become too difficult to change.

Another feature I just realized I need to implement is the ability for the community to add content to the game. This will start with an extremely basic way to import your own levels by creating the JSON files already used for the game’s levels and/or stages, and could develop into importing actual assets (security concerns noted).
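As a sketch only, since the game’s actual level schema isn’t shown here, a community level file might look something like this (every field name below is hypothetical):

```json
{
  "name": "community-level-1",
  "author": "someplayer",
  "stage": 3,
  "instructions_allowed": ["forward", "turn_left", "turn_right"],
  "grid": [
    ["road", "road", "wall"],
    ["wall", "road", "goal"]
  ]
}
```

Validating files like this against a published schema would also be the natural first line of defense for the security question above.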