My original assumption about generative adversarial networks was wrong, but the work I did set me down the right path, and now I have plenty of tools going forward. I've spent the past few weeks researching GAN systems and other machine learning techniques for content generation, and I finally realized that TensorFlow already had what I'd been looking for.
The system I've created does a fair job considering how little I know about what's going on under the hood, but it clearly can't understand the spatial structure of the voxels. That was expected, since the system I'm using was built to work with two-dimensional image data. Here's an example of the voxel data produced by the current GAN, using a slightly modified version of the handwriting-generation example:
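To make the problem concrete, here's a small numpy sketch of what's going on (the grid size and layout are my assumptions for illustration, not the actual dimensions from my code): the 2D GAN emits a flat image whose rows are really stacked slices of a voxel grid, and turning that image back into voxels is just a reshape.

```python
import numpy as np

# Hypothetical shapes: the GAN emits a flat 2D image whose rows are
# really stacked slices of a 16x16x16 voxel grid.
depth, height, width = 16, 16, 16
flat_image = np.random.rand(depth * height, width)  # what the 2D GAN "sees"

# Recovering the voxel grid is just a reshape...
voxels = flat_image.reshape(depth, height, width)

# ...but the 2D convolutions never connect row r to row r + height,
# so adjacent slices (voxels[z] and voxels[z + 1]) are generated with
# no knowledge of each other -- hence the visible divisions between walls.
assert voxels.shape == (depth, height, width)
```

The reshape itself is lossless; the problem is purely that nothing in a 2D convolutional network ties row `r` to row `r + height`, so neighboring slices come out uncorrelated.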
(Side note: I just realized I need to fix the SSL certificate on this site.) Anyway, you can see that the GAN is generating 3D data, but there are clear divisions between the walls of the structure: it's really just generating an image, and it has no way of knowing that each "frame" of that image corresponds to a 3D feature. After I realized this, I started trying to design a network that could work with 3D data directly. From that research I learned that GANs use convolutional networks to generate content, and once I (sort of) understood what that means, I thought: maybe they've already thought of this. Of course, they have. Hopefully I can use the built-in 3D convolutional layer that Keras already provides. The code I'm working on now is in the GitHub repo if you want to check it out. When (or if) I get it to work, I'll post the results.
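For intuition about what that built-in layer (Keras calls it `Conv3D`) actually computes, here's a minimal numpy sketch of a single-channel 3D convolution. The function name, shapes, and filter are mine for illustration; Keras's real layer adds channels, batching, biases, and learned weights on top of this same sliding-window operation.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Single-channel 3D convolution with 'valid' padding (technically
    cross-correlation, as in deep learning libraries): slide the kernel
    over all THREE spatial axes, so depth is treated just like height
    and width instead of as unrelated image rows."""
    kd, kh, kw = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(
                    volume[z:z + kd, y:y + kh, x:x + kw] * kernel
                )
    return out

volume = np.random.rand(16, 16, 16)   # a voxel grid
kernel = np.ones((3, 3, 3)) / 27.0    # a 3x3x3 averaging filter
features = conv3d_valid(volume, kernel)
assert features.shape == (14, 14, 14)
```

The key difference from the 2D case is that the kernel spans the depth axis too, so each output voxel depends on its neighbors in all three directions, which is exactly the spatial awareness the 2D version was missing.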