Plethora - Neural Network

11/2017 – 2/2018

Introduction

I created PlethoraNN to learn more about neural networks and to run some interesting experiments with them.

Background

When I first learned about neural networks and saw the interesting and fun (yet small) experiments that people were doing with them, I knew that I wanted to understand them better and experiment with them myself.

Similar to my 3D engine, when I first started looking into programming a neural network, I just wanted a simple (yet expandable) network that I could understand, program, and easily use. Because of this, I didn't want to use the neural network libraries that existed at the time, and because I did not know calculus or linear algebra, I could not understand most of the explanations I found online. Despite this, I knew there had to be some way of expressing the math behind neural networks algebraically (even if it meant using complex loops and functions); after all, computers only understand basic mathematical operations, so the fact that people had programmed neural networks meant there had to be a way of representing them in notation I could understand. Eventually, I persisted long enough and found an article that broke the math down into steps that I could understand and then extrapolate to more complex networks.
After reading through the article several times, writing down the formulas, and a lot of trial and error, I was eventually able to write the code for my own neural network. At that point, I started working on several projects with it, some of which I have listed below. Afterwards, I continued researching other types of machine learning, such as convolutional neural networks, recurrent neural networks, and Q-learning; however, these experiments were generally less successful, and I ended up pausing work on them to learn the math needed to better understand machine learning. That being said, I do plan to update this page with more information about these other projects at some point soon.
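
To give a rough idea of the kind of math involved, here is a minimal sketch of a single fully connected layer with a sigmoid activation: a feedforward pass plus one gradient-descent update against a squared error. The class and method names are assumptions for illustration only; this is not the actual PlethoraNN code.

    // Minimal sketch of one fully connected layer (illustrative only).
    // Feedforward: out[j] = sigmoid(sum_i(w[j][i] * in[i]) + b[j])
    // Output-layer delta (squared error): delta[j] = (out[j] - target[j]) * out[j] * (1 - out[j])
    public class TinyLayer {
        double[][] weights; // indexed as [output][input]
        double[] biases;

        TinyLayer(int inputs, int outputs) {
            weights = new double[outputs][inputs];
            biases = new double[outputs];
            java.util.Random rng = new java.util.Random();
            for (int j = 0; j < outputs; j++)
                for (int i = 0; i < inputs; i++)
                    weights[j][i] = rng.nextGaussian() * 0.1; // small random start weights
        }

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        double[] forward(double[] input) {
            double[] out = new double[biases.length];
            for (int j = 0; j < out.length; j++) {
                double sum = biases[j];
                for (int i = 0; i < input.length; i++) sum += weights[j][i] * input[i];
                out[j] = sigmoid(sum);
            }
            return out;
        }

        // One gradient-descent step for an output layer trained on squared error.
        void train(double[] input, double[] target, double learningRate) {
            double[] out = forward(input);
            for (int j = 0; j < out.length; j++) {
                double delta = (out[j] - target[j]) * out[j] * (1.0 - out[j]);
                for (int i = 0; i < input.length; i++)
                    weights[j][i] -= learningRate * delta * input[i];
                biases[j] -= learningRate * delta;
            }
        }
    }

A full network simply stacks layers like this and propagates each layer's error backwards, one layer at a time.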

Text to Images

The first real project I used my neural network on was converting text to images. This project was inspired by remembering the general idea of how my computer science teacher had explained abstract classes and interfaces to the class:
For example, we could make a vehicle interface and implement it in a variety of different classes: cars, planes, boats, rockets, and so on. For any one of these classes, we can find out specific information about it; cars, for instance, might have information about how many miles per gallon they get. However, what happens if I ask the question the other way around: what is a vehicle? The correct answer is: I don't know. It could be anything! I could go around this room, ask everyone to describe a vehicle, and everyone could give a different correct answer.
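
In Java, that explanation corresponds roughly to something like the following; the names here are hypothetical and only serve to make the idea concrete:

    // Many concrete classes can implement the same interface,
    // but a bare "Vehicle" does not tell you which one you actually have.
    interface Vehicle {
        String describe();
    }

    class Car implements Vehicle {
        double milesPerGallon = 30.0;
        public String describe() { return "a car that gets " + milesPerGallon + " miles per gallon"; }
    }

    class Boat implements Vehicle {
        public String describe() { return "a boat"; }
    }
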
Knowing that neural networks are essentially incredibly good pattern-recognition devices, I was curious what would happen if I posed this latter question to them. After all, not only are humans able to create these categories, we can also think of generic representations of them. For example, I could ask you to draw a house, a car, a tree, or many other things without specifying exactly which house, which car, or which tree, and you would be able to draw something that represents that general category. Asking a neural network to draw one of these generic categories removes the randomness that the human element adds, and it should (in theory) be able to generate an image with the defining characteristics of the category.
To attempt this, I knew that I needed a variety of images showing items in different categories. Ideally, these images would also be small (to speed up training) and fairly consistent: the same sizes, similar object placement, showing only the object in question, and so on. The first thing that stood out to me as fulfilling these requirements was the set of textures from Minecraft. These textures are very consistent, they are small, they span several categories, and they are still very distinctive at low resolutions. At the same time as I was working on this, Mojang was working on changing Minecraft's textures and was posting the game's textures on their website for anyone to download. Conveniently, these textures also had a standardized naming scheme; for example, the textures for saplings all start with "sapling".

By giving the neural network an input based on the texture's name and then converting the output into an image, I was able to train the neural network to generate textures. While I did not expect this to be incredibly successful, I ended up getting several interesting results, some of which clearly show characteristics of the categories they belong to. I've included some of these images/results in the carousel above.
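
The write-up above doesn't spell out exactly how the texture names and pixels were encoded, so the sketch below is only one plausible way to set it up: spread the characters of the file name across a fixed-size input vector, and interpret the network's output as the RGB channels of a 16x16 texture. The class, constants, and encoding scheme here are all assumptions for illustration, not the actual PlethoraNN code.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class TextureEncoding {
        static final int NAME_INPUTS = 32; // assumed input vector size
        static final int SIZE = 16;        // Minecraft textures are 16x16

        // Encode a texture name (e.g. "sapling_oak") into a fixed-length input vector.
        static double[] encodeName(String name) {
            double[] input = new double[NAME_INPUTS];
            for (int i = 0; i < name.length(); i++) {
                // spread the character codes across the fixed-size vector
                input[i % NAME_INPUTS] += name.charAt(i) / 255.0;
            }
            return input;
        }

        // Interpret a network output of 16*16*3 values in [0,1] as an RGB image.
        static BufferedImage decodeOutput(double[] output) {
            BufferedImage img = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < SIZE; y++) {
                for (int x = 0; x < SIZE; x++) {
                    int base = (y * SIZE + x) * 3;
                    int r = (int) Math.round(Math.max(0, Math.min(1, output[base])) * 255);
                    int g = (int) Math.round(Math.max(0, Math.min(1, output[base + 1])) * 255);
                    int b = (int) Math.round(Math.max(0, Math.min(1, output[base + 2])) * 255);
                    img.setRGB(x, y, (r << 16) | (g << 8) | b);
                }
            }
            return img;
        }

        public static void main(String[] args) throws Exception {
            double[] input = encodeName("sapling_oak");
            double[] fakeOutput = new double[SIZE * SIZE * 3]; // would come from the network
            ImageIO.write(decodeOutput(fakeOutput), "png", new File("generated.png"));
        }
    }

Under this setup, a training pair would simply be the encoded file name as the input and the real texture's pixels (scaled to the 0-1 range) as the target output.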

Other Experiments/Projects will be added!


Specifications

Language(s): Java
Features/About: Neural Networks