A neural network learns how to play Pong from human data

I was asked whether it was possible to train a neural network to play a game in a similar style to a human player; this project is my answer to that question. I also wanted to experiment with implementing subsystems that are common in games, such as an entity-component-system, to understand them more clearly.

It works by collecting data from a human player, training a neural network on that data with a separate program, and using the result to control the opponent. Each training example records the position and velocity of the ball, and the position of the human-controlled paddle, at a given time.
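The project's on-disk format isn't shown here, but a training example of this shape could be sketched as follows (field names and the `to_sample` helper are illustrative, not from the original code):

```python
from dataclasses import dataclass, astuple

@dataclass
class TrainingExample:
    """One snapshot of the game state and the human's response.
    Field names are hypothetical; the project stores its data as CSV rows."""
    ball_x: float    # ball position
    ball_y: float
    ball_vx: float   # ball velocity
    ball_vy: float
    paddle_y: float  # where the human-controlled paddle was at this moment

def to_sample(ex: TrainingExample):
    """Split a record into the network input (ball state)
    and the training target (paddle position)."""
    *features, target = astuple(ex)
    return features, target

example = TrainingExample(0.8, 0.3, -0.02, 0.01, 0.35)
x, y = to_sample(example)  # x = ball state, y = paddle position to imitate
```

The four ball values become the network's inputs and the paddle position becomes the target output, so the network learns a mapping from game state to the position the human would have chosen.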

Human player (yellow) and "Goalkeeper" AI trained with 100 hidden neurons (red)

Through experimentation I found that it takes approximately 100,000 training examples, together with 50 to 100 hidden nodes, to produce acceptable behaviour.

The included file of trained neural network weights, goalkeeper-100.csv, is an example of the result of training. It behaves like a goalkeeper, waiting in the middle and diving to meet the ball when it comes onto its side of the court.

The neural network is a standard feed-forward network trained by the backpropagation algorithm. Even though this works surprisingly well, I believe it isn't the best solution, since each feed-forward cycle doesn't consider previous datapoints. A recurrent neural network may produce better results, since it has an internal state that behaves like short-term memory.
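To make the approach concrete, here is a minimal sketch of such a network: one tanh hidden layer mapping the four ball inputs to a paddle position, updated by plain gradient descent on squared error. This is an assumption-laden illustration (layer sizes, activation, and learning rate are mine), not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in=4, n_hidden=50, n_out=1):
    """Small random weights for a single-hidden-layer network.
    Sizes are illustrative: 4 inputs (ball x, y, vx, vy), 1 output (paddle y)."""
    return {
        "W1": rng.normal(0.0, 0.1, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(p, x):
    """Feed-forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]

def backprop_step(p, x, y, lr=0.01):
    """One backpropagation update on the loss 0.5 * (out - y)^2."""
    h, out = forward(p, x)
    err = out - y                          # gradient of loss w.r.t. output
    dW2 = np.outer(h, err)
    dh = (p["W2"] @ err) * (1.0 - h**2)    # chain rule through tanh
    dW1 = np.outer(x, dh)
    p["W2"] -= lr * dW2
    p["b2"] -= lr * err
    p["W1"] -= lr * dW1
    p["b1"] -= lr * dh
    return float(0.5 * err @ err)          # current loss, for monitoring
```

Repeatedly calling `backprop_step` over the recorded (ball state, paddle position) pairs drives the loss down, which is all "imitating the human" amounts to here.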

RNNs are good at tasks such as predicting the next word in a sentence given all of the previous words, so my intuition is that one could also be good at predicting where the paddle should be, given where the ball is and where it's heading.

Through this project I learnt:

Other Projects