# Homework 4

In this homework you will train your first convnet on supertux images. You will extend the non-linear multi-layer perceptron trained in the previous assignment.

## Classifying supertux

As in assignment 3, your goal is to classify images from supertux. You're given a dataset of 64x64 RGB images of objects cropped from supertux, which you will classify into 6 classes:

- Objects (0)
- Tiles (1)
- Tux (2)
- Bad guys (3)
- Bonus (4)
- Projectiles (5)

You'll use the cross entropy loss to train your network.
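Cross entropy combines a softmax over the 6 output scores (logits) with the negative log-likelihood of the true class. In PyTorch this is `torch.nn.CrossEntropyLoss`; a minimal sketch of the math it computes, for one image (the logits here are made-up example values):

```python
import math

def cross_entropy(logits, label):
    # Numerically stable log-sum-exp (the log of the softmax denominator),
    # minus the logit of the true class.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

# Hypothetical logits for one image whose true class is "Bad guys" (3).
# The score for class 3 dominates, so the loss is close to zero.
loss = cross_entropy([-10.2, 4.3, 1.2, 8.7, -1.3, 2.8], 3)
```

Note that the loss takes raw logits, not probabilities: the softmax is folded into the loss for numerical stability, so your network's last layer should be linear with no softmax.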

First, let’s look at the architecture.

### ConvNet

You'll design your very own convolutional neural network architecture. Your network should use 4 different layer types: convolutions, pooling, a flattening step, and fully connected layers.

It is up to you to decide how many conv layers you want to use, what the kernel sizes and strides are, and where you want to pool. At some point you'll need to flatten your output (`.view(-1, number_of_activations)`) and feed the flattened output into one or more fully connected layers. Make sure your model has output dimension $$(N, 6)$$. Pro-tip: Start with a small network that trains quickly, then slowly increase the number of channels.
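As a concrete starting point, here is a minimal sketch in PyTorch. The layer counts, channel widths, kernel sizes, and strides are arbitrary illustrative choices, not the required architecture:

```python
import torch

class ConvNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            torch.nn.ReLU(),
            torch.nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),                                        # 16x16 -> 8x8
        )
        # Flattened activations: 32 channels * 8 * 8 spatial positions.
        self.classifier = torch.nn.Linear(32 * 8 * 8, 6)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.view(z.size(0), -1))

net = ConvNet()
out = net(torch.zeros(4, 3, 64, 64))  # batch of 4 dummy images
print(out.shape)  # torch.Size([4, 6])
```

Running a dummy batch through the model like this is a quick way to check that your flattening dimension is right and the output is $$(N, 6)$$ before you start training.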

### Getting Started

We provide you with starter code that loads the image dataset and the corresponding labels from a training and validation set.

The code will measure classification accuracy as you train the model. We also provide an optional tensorboard interface.
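The starter code handles this for you, but the metric itself is just top-1 accuracy: the fraction of images whose highest-scoring class matches the label. A minimal sketch (the tensors here are made-up examples, not the grader's exact code):

```python
import torch

def accuracy(logits, labels):
    # Fraction of rows whose argmax equals the true label.
    return (logits.argmax(dim=1) == labels).float().mean().item()

# Two hypothetical predictions: the first is correct (class 0),
# the second predicts class 2 when the label is 5.
logits = torch.tensor([[2.0, 0.1, -1.0, 0.0, 0.0, 0.0],
                       [0.0, 0.0,  3.0, 0.0, 0.0, 0.0]])
labels = torch.tensor([0, 5])
print(accuracy(logits, labels))  # 0.5
```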

1. Define your model in `models.py`.
2. Train your model, e.g. `python3 -m homework.train`. Optionally, you can use tensorboard to visualize your training loss and accuracy: run `python3 -m homework.train -l myRun`, and in another terminal `tensorboard --logdir myRun`, where `myRun` is the log directory. Pro-tip: You can run tensorboard on the parent directory of many logs to visualize them all.
3. Test your model, e.g. `python3 -m homework.test`.

If your model trains slowly or does not reach the accuracy you'd like, you can increase the number of training iterations in `homework.train` by providing an `-i` argument with the desired number of training iterations, e.g. `-i 20000`.

While developing your model, use fewer iterations, e.g. `-i 1000`, so that each experiment finishes quickly.
