Homework 6

starter code colab notebook

In this homework, we will train a CNN to do vision-based driving in SuperTuxKart.

This assignment should be solved individually. No collaboration, sharing of solutions, or exchange of models is allowed. Please do not directly copy existing code from anywhere other than your own previous solutions or the previous master solution. We will check assignments for duplicates. See below for more details.

We will design a simple low-level controller that acts as an auto-pilot to drive in SuperTuxKart. We then use this auto-pilot to train a vision-based driving system. To get started, first download and install SuperTuxKart on your machine.

pip install -U PySuperTuxKart

If you encounter any issues installing this package, please post them on Piazza.

Controller

In the first part of this homework, you’ll write a low-level controller in controller.py. The controller function takes as input an aim-point and the current velocity of the car. The aim-point is a point on the center of the track 15 meters away from the kart, as shown below.

[figure: the aim-point on the track ahead of the kart]

In the first part of this assignment, we will use a ground truth aim-point from the simulator itself. In the second part, we remove this restriction and predict the aim-point directly from the image.

The goal of the low-level controller is to steer towards this point. The output of the low-level controller is a pystk.Action. You can specify:

- pystk.Action.acceleration: acceleration in [0, 1]
- pystk.Action.brake: brake (bool); triggers reverse driving if acceleration is 0
- pystk.Action.steer: steering angle in [-1, 1]
- pystk.Action.drift: drift for sharper turns (bool)
- pystk.Action.nitro: use nitro for a speed boost (bool)

Implement your controller in the control function in controller.py. You won’t need any deep learning to design this low-level controller. You may use numpy instead of pytorch for this part.

Once you finish, you can test your controller using

python -m homework.controller [TRACK_NAME] -v

You should tune the hyper-parameters of your controller. You might want to look into gradient-free optimization or exhaustive search. The reference controller completes each level relatively efficiently: zengarden and lighthouse in under 50 sec, hacienda and snowtuxpeak in under 60 sec, cornfield_crossing and scotland in under 70 sec.
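
For instance, here is a minimal random-search sketch. The evaluate function below is a stand-in you would replace with a rollout of your controller that returns the time needed to finish a track, and the parameter ranges are guesses to adjust:

    import random

    def evaluate(steer_gain, skid_thresh, target_vel):
        # placeholder objective: replace this with a rollout of your
        # controller that returns the time needed to finish a track
        return (steer_gain - 4.0) ** 2 + (skid_thresh - 0.2) ** 2 + (target_vel - 25.0) ** 2

    best_params, best_score = None, float('inf')
    for _ in range(100):
        params = (random.uniform(1.0, 10.0),   # steering gain (assumed range)
                  random.uniform(0.05, 0.5),   # skid threshold (assumed range)
                  random.uniform(10.0, 30.0))  # target velocity (assumed range)
        score = evaluate(*params)
        if score < best_score:
            best_params, best_score = params, score
    print(best_params, best_score)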

Grade your controller using

python -m grader homework

Hint: Skid if the steering angle is too large

Hint: Target a constant velocity

Hint: Use the aim-point to compute the absolute steering angle, then learn or tune a scaling factor between the absolute and normalized steering.
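
Putting these hints together, a minimal sketch of the control function could look as follows. The gains are placeholders to tune, and the aim-point convention assumed here (first coordinate in [-1, 1], positive to the right) is an assumption you should check against the simulator:

    import numpy as np
    import pystk

    def control(aim_point, current_vel):
        # aim_point: assumed normalized coordinates, aim_point[0] in [-1, 1]
        # current_vel: current forward speed of the kart
        action = pystk.Action()

        target_vel = 25.0   # assumed constant target velocity; tune this
        steer_gain = 5.0    # assumed scaling from offset to steering; tune this
        skid_thresh = 0.2   # assumed threshold above which we drift; tune this

        # steer proportionally to the lateral offset of the aim point
        action.steer = float(np.clip(steer_gain * aim_point[0], -1, 1))

        # skid if the steering angle gets too large
        action.drift = abs(aim_point[0]) > skid_thresh

        # crude bang-bang control toward a constant target velocity
        if current_vel < target_vel:
            action.acceleration = 1.0
            action.brake = False
        else:
            action.acceleration = 0.0
            action.brake = True

        return action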

Planner

In the second part, you’ll train a planner to predict the aim-point. The planner takes as input an image and outputs the aim-point in image coordinates. Your controller then maps those aim-points to actions.

Data

Use your low-level controller to collect a training set for the planner.

python -m homework.utils zengarden lighthouse hacienda snowtuxpeak cornfield_crossing scotland

We highly recommend you limit yourself to the above training levels; adding additional levels may create an unbalanced training set and lead to issues with the final test grader.

This function creates a dataset of images and corresponding aim-points in drive_data. You can visualize the data using

python -m homework.visualize_data drive_data

[figure: visualization of the collected driving data]

Model

Implement your planner model in the Planner class of planner.py. Your planner model is a torch.nn.Module that takes as input an image tensor and outputs the aim-point in image coordinates (x: 0..127, y: 0..95). We recommend an encoder-decoder structure that predicts a heatmap, from which a spatial argmax layer in utils.py extracts the peak. Complete the training code in train.py and train your model using python -m homework.train.
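
A minimal sketch of such a model is below, assuming 96x128 input images and a soft (differentiable) spatial argmax. The architecture is a toy example, and your utils.py may define spatial_argmax slightly differently:

    import torch
    import torch.nn.functional as F

    def spatial_argmax(logit):
        """Soft argmax over a (B, H, W) heatmap; returns (B, 2) coordinates in [-1, 1]."""
        weights = F.softmax(logit.view(logit.size(0), -1), dim=-1).view_as(logit)
        # expected x: weight each column position by its total probability mass
        x = (weights.sum(1) * torch.linspace(-1, 1, logit.size(2), device=logit.device)).sum(1)
        # expected y: weight each row position by its total probability mass
        y = (weights.sum(2) * torch.linspace(-1, 1, logit.size(1), device=logit.device)).sum(1)
        return torch.stack((x, y), dim=1)

    class Planner(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # deliberately tiny encoder-decoder; a real solution needs more capacity
            self.encoder = torch.nn.Sequential(
                torch.nn.Conv2d(3, 16, 5, stride=2, padding=2), torch.nn.ReLU(),
                torch.nn.Conv2d(16, 32, 5, stride=2, padding=2), torch.nn.ReLU())
            self.decoder = torch.nn.Sequential(
                torch.nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), torch.nn.ReLU(),
                torch.nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

        def forward(self, img):
            # img: (B, 3, 96, 128) -> heatmap: (B, 96, 128)
            heatmap = self.decoder(self.encoder(img)).squeeze(1)
            xy = spatial_argmax(heatmap)  # normalized coordinates in [-1, 1]
            # rescale to pixel coordinates (x: 0..127, y: 0..95)
            size = torch.tensor([img.size(3) - 1, img.size(2) - 1],
                                dtype=img.dtype, device=img.device)
            return (xy + 1) / 2 * size

Because the soft argmax is differentiable, the whole model can be trained end-to-end with an L1 or MSE loss between predicted and ground-truth aim-points.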

Vision-Based Driving

Once you have completed everything, use

python -m homework.planner [TRACK_NAME] -v

to drive with your CNN planner and controller.

[figure: the CNN planner driving in SuperTuxKart]

Grading

We will grade both your controller and planner on the following 6 tracks: zengarden, lighthouse, hacienda, snowtuxpeak, cornfield_crossing, and scotland.

Your controller/planner should complete each track within a certain amount of time. You receive 5% of your grade for completing each track with your low-level controller, and 10% for completing each track with your image-based agent. You may train on all of the above test tracks.

For the last 10%, you’ll need to complete an unseen test track. We chose a relatively easy one. You can test your solution against the grader using

python -m grader homework

Extra credit (up to 10pt)

We will run a small tournament with all submissions; the top 9 submissions will receive 10, 9, 8, … points of extra credit, respectively. The tournament uses several unreleased test tracks.

Submission

Once you have finished the assignment, create a submission bundle using

python bundle.py [YOUR UT ID]

and submit the zip file online. If you want to double-check that your zip file was properly created, you can grade it again using

python -m grader [YOUR UT ID].zip

Running your assignment on Google Colab

You might need a GPU to train your models. You can get a free one on Google Colab. We provide you with an IPython notebook that can get you started on Colab for each homework. Follow the instructions below to use it.


Honor code

This assignment should be solved individually.

What interaction with classmates is allowed?

What interaction is not allowed?

Ways students failed in past years (do not do this):

Installation and setup

Installing Python 3

Go to https://www.python.org/downloads/ to download Python 3. Alternatively, you can install a Python distribution such as Anaconda. Please select Python 3 (not Python 2).

Installing the dependencies

Install all dependencies using

pip install -r requirements.txt

Note: On some systems, you might be required to use pip3 instead of pip for Python 3.

If you’re using conda, use

conda env create -f environment.yml

Manual installation of PyTorch

Go to https://pytorch.org/get-started/locally/, then select the stable PyTorch build, your OS, package (pip if you installed Python 3 directly, conda if you installed Anaconda), Python version, and CUDA version. Run the provided command. Note that CUDA is not required; you can select CUDA = None if you don’t have a GPU or don’t want to do GPU training locally. We will provide instructions for doing remote GPU training on Google Colab for free.

Manual installation of the Python Imaging Library (PIL)

The easiest way to install PIL is through pip/pip3 or conda.

pip install -U Pillow

There are a few important considerations when using PIL. First, make sure that your OS uses libjpeg-turbo and not the slower libjpeg (all modern Ubuntu versions do by default). Second, if you’re frustrated with slow image transformations in PIL, use Pillow-SIMD instead:

CC="cc -mavx2" pip install -U --force-reinstall Pillow-SIMD

The CC="cc -mavx2" prefix is only needed if your CPU supports AVX2 instructions. pip will most likely complain a bit about missing dependencies; install them through conda or your favorite package manager (apt, brew, …).