daf@uiuc.edu, daf@illinois.edu

13:00 - 14:15, or 1.00 pm - 2.15 pm (in old-fashioned time)

WF

1404 Siebel Center

**TAs:**

Tanmay Gangwani gangwan2@illinois.edu

Jiajun Lu jlu23@illinois.edu

Jason Rock jjrock2@illinois.edu

Anirud Yadav ayadav4@illinois.edu

- DAF: Mon 10-11 and Fri 2:30 - 3:30
- Jason: Mon 11-12 and Tues 1:30 - 2:30
- Jiajun: Tues 2:30 - 3:30 and Wed 5 - 6
- Anirud: Wed 1-2 and Thur 4 - 5
- Tanmay: Mon 4 - 5 and Fri 4 - 5

**DAF:** Mon 14h00 - 15h00, Fri 14h00 - 15h00

or swing by my office (3310 Siebel) and see if I'm busy

Evaluation is by homeworks and a take-home final.

I will shortly post a policy on collaboration and plagiarism.

You should do this homework in groups of up to three; details of how to submit have been posted on Piazza. You **must** use TensorFlow.

Details and description are subject to minor changes.

- Obtain (or write! but this isn't required) TensorFlow code for a stacked denoising autoencoder. Train this autoencoder on the MNIST dataset, using only the MNIST training set. You should stack at least three layers.
- We now need to determine how well this autoencoder works.
For each image in the MNIST test dataset, compute the residual error
of the autoencoder. This is the difference between the true image
and the reconstruction of that image by the autoencoder. It is an
image itself. Prepare a figure showing the mean residual error and the
first five principal components of the residual errors. Each is an image. You should
preserve signs (i.e. the mean residual error may have negative as
well as positive entries). The way to show these images most
informatively is to use a mid-gray value for zero, then darker
values for more negative image values and lighter values for more
positive values. The scale you choose matters. You should show
- the mean and the five principal components on the same gray scale for all six images, chosen so that the largest absolute value over all six images maps to full dark or full light, and
- the mean and the five principal components on a gray scale chosen for each image separately.
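The residual-error analysis above can be sketched in plain numpy, assuming you already have the test images and their reconstructions from your trained TensorFlow autoencoder; the array names `images` and `recons` here are hypothetical stand-ins (random arrays replace real reconstructions in the example):

```python
import numpy as np

def residual_summary(images, recons, n_components=5):
    """Mean residual error and leading principal components of the residuals.

    images, recons: (N, 28*28) float arrays -- flattened MNIST test images
    and their autoencoder reconstructions. Signs are preserved throughout.
    """
    residuals = images - recons                 # (N, 784), signed
    mean_residual = residuals.mean(axis=0)      # (784,)
    centered = residuals - mean_residual
    # SVD of the centered residuals gives the principal components
    # (rows of vt) without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_residual, vt[:n_components]     # (784,), (5, 784)

def to_gray(img, scale):
    """Map signed values to [0, 1] with zero at mid-gray 0.5.

    scale: the absolute value that maps to full dark (0) or full light (1).
    """
    return np.clip(0.5 + img / (2.0 * scale), 0.0, 1.0)

# Example with random stand-ins for real reconstructions:
rng = np.random.default_rng(0)
images = rng.random((100, 784))
recons = rng.random((100, 784))
mean_res, pcs = residual_summary(images, recons)

panels = [mean_res] + list(pcs)                 # six images total
# (a) one shared scale across all six panels:
shared = max(np.abs(p).max() for p in panels)
shared_gray = [to_gray(p, shared) for p in panels]
# (b) a gray scale chosen per image:
per_image_gray = [to_gray(p, np.abs(p).max()) for p in panels]
```

Each entry of `shared_gray` or `per_image_gray` reshapes to a 28x28 image for display; on the per-image scale, each panel's largest absolute value lands at full dark or full light.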

- Obtain (or write! but this isn't required) TensorFlow code for a variational autoencoder. Train this autoencoder on the MNIST dataset, using only the MNIST training set.
- We now need to determine how well the codes produced by this
autoencoder can be interpolated.
- For 10 pairs of MNIST test images of the **same** digit, selected at random, compute the code for each image of the pair. Now compute 7 evenly spaced linear interpolates between these codes, and decode the results into images. Prepare a figure showing these interpolates. Lay out the figure so each pair is a row: on the left of the row is the first test image; then the interpolate closest to it; and so on; ending with the second test image. You should have 10 rows and 9 columns of images.
- For 10 pairs of MNIST test images of **different** digits, selected at random, do the same: compute the code for each image of the pair, compute 7 evenly spaced linear interpolates between these codes, and decode the results into images. Prepare a figure laid out as above, again with 10 rows and 9 columns of images.
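The interpolation step can be sketched as follows; `encode` and `decode` are hypothetical stand-ins for your trained VAE's encoder and decoder (identity functions in the example), and each row matches the 9-column layout described above:

```python
import numpy as np

def interpolate_codes(z0, z1, n_interp=7):
    """Return n_interp evenly spaced linear interpolates strictly between z0 and z1."""
    ts = np.linspace(0.0, 1.0, n_interp + 2)[1:-1]   # 7 interior points
    return np.stack([(1.0 - t) * z0 + t * z1 for t in ts])

def interpolation_row(img0, img1, encode, decode):
    """One row of the figure: img0, 7 decoded interpolates, img1 -- 9 images."""
    z0, z1 = encode(img0), encode(img1)
    decoded = [decode(z) for z in interpolate_codes(z0, z1)]
    return [img0] + decoded + [img1]

# Example with identity stand-ins for the real encoder/decoder:
encode = decode = lambda x: x
img0 = np.zeros(784)
img1 = np.ones(784)
row = interpolation_row(img0, img1, encode, decode)   # 9 images
```

Stacking 10 such rows (one per randomly selected pair) gives the full 10 x 9 grid; interpolating in code space and then decoding, rather than interpolating the pixels, is the point of the exercise.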
