COMP6258: Differentiable Programming and Deep Learning

2023-24


Maintained by Professor Jonathon Hare and Dr Antonia Marcu.


Welcome

Welcome to the homepage for the ECS COMP6258 Differentiable Programming and Deep Learning module.

Differentiable programming and deep learning have revolutionised numerous fields in recent years. We’ve witnessed improvements in everything from computer vision through speech analysis to natural language processing as a result of the advent of cheap GPGPU compute, coupled with large datasets and some neat algorithms. More broadly, the idea of ‘Differentiable Programming’, in which we define entire programs as compositions of differentiable operations which can then be optimised to fit data, looks set to become a new norm in how we use computers.
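
To make that idea concrete, here is a minimal sketch (our own illustration, not part of the assigned materials) of a tiny differentiable program in PyTorch: a couple of differentiable operations whose parameters are fitted to data by gradient descent.

import torch

# Toy data from y = 3x + 1 with a little noise
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

# The "program": two learnable parameters composed with differentiable operations
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

optimiser = torch.optim.SGD([w, b], lr=0.1)
for step in range(200):
    optimiser.zero_grad()
    loss = torch.mean((x * w + b - y) ** 2)  # differentiable loss
    loss.backward()                          # gradients via automatic differentiation
    optimiser.step()                         # gradient-based parameter update

print(w.item(), b.item())  # should end up close to 3 and 1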

This module will look at how differentiable programming works, from theoretical foundations right through to practical implementation. We’ll study key aspects such as automatic differentiation, look at models for deep learning such as convolutional and recurrent neural networks, and consider current research in depth. Along the way we’ll also look at aspects of biology and neuroscience, and see how ideas from these fields feed into current research.

The overall aim of this module is not to teach you to train pre-existing models (although you will learn to do that!), but rather to equip you with the fundamental skills to understand and implement models and ideas that are currently being developed by researchers. We intend to give you the knowledge needed to understand new ideas as they are published, and the ability to constructively criticise different approaches and identify their limitations.

As a word of warning, this is a mathematical module: the predominant focus is on models that can be optimised via gradient methods. You need a good grasp of linear (matrix) algebra and matrix calculus, as well as the fundamentals of machine learning, probability and statistics. You will also need to be comfortable with Python programming and with numeric/matrix libraries such as numpy or pytorch. You’ll also be expected to read and try to understand scientific papers along the way.

Lectures and assigned reading

This year the lectures for this course will be given by Professor Jonathon Hare (email), Dr Antonia Marcu (email) and Dr Shoaib Ehsan (email). We have a capable team of PhD students to facilitate the lab sessions and run some of our guest lectures.

There will be three lectures each week: Mondays at 3PM, Wednesdays at 9AM and Fridays at 10AM. Labs take place for 8 weeks, starting in week 2, from 4PM - 6PM on Tuesdays in Zepler L3. The lectures and labs will all take place in person.

If you take part in this module we expect you to turn up to the lectures and get involved - asking questions and provoking discussion is positively encouraged. Some of the lecture slots will be used for “seminars” where we will discuss and work through a scientific paper in detail; you will need to prepare for these by reading the paper carefully in advance. Some of the slots will be used for a series of guest lectures covering a range of topics.

The current working timetable/plan is below, and illustrates the topics we intend to cover, but this will evolve as the course progresses. Many of the lectures are coupled with assigned reading materials that you should read before the lecture takes place. This will broaden your understanding of the topic whilst giving you the skills required to read and understand the key points from recent research literature. The lectures are broadly broken into three groups: fundamentals (weeks 1-5), architectures/models (weeks 5-8), and advanced topics (weeks 9-12).

Week Date Location Topic Handouts Reading Material Lecture Video
1 29-Jan 02/1085 Lecture: Differentiable Programming: How does pre-university calculus relate to AI and the future of computer programming? diffprog-handouts.pdf Chapter 1 of Jon’s unfinished book Panopto link
  31-Jan 46/2005 Lecture: Introduction to the module, coursework, labs & quizzes. intro-handouts.pdf   Panopto link
  02-Feb 27/2001 Lecture: Review of fundamentals mlreview-handouts.pdf CH 3 of Michael Nielsen’s Book Panopto link; Extra info on tensors
2 05-Feb 02/1085 Lecture: The Power of Differentiation differentiate-handouts.pdf Chapter 3 of Jon’s unfinished book Panopto Link
  07-Feb 46/2005 Lecture: Automatic Differentiation autograd-handouts.pdf Automatic differentiation in PyTorch Panopto Link
  09-Feb 27/2001 Lecture: Backpropagation backprop-handouts.pdf Learning representations by back-propagating errors Panopto link
3 12-Feb 02/1085 Lecture: Optimisation optimisation-handouts.pdf Adam: A Method for Stochastic Optimization Panopto link
  14-Feb 46/2005 Lecture: Going Deep: Universal approximation, overfitting and regularisation deepnetworks-handouts.pdf Dropout: A Simple Way to Prevent Neural Networks from Overfitting Panopto Link
  16-Feb 27/2001 Lecture: Convolutional Networks Convolution-handouts.pdf Handwritten Digit Recognition with a Back-Propagation Network Panopto Link
4 19-Feb 02/1085 Lecture: Network Architectures for image classification Architectures-handouts.pdf ImageNet Classification with Deep Convolutional Neural Networks, Striving for Simplicity: The All Convolutional Net, Very Deep Convolutional Networks for Large-Scale Image Recognition, Going Deeper with Convolutions, Deep Residual Learning for Image Recognition Panopto Link
  21-Feb 46/2005 Lecture: Network Architectures for image classification (II) as above   Panopto Link
  23-Feb 27/2001 Lecture: Embeddings Embeddings-handout.pdf Efficient Estimation of Word Representations in Vector Space Panopto Link
5 26-Feb 02/1085 Lecture: Recurrent Neural Networks rnn-handout.pdf The Unreasonable Effectiveness of Recurrent Neural Networks Panopto Link
  28-Feb 46/2005 Lecture: LSTMs and GRUs lstm-handout.pdf Recurrent Neural Network Regularization Panopto link
  01-Mar 27/2001 Guest Lecture: Experiment design      
6 04-Mar 02/1085 TBC: Applications 1      
  06-Mar 46/2005 TBC: Applications 2      
  08-Mar 27/2001 TBC: Applications 3      
7 11-Mar 02/1085 TBC: Applications 4      
  13-Mar 46/2005 Lecture: Auto-encoders, unsupervised learning and self-supervision vaes-handout.pdf Blog Post on Autoencoders Panopto Link
  15-Mar 27/2001 Lecture: Differentiable relaxations (sampling, etc.) relaxation-handout.pdf   Panopto Link
8 18-Mar 02/1085 Lecture: Generative Models Part 1: Differentiable Generator Networks gans-handout.pdf   Panopto Link
  20-Mar 46/2005 Lecture: Generative Models Part 2: Variational Autoencoders gans-handout.pdf Auto-Encoding Variational Bayes
  22-Mar 27/2001 Lecture: Generative Models Part 3: Generative Adversarial Networks gans-handout.pdf GANs, DCGANs Panopto Link
9 22-Apr 02/1085 Lecture: Attention attention-handout.pdf Attention Is All You Need Panopto Link
  24-Apr 46/2005 Guest Lecture: Emergent Communication      
  26-Apr 27/2001 Seminar: More on the Transformer   Attention Is All You Need, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale Panopto Link
10 29-Apr 02/1085 Lecture: Understanding and comparing representations     Panopto Link
  01-May 46/2005 Seminar: Set prediction   Featurewise Sort Pooling, Deep Set Prediction Networks Panopto Link
  03-May 27/2001 Seminar: Biases in Gradient Descent   The Implicit Bias of Gradient Descent on Separable Data, Gradient Starvation: A Learning Proclivity in Neural Networks Panopto Link
11 06-May 02/1085 BANK HOLIDAY      
  08-May 46/2005 Seminar: Self-supervised learning   Barlow Twins, A Simple Framework for Contrastive Learning of Visual Representations, Masked Autoencoders Are Scalable Vision Learners Panopto link
  10-May 27/2001 Guest Lecture: Graph Networks     Panopto Link
12 13-May 02/1085 Seminar: I’m a Deep Learner AMA     Panopto Link
  15-May 46/2005 NO LECTURE      
  17-May 27/2001 NO LECTURE      

Assorted topic lectures

These are bonus lectures/talks, on topics requested by students in previous years, that you can watch. If there are additional topics that you would like covered, then let us know.

Topic Description Handouts/slides Video
Distributed Learning How can you distribute large models and data over many machines? This is a huge topic, but I made two lectures for advanced machine learning on it (which I’ve also made available here in case you’re not taking it) which cover the basics of the hardware bottlenecks and the software mitigations for them. Interactive slides and handouts Part 1, Part 2
Attention is (possibly) all you need Recent trends, particularly in models for mining textual data, have used “attentional” mechanisms to get breakthrough performance and move away from recurrent networks; what is this attention and how does it work?   link
Neural architecture search A few people have asked how you design a network architecture; that’s quite a difficult question as it relies on a lot of intuition (possibly with some inspiration from biology) and trial & error. There is an alternative though… Why not let the network design itself? There are a number of approaches to what is called Neural Architecture Search, but most use horribly inefficient Reinforcement Learning, so we’ll just take a little look at a nifty differentiable approach called “DARTS”.   link
Hardware Considerations Deep networks typically require power-hungry hardware and lots of memory. Can you reduce the requirements and optimise for lower-powered hardware?   link

Labs

For 8 of the weeks (starting in week 2) we are organising a 2-hour lab session in which you will need to complete a series of worksheets. The worksheets have been designed to put the theory covered in the lectures into context, and to equip you with practical skills in implementing and training differentiable programs. A team of PhD-student demonstrators will be available in the lab to help you with any questions you might have about the topics you are working on.

40% of the marks for the module are for lab work. Each of the 8 lab sessions will be accompanied by an additional assessed exercise for you to work through in your own time. You will have to work through the exercises by yourself and succinctly write up your findings. You will submit your answers/findings/workings for all the assessed exercises to Handin in week 11 for marking (7th May, 16:00). Each of the 8 exercises will be worth 5% of your overall module mark. We recommend that you do each exercise as soon as possible after the corresponding lab session, rather than leaving them all to the end.

Labs will start in the second week (6th Feb) and run from 4-6PM on Tuesday afternoons. The labs take place physically in a computer room (the Zepler L3 labs) with the demonstrator team (and Jon & Antonia when possible). The demonstrators can offer advice on both the labs and the group coursework; however, you should not ask them about the assessed lab exercises that you complete after the lab.

The full lab schedule is below:

Week Date Location Topic Exercise Link
1 30-Jan NO LAB    
2 06-Feb Zepler L3 Introducing PyTorch Lab 1 Exercise
3 13-Feb Zepler L3 Automatic Differentiation Lab 2 Exercise
4 20-Feb Zepler L3 Optimisation Lab 3 Exercise
5 27-Feb Zepler L3 Implementing simple Neural Networks using PyTorch and Torchbearer Lab 4 Exercise
6 05-Mar Zepler L3 Implementing and training Convolutional Neural Networks using PyTorch and Torchbearer Lab 5 Exercise
7 12-Mar Zepler L3 Using pretrained models and transfer learning Lab 6 Exercise
8 19-Mar Zepler L3 Recurrent Networks, Sequence Prediction and Embeddings Lab 7 Exercise
9 23-Apr Zepler L3 Autoencoders and Deep Generative Models Lab 8 Exercise
10 30-Apr NO LAB    
11 07-May NO LAB    
12 14-May NO LAB    

Note: the worksheet links currently point to last year’s versions. Please don’t be surprised if we make some updates before each session! We’re also actively updating the assessed exercises and will release these nearer the time.

Online Quizzes

There will be two assessed online quizzes; we are planning for these to be on the 28th Feb and the 15th May. They will be available on Blackboard for a 24-hour period, and once started you must complete each one within one hour. The quizzes must be taken independently, and you should not share questions or answers with others.

Coursework assignment

Information on the coursework assignment (worth 40% of the module) is here.

Where to get additional help

Talk to us! You are more than welcome to arrange to meet to discuss issues related to the course during lab sessions or by appointment. The lab sessions are also facilitated by a team of our PhD students, who are experts in the deep learning / differentiable programming field in their own right (many of them have published work in this space, or are close to doing so). We can be reached via Teams, or by Jon’s email, Antonia’s email or Shoaib’s email.

Copyright ©2023 The University of Southampton. All rights reserved.