
Welcome to the machine learning course. This course is intended for master's students who are comfortable with programming and have a basic background in linear algebra and calculus. But no worries, we will also review the necessary math concepts along the way.

➤ What is machine learning?

While there are plenty of definitions of machine learning, let's keep it simple: machine learning is about algorithms that learn models from data (and experience). The latter term, experience, refers to a subfield of machine learning known as reinforcement learning, which we will not cover in this lecture.
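
To make the phrase "learning a model from data" concrete, here is a minimal sketch in Python using scikit-learn. The library, dataset, and model choice are merely illustrative assumptions; the lecture builds up its own examples step by step.

```python
# A minimal sketch of "learning a model from data":
# fit a k-nearest-neighbours classifier on the classic Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)            # data: feature vectors and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)  # the "model" to be learned
model.fit(X_train, y_train)                  # learning: fitting the model to the data
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```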

➤ How is this lecture structured?

The lecture follows a flipped-classroom concept originally used by Christina Kratsch in her data science lecture. The course consists of three parts:

  • a theory part, which you read independently at home each week,
  • a practical part, which we work through together on-site each week, and
  • a small project, which you complete independently.

Specifically, we will deal with the following content (note that the lecture materials are work in progress and will continuously change and evolve; we will likely cover only part of the topics listed):

Part 0: Because an index always starts with zero

  1. Motivation and latest developments - why understanding and using machine learning is an essential future skill

Part 1: Getting to know the tools and the math

  1. Definitions, tools and basic math concepts - a tale about vectors and tensors
  2. Fundamental principles and challenges of machine learning - how to evaluate machine learning models without cheating

Part 2: Classical ML and building your first machine learning system

  1. Classical machine learning: Vectors and neighbours - developing our first machine learning models
  2. Ensembles and AutoML - combining and optimizing models

Part 3: From linear models to the first deep learning architectures

  1. Linear models and loss functions
  2. Classical neural networks and back-propagation
  3. Convolutional neural networks for computer vision and the foundation model principle

Part 4: Large language models - everything is a token

  1. Large language model fundamentals
  2. Attention and transformers
  3. On the edge of research: multimodal vision models, in-context learning, etc.

➤ Goals of the lecture

  1. You will understand the basic concepts of machine learning.
  2. You will be able to select appropriate ML models for a given application.
  3. You will have the skills to implement machine learning pipelines for basic problems.
  4. You will be able to evaluate these pipelines and improve them further.

➤ Why, but really why, is this lecture material in English?

If you work as a machine learning engineer (or a software engineer in general), your main working language (at least when reading) will be English. An international community of PhD students around the globe maintains and further develops the open-source tools for machine learning. Furthermore, reading current publications and staying up to date is an essential part of the job and simply cannot be done in German. Therefore, all documentation and material for the lecture will be in English.

➤ How will participation be evaluated?

All relevant information regarding the evaluation can be found on Moodle.

➤ Further literature

  1. Simon Prince, “Understanding Deep Learning” (webpage w/ videos, pdf)
  2. Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola, “Dive into Deep Learning” (webpage w/ notebooks)