Seminar: Automated Machine Learning and Hyperparameter Optimization

Seminar, University of Tübingen, 2023

Have you ever trained an ML model and wondered how to efficiently optimize its hyperparameters? Then this seminar is for you!

TL;DR: AutoML aims to support ML users (and ML researchers) by automating parts of the ML workflow. In this seminar, we will read classic as well as recent research papers in the field of AutoML, with a focus on Bayesian optimization and AutoML systems for tabular data.

Course title: Automated Machine Learning and Hyperparameter Optimization
Course ID: ML4501f
Time: Wednesdays, 12 c.t. – 14
Participants: up to 20
Location: in person at Maria-von-Linden-Straße 6; mostly the ground-floor lecture hall

Why should you attend this seminar?

Besides practicing your scientific communication skills, you will also

  • learn about key contributions in the field of AutoML
  • be able to discuss recent research in the field of hyperparameter optimization and AutoML systems
  • gain experience in reading, understanding and presenting research papers


We strongly recommend that you know the foundations of machine learning and deep learning. Ideally, you also have some hands-on experience applying ML, so you can get the most out of this seminar.


The seminar focuses on understanding the underlying concepts of modern AutoML methods. Since the field is growing rapidly, we will focus on only a few topics this semester.

Here is a tentative meeting schedule:

18.10: Initial meeting [slides]
25.10 (seminar room, 3rd floor): How to give a good presentation / Bayesian optimization for HPO
01.11: No meeting
08.11: HPO I (#1, #2)
15.11: No meeting
22.11: HPO I (#3, #4)
29.11: No meeting
06.12: AutoML for Tabular (#6, #7)
13.12: AutoML for Tabular (#8), HPO II (#9)
20.12: HPO II (#10, #11)
27.12–03.01: No meeting (holidays)
10.01: Intro: NAS
17.01: NAS (#12, #13)
24.01: NAS (#14), AutoML for Tabular (#7)
31.01: No meeting
07.02: NAS (#15, #16)

Paper List:

Hyperparameter Optimization I
  1. Practical Bayesian Optimization of Machine Learning Algorithms Jasper Snoek, Hugo Larochelle, Ryan P. Adams; NeurIPS 2012
  2. BOHB: Robust and Efficient Hyperparameter Optimization at Scale Stefan Falkner, Aaron Klein, Frank Hutter; ICML 2018
  3. Scalable Global Optimization via Local Bayesian Optimization David Eriksson, Michael Pearce, Jacob Gardner, Ryan D. Turner, Matthias Poloczek; NeurIPS 2019
  4. PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning Neeratyoy Mallik, Edward Bergman, Carl Hvarfner, Danny Stoll, Maciej Janowski, Marius Lindauer, Luigi Nardi, Frank Hutter; NeurIPS 2023
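As a primer for the papers above, here is a minimal sketch of the Bayesian-optimization loop they build on: fit a Gaussian-process surrogate to the configurations evaluated so far, then pick the next configuration by maximizing expected improvement. The toy objective (a made-up "validation loss" over log10 learning rate), the RBF lengthscale, the search range, and the candidate grid are all illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def rbf(a, b, lengthscale=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # GP posterior mean and std at query points Xq, given observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xq, Xq) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI for minimization: how much do we expect to improve on `best`?
    ei = np.empty_like(mu)
    for i, (m, s) in enumerate(zip(mu, sigma)):
        z = (best - m) / s
        cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
        pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
        ei[i] = (best - m) * cdf + s * pdf
    return ei

def objective(lr):
    # Hypothetical validation loss as a function of log10(learning rate).
    return (lr + 2.0) ** 2 + 0.1 * np.sin(5.0 * lr)

rng = np.random.default_rng(0)
X = rng.uniform(-4.0, 0.0, size=3)            # a few random initial configs
y = np.array([objective(x) for x in X])
grid = np.linspace(-4.0, 0.0, 200)            # candidate configurations

for _ in range(10):                           # BO loop: fit GP, maximize EI, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(f"best log10(lr): {X[np.argmin(y)]:.2f}, loss: {y.min():.3f}")
```

Real systems such as those in papers #1–#4 replace each piece of this sketch: better surrogates, acquisition optimization instead of a fixed grid, and multi-fidelity scheduling.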

AutoML for Tabular Data

  5. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, Alexander Smola; arXiv 2020
  6. Revisiting Deep Learning Models for Tabular Data Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko; NeurIPS 2021
  7. Why do tree-based models still outperform deep learning on tabular data? Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux; NeurIPS 2022
  8. TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter; ICLR 2023


Hyperparameter Optimization II

  9. HEBO: Pushing The Limits of Sample-Efficient Hyper-parameter Optimisation Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, Haitham Bou-Ammar; JAIR 2022
  10. Towards Learning Universal Hyperparameter Optimizers with Transformers Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Richard Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc’Aurelio Ranzato, Sagi Perel, Nando de Freitas; NeurIPS 2022
  11. PFNs4BO: In-Context Learning for Bayesian Optimization Samuel Müller, Matthias Feurer, Noah Hollmann, Frank Hutter; ICML 2023


Neural Architecture Search

  12. DARTS: Differentiable Architecture Search Hanxiao Liu, Karen Simonyan, Yiming Yang; ICLR 2019
  13. Understanding and Simplifying One-Shot Architecture Search Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, Quoc Le; ICML 2018
  14. HAT: Hardware-Aware Transformers for Efficient Natural Language Processing Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, Song Han; ACL 2020
  15. Neural Architecture Search without Training Joe Mellor, Jack Turner, Amos Storkey, Elliot J Crowley; ICML 2021
  16. Zero-Cost Proxies for Lightweight NAS Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, Nicholas Donald Lane; ICLR 2021

How will the seminar work?

We will meet each week (with a few exceptions). In the first two weeks, we will start with introductory lectures on automated machine learning, empirical experimentation and how to critically review and present research papers. After that, each week, we will have presentations, followed by discussions.

Other important information

Registration: Please register on ILIAS. The number of participants is limited and the registration opens on September 29th, noon. There will be a waiting list (please unregister to let other people take your place). Please come to the first lecture even if you are still on the waiting list. If you’re enrolled and don’t show up, your spot will be freed for someone on the waiting list.

Grading/Presentations: Grades will be based on your presentation, your slides, and your active participation in the seminar. Further details will be discussed in the intro sessions.