Seminar: Adaptation and Fine-Tuning of Foundation Models

Foundation models are pretrained once on broad and heterogeneous data distributions; however, real-world deployment typically requires reliable performance in narrow, specialized domains. This seminar will discuss fundamental and recent approaches to address the following question: How do we specialize foundation models to a target domain under computational and data constraints?

To explore this question, we will read and discuss foundational and state-of-the-art methods for systematically adapting and fine-tuning large models, including parameter-efficient fine-tuning, prompt engineering, test-time adaptation, context augmentation, and more.
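To give a flavor of one of these directions, below is a minimal, illustrative sketch of the parameter-efficient fine-tuning idea behind LoRA: instead of updating a full weight matrix, a small low-rank update is learned on top of the frozen pretrained weights. All shapes, ranks, and initializations here are hypothetical choices for illustration, not from any specific model or library.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA-style PEFT).
# The pretrained weight W stays frozen; only the small factors
# A and B are trained, so the effective weight is W + (alpha/r) * B @ A.
rng = np.random.default_rng(0)
d, k, r = 64, 64, 4                      # layer dims and low-rank bottleneck (illustrative)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init -> adapter starts as a no-op

def adapted_forward(x, alpha=8.0):
    """Forward pass through the frozen weight plus the low-rank adapter."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((2, k))
# With B initialized to zero, the adapted model matches the pretrained one:
assert np.allclose(adapted_forward(x), x @ W.T)

# Trainable parameters shrink from d*k to r*(d+k):
print(f"full: {d * k} params, LoRA: {r * (d + k)} params")
```

The key design point, which we will discuss in the seminar, is that the number of trainable parameters scales with the rank `r` rather than with the full layer size.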

| | |
|------|------|
| Course Title | Adaptation and Fine-Tuning of Foundation Models |
| Course ID | INF-MSc-102 |
| Registration | drop me an email |
| ECTS | 3 |
| Time | [tentative] Wednesdays, 10:15-11:45 |
| Language | English |
| # Participants | max. 10 |
| Location | in-person, JvF25; seminar room, 4th floor |
| Organized by | Katharina Eggensperger w/ Amir Rezaei Balef, Mykhailo Koshil |

Requirements

Familiarity with foundations of deep learning, including transformer architectures and in-context learning.

Topics

| Date | Content |
|------|---------|
| 08.04.2026 | Intro I |
| 15.04.2026 | Intro II |
| tba | Paper sessions |
| 22.04.2026 | Final presentation |

Stay tuned while we compile a list of papers.

How will the seminar work?

We will meet regularly throughout the semester. In the first few weeks, we will start with introductory lectures on adapting foundation models and on how to critically review and present research papers. After that, we will have several sessions with paper presentations, each followed by a discussion.

Other Important Information

Grading/Presentations: Grades will be based on your presentation, slides, active participation, and a short report. Further details will be discussed in the introductory sessions.