Events

Upcoming events

Illuminating Generative AI: Mapping Knowledge in Large Language Models

Abhilasha Ravichander, Max Planck Institute for Software Systems
03 Dec 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Millions of everyday users interact with technologies built on generative AI, such as voice assistants, search engines, and chatbots. While these AI-based systems are increasingly integrated into modern life, they can also magnify risks, inequities, and dissatisfaction when providers deploy unreliable systems. A primary obstacle to building more reliable systems is the opacity of the underlying large language models—we lack a systematic understanding of how models work, where critical vulnerabilities may arise, why they arise, and how models must be redesigned to address them. In this talk, I will first describe my work investigating large language models to illuminate when models acquire knowledge and capabilities. Then, I will describe my work on building methods for data transparency in large language models, which allow practitioners to make sense of the information available to models. Finally, I will describe work on understanding why large language models produce incorrect knowledge, and the implications for building the next generation of responsible AI systems.

LaissezCloud: A Resource Exchange Platform for the Public Cloud

Tejas Harith, Max Planck Institute for Software Systems
16 Dec 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Qualifying Exam
Resource and carbon efficiency in public clouds is poor and costs are high. This affects operators and tenants alike. We argue that the current rigid cloud pricing interface is to blame. Improving efficiency requires dynamic coordination between operator and tenants, but also among tenants. While such coordination is comparatively easy in clusters where the operator and applications belong to a single administrative domain, it is challenging in the cloud setting, where tenants and operators are mutually untrusted and the set of workloads and applications is broad. We address this with LaissezCloud, a new cloud resource management platform that enables continuous resource re-negotiation and re-allocation. Rather than agreeing on a fixed price when resources are allocated, LaissezCloud operators and tenants continuously re-negotiate resource prices during execution. Our key insight is that pricing provides a narrow waist that enables the cloud to align incentives between operator and tenants as well as among tenants. Tenants decide the price they are willing to pay based on the current value of a resource during execution. Operators in turn price in current demand as well as infrastructure concerns, such as current power availability, cooling capacity, or carbon intensity. We demonstrate that LaissezCloud scales to typical cloud infrastructure, that applications are easy to adapt to dynamic resource negotiation, and that LaissezCloud improves resource efficiency.
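
To make the negotiation loop in the abstract concrete, here is a minimal Python sketch; the pricing rules, data layout, and function names are illustrative assumptions of this write-up, not LaissezCloud's actual interface. The operator sets a spot price from current demand and carbon intensity, and each tenant bids its current valuation; resources go to tenants whose bids meet the price.

    # Illustrative sketch of continuous price re-negotiation, loosely following
    # the abstract; the pricing rule and API are assumptions, not LaissezCloud's.
    def operator_price(demand, capacity, base=1.0, carbon_intensity=0.0):
        """Price rises with current utilization and with carbon intensity."""
        utilization = demand / max(capacity, 1)
        return base * (1 + utilization) * (1 + carbon_intensity)

    def renegotiate(tenants, capacity):
        """One round: each tenant bids its current value; bids >= price win."""
        demand = sum(ten["requested"] for ten in tenants)
        price = operator_price(demand, capacity)
        allocation = {}
        for ten in sorted(tenants, key=lambda t: -t["value"]):
            if ten["value"] >= price and capacity >= ten["requested"]:
                allocation[ten["name"]] = ten["requested"]
                capacity -= ten["requested"]
        return price, allocation

    tenants = [{"name": "A", "requested": 4, "value": 3.0},
               {"name": "B", "requested": 8, "value": 1.2}]
    print(renegotiate(tenants, capacity=10))  # B is priced out this round

In each round, a tenant whose valuation drops below the spot price simply releases its resources, which is how demand, power, and carbon signals feed back into allocation.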

The Skolem Enigma: A Century-Old Question at the Heart of Computation

Joël Ouaknine, Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
It has been described as the most important problem whose decidability is still open: the Skolem Problem asks to determine whether a given integer linear recurrence sequence (such as the Fibonacci numbers) has a zero term. This deceptively simple century-old problem arises across a wide range of topics in computer science and mathematics, from program verification and automata theory to number theory and logic. This talk traces the history of the Skolem Problem: from the early 1930s to the current frontier of one of the most enduring open questions in computer science.
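
To make the problem statement concrete: a linear recurrence sequence is fixed by its coefficients and initial terms, and the Skolem Problem asks whether any term is zero. The toy Python sketch below (an illustration added here, not material from the talk) simply searches the first few thousand terms; because decidability is open, no bound is known to suffice in general, which is exactly the difficulty.

    # Toy illustration of the Skolem Problem. This is NOT a decision
    # procedure: no terminating algorithm is known for the general case.
    def has_zero_term(coeffs, initial, bound=10_000):
        """Search u(n) = coeffs[0]*u(n-1) + ... + coeffs[k-1]*u(n-k)
        for a zero among the first `bound` terms; return its index or None."""
        terms = list(initial)
        for i, t in enumerate(terms):
            if t == 0:
                return i
        for n in range(len(terms), bound):
            nxt = sum(c * t for c, t in zip(coeffs, reversed(terms)))
            if nxt == 0:
                return n
            terms = terms[1:] + [nxt]
        return None  # inconclusive: a zero may still occur beyond `bound`

    # u(n) = u(n-1) + u(n-2) with u(0) = 2, u(1) = -1 hits zero at n = 3.
    print(has_zero_term([1, 1], [2, -1]))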

Recent events

Boosting — Empowering Citizens with Behavioral Science

Ralph Hertwig, Max Planck Institute for Human Development
(hosted by Krishna Gummadi)
26 Nov 2025, 12:15 pm - 1:15 pm
Kaiserslautern building G26, room 111
AICS Distinguished Speaker Colloquium
Behavioral public policy came to the fore with the introduction of nudging, which aims to steer behavior while maintaining freedom of choice. Responding to critiques of nudging (e.g., that it does not promote agency and relies on benevolent choice architects), other behavioral policy approaches focus on empowering citizens. Here we review boosting, a behavioral policy approach that aims to foster people's agency, self-control, and ability to make informed decisions. It is grounded in evidence from behavioral science showing that human decision making is not as notoriously flawed as the nudging approach assumes. We argue that addressing the challenges of our time—such as climate change, pandemics, and the threats to liberal democracies and human autonomy posed by digital technologies and choice architectures—calls for fostering capable and engaged citizens as a first line of response to complement slower, systemic approaches. Boosts can be delivered through different means, one being digital tools; the talk will give a few illustrative examples.

Curriculum Design for Reinforcement Learning Agents

Georgios Tzannetos, Max Planck Institute for Software Systems
24 Nov 2025, 2:30 pm - 3:30 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Proposal
Reinforcement learning (RL) enables agents to learn complex behaviours and excel in various domains such as robotics, gaming, and large language models (LLMs). Despite these successes, RL algorithms remain inefficient, rendering the training process challenging and limiting their broader application in real-world settings. Motivated by the importance of curricula in pedagogical domains, there is growing interest in leveraging curriculum strategies when training agents in challenging environments. However, existing methods for automatic curriculum design typically require domain-specific hyperparameter tuning, rely on expensive optimization procedures, or have limited theoretical underpinnings. To address these limitations, we design several curriculum strategies grounded in the pedagogical concept of the Zone of Proximal Development. Theoretical and empirical analysis across multiple domains affirms the effectiveness of our strategies. In particular, our strategies are shown to improve the training efficiency of agents under different learning objectives, including uniform performance, target performance, and constrained performance. Finally, addressing a real-world LLM deployment scenario, we show how our curriculum strategy improves the inference-time efficiency of LLMs by compressing models' chain-of-thought reasoning.
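
As a rough illustration of a curriculum in the spirit of the Zone of Proximal Development (a sketch under assumed interfaces, not the actual algorithm from this proposal): prefer tasks the agent neither always solves nor always fails, e.g. those whose estimated success probability is closest to 1/2.

    import math, random

    # Toy ZPD-style task selection: prefer tasks of intermediate difficulty.
    # Success-probability estimates are assumed to come from recent rollouts.
    def pick_task(success_prob, temperature=0.1):
        """Sample a task, weighting those whose estimated success rate is
        closest to 0.5 (neither too easy nor too hard for the agent)."""
        weights = {task: math.exp(-abs(p - 0.5) / temperature)
                   for task, p in success_prob.items()}
        tasks, w = zip(*weights.items())
        return random.choices(tasks, weights=w, k=1)[0]

    estimates = {"easy": 0.95, "medium": 0.55, "hard": 0.05}
    print(pick_task(estimates))  # most likely "medium"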

How to Manage a Hotel Desk? Stable Perfect Hashing in the Incremental Setting

Guy Even, MPI-INF - D1
05 Nov 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Many modern applications—from large-scale databases to network routers and genome repositories—depend on maintaining large dynamic sets of elements. Efficient management of these sets requires data structures that can quickly support insertions and deletions, answer queries such as "Is this element in the set?" or "What is the value associated with this element?", and assign distinct short keys to elements as the set grows.

The field of data structures is concerned with specifying functionality, abstracting computational models, designing efficient representations, and analyzing the running time and memory requirements of algorithms over these representations. Classical data structures developed for representing sets include dictionaries, retrieval data structures, filters, and perfect hashing.

In this talk, I will explore these issues through the lens of perfect hashing, a method for assigning each element a distinct identifier, or hashcode, with no collisions. We will focus on how to simultaneously satisfy several competing design goals:

Small space: using near-optimal memory proportional to the set’s size.

Fast operations: supporting constant-time insertions, deletions, and queries.

Low redundancy: keeping the range of hashcodes close to the set’s size.

Stability: ensuring that each element’s hashcode remains unchanged while it stays in the set.

Extendability: adapting automatically to unknown or growing data sizes.
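
As a toy illustration of how the stability, low-redundancy, and extendability goals above can coexist (illustrative only, not the data structure from the talk): like a hotel desk, assign each arriving element the smallest free slot, keep that slot fixed while the element stays, and recycle slots freed by departures, so identifiers stay within a range close to the current set size.

    import heapq

    # Toy "hotel desk" allocator: stable, low-redundancy identifiers for a
    # dynamic set. The talk's structure additionally targets near-optimal
    # space and constant-time operations via hashing.
    class StableCodes:
        def __init__(self):
            self.code = {}       # element -> its fixed hashcode
            self.free = []       # min-heap of recycled slots
            self.next_slot = 0   # grows only when no recycled slot exists

        def insert(self, x):
            if x not in self.code:
                self.code[x] = heapq.heappop(self.free) if self.free else self._grow()
            return self.code[x]

        def _grow(self):
            self.next_slot += 1
            return self.next_slot - 1

        def delete(self, x):
            heapq.heappush(self.free, self.code.pop(x))  # slot becomes reusable

    d = StableCodes()
    print(d.insert("alice"), d.insert("bob"))  # 0 1
    d.delete("alice")
    print(d.insert("carol"))                   # 0 (recycled); "bob" keeps 1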

This talk is based on joint work with Ioana Bercea.
