Events

Upcoming events

Illuminating Generative AI: Mapping Knowledge in Large Language Models

Abhilasha Ravichander Max Planck Institute for Software Systems
03 Dec 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Millions of everyday users are interacting with technologies built with generative AI, such as voice assistants, search engines, and chatbots. While these AI-based systems are being increasingly integrated into modern life, they can also magnify risks, inequities, and dissatisfaction when providers deploy unreliable systems. A primary obstacle to having more reliable systems is the opacity of the underlying large language models: we lack a systematic understanding of how models work, where critical vulnerabilities may arise, why they happen, and how models must be redesigned to address them. In this talk, I will first describe my work on investigating large language models to illuminate when models acquire knowledge and capabilities. Then, I will describe my work on building methods to enable data transparency for large language models, allowing practitioners to make sense of the information available to models. Finally, I will describe work on understanding why large language models produce incorrect knowledge, and its implications for building the next generation of responsible AI systems.

Fragments of Hilbert’s Program

Joël Ouaknine Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Hilbert’s dream of mechanising all of mathematics was dealt fatal blows by Gödel, Church, and Turing in the 1930s, almost a hundred years ago. Paradoxically, assisted and automated theorem proving have never been as popular as they are today! Motivated by algorithmic problems in discrete dynamics, non-linear arithmetic, and program analysis, we examine the decidability of various logical theories over the natural numbers, and discuss a range of open questions at the intersection of logic, automata theory, and number theory. No prior knowledge of any of these fields will be assumed.

Recent events

How to Manage a Hotel Desk? Stable Perfect Hashing in the Incremental Setting

Guy Even MPI-INF - D1
05 Nov 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Many modern applications—from large-scale databases to network routers and genome repositories—depend on maintaining large dynamic sets of elements. Efficient management of these sets requires data structures that can quickly support insertions and deletions, answer queries such as "Is this element in the set?" or "What is the value associated with this element?", and assign distinct short keys to elements as the set grows.

The field of data structures is concerned with specifying functionality, abstracting computational models, designing efficient representations, and analyzing the running time and memory requirements of algorithms over these representations. Classical data structures developed for representing sets include dictionaries, retrieval data structures, filters, and perfect hashing.

In this talk, I will explore these issues through the lens of perfect hashing, a method for assigning each element a distinct identifier, or hashcode, with no collisions. We will focus on how to simultaneously satisfy several competing design goals:

Small space: using near-optimal memory proportional to the set’s size.

Fast operations: supporting constant-time insertions, deletions, and queries.

Low redundancy: keeping the range of hashcodes close to the set’s size.

Stability: ensuring that each element’s hashcode remains unchanged while it stays in the set.

Extendability: adapting automatically to unknown or growing data sizes.
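The stability and low-redundancy goals above can be illustrated with a toy "hotel desk" sketch: assign each arriving element a small integer key, never change it while the element stays, and recycle keys of departed elements so the key range tracks the current set size. (This is an illustrative dictionary-backed sketch of the interface only, not the hashing-based data structures discussed in the talk; all names are hypothetical.)

```python
class StableKeyAssigner:
    """Toy allocator of distinct short keys ("room numbers").

    Stability: an element's key never changes while it is in the set.
    Low redundancy: freed keys are recycled, so the range of keys in
    use stays close to the current set size.
    """

    def __init__(self):
        self._key_of = {}   # element -> assigned key
        self._free = []     # keys recycled from deleted elements
        self._next = 0      # next never-used key

    def insert(self, x):
        """Insert x and return its key (idempotent for present elements)."""
        if x in self._key_of:
            return self._key_of[x]
        if self._free:
            key = self._free.pop()          # reuse a freed key
        else:
            key = self._next                # otherwise extend the range
            self._next += 1
        self._key_of[x] = key
        return key

    def delete(self, x):
        """Remove x and recycle its key."""
        self._free.append(self._key_of.pop(x))

    def query(self, x):
        """Return x's key, or None if x is not in the set."""
        return self._key_of.get(x)
```

A dictionary makes the bookkeeping trivial but costs far more space than the near-optimal structures the talk considers; the sketch only pins down the behavior that "stable" and "extendable" promise to the user.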

This talk is based on joint work with Ioana Bercea.

Accountable Multi-Agent Sequential Decision Making

Stelios Triantafyllou Max Planck Institute for Software Systems
30 Oct 2025, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Proposal
As AI agents increasingly engage in high-stakes decision making, it is essential to assess their accountability in ways that are both fair and interpretable. This involves explaining expected or realized outcomes of multi-agent systems and attributing responsibility for those outcomes to the participating agents. Addressing these challenges is key to fostering societal trust and easing the adoption of AI decision makers. This thesis investigates accountability in multi-agent sequential decision making. We develop methods to attribute responsibility for observed outcomes and overall system performance, design efficient approximation algorithms for otherwise intractable attribution problems, and introduce causal tools to explain how agents’ decisions influence outcomes. Together, these contributions establish theoretical foundations and practical tools for accountable decision making, drawing on and integrating insights from causality, multi-agent reinforcement learning, and game theory.

A Logical Foundation For Multi-Language Interoperability

Brigitte Pientka McGill University
(hosted by Derek Dreyer)
28 Oct 2025, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
SWS Colloquium
Today’s software systems are complex and often made up of parts written in different programming languages with different computational and memory management strategies. This allows programmers to combine different languages and choose the most suitable one for a given problem. It also allows the gradual migration of existing projects from one language to another, or to reuse existing source code.

While this flexibility offers clear advantages, it also introduces significant challenges, as different programming languages may have fundamentally different implementations and may use different runtime environments, which are hard to combine. As a consequence, composing parts written in different languages often results in complex interfaces between languages, insufficient flexibility, poor performance, and hard-to-diagnose errors. This lack of interoperability support can lead to subtle bugs and security vulnerabilities, especially in large or long-lived systems.

Existing foundations for interoperability often assume that we compile languages into a common low-level language. This is, however, not always realistic. We propose a logical foundation for interoperability where we retain the static and operational semantics of each part. It is grounded in adjoint logic -- a logic that unifies a wide collection of logics through the up-shift and down-shift modalities. We give a Curry-Howard interpretation of this logic where we use the down-shift modality to model foreign function calls, and the up-shift modality to model runtime code generation and execution. Our system is parametric in a user-defined collection of languages and their accessibility relation, which controls how languages interact with each other, allowing the interoperability of various combinations of languages. We sketch the statics and an operational semantics together with properties such as accessibility safety, which ensures that languages respect their user-defined boundaries, alongside type safety. Finally, if time permits, we outline how we have used this foundation to reason about the interoperability between languages with fundamentally different runtime implementations, such as the interoperability between a quantum and a purely functional language.
