Events

Upcoming events

Where are all the fixed points?

Benjamin Kaminski Fachrichtung Informatik - Saarbrücken
14 Jan 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Fixed points are a recurring theme in computer science and related fields: shortest paths, game equilibria, semantics and verification, social choice theory, or dynamical systems are only a few of many instances. Various fixed point theorems - e.g. the famous Kleene fixed point theorem - state that fixed points emerge as limits of suitably seeded fixed point iterations. 

I will showcase in this talk a purely algebraic way of reasoning about fixed points which we call the algebra of iterative constructions (AIC). It led us to discover novel fixed point theorems as well as to prove well-established ones (e.g. Kleene, Tarski-Kantorovich, and k-induction) fully algebraically. As for the novel fixed point theorems, we will (given a suitable setting) obtain a method that maps any point to two canonical corresponding fixed points of a function by way of a limit of some fixed point iteration.
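The iterative view the abstract builds on can be made concrete in a few lines. The sketch below is purely illustrative (it is not the speaker's AIC formalism, and the graph, weights, and function names are made-up examples): it computes a least fixed point Kleene-style, by iterating a monotone function from a seed until the value stops changing, with single-source shortest paths as the instance.

```python
# Minimal sketch: a fixed point as the limit of a suitably seeded iteration.
# The graph and helper names are illustrative assumptions, not from the talk.

def iterate_to_fixed_point(f, seed):
    """Repeatedly apply f until the value no longer changes."""
    x = seed
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example instance: single-source shortest paths as a least fixed point.
INF = float("inf")
edges = {("s", "a"): 1, ("s", "b"): 4, ("a", "b"): 2}
nodes = {"s", "a", "b"}

def bellman_step(dist):
    """One relaxation round; monotone in the distance map."""
    new = dict(dist)
    for (u, v), w in edges.items():
        new[v] = min(new[v], dist[u] + w)
    return new

seed = {n: (0 if n == "s" else INF) for n in nodes}
dist = iterate_to_fixed_point(bellman_step, seed)
# dist == {"s": 0, "a": 1, "b": 3}
```

The seed plays the role of the "suitable seeding" the abstract mentions: starting from the bottom element (all distances infinite except the source) makes the limit the least fixed point.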

Our algebra is mechanized in Isabelle/HOL. Within our algebra, Isabelle's Sledgehammer tool finds proofs of fixed point theorems fully automatically, whereas it cannot find such proofs relying only on Isabelle's standard libraries.

From the audience I would like to learn: Do you encounter fixed points or fixed point problems in your work? Do you perhaps work on algorithms for solving fixed point problems?

I look forward to talking to you!

The Skolem Problem: a century-old enigma at the heart of computation

Joël Ouaknine Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
It has been described as the most important problem whose decidability is still open: the Skolem Problem asks how to determine algorithmically whether a given integer linear recurrence sequence (such as the Fibonacci numbers) has a zero term. This deceptively simple question arises across a wide range of topics in computer science and mathematics, from program verification and automata theory to number theory and logic. This talk traces the history of the Skolem Problem: from the early 1930s to the current frontier of one of the most enduring open questions in computer science.
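The problem statement is easy to make concrete. The sketch below (an illustration, not part of the talk; the function name and the search bound are assumptions) checks a finite prefix of a linear recurrence sequence for a zero term. Searching any fixed prefix is trivial; what remains open is an algorithm that decides the question for all indices.

```python
# Hedged sketch: bounded search for a zero term of an integer linear
# recurrence u_n = c1*u_{n-1} + ... + ck*u_{n-k}. Deciding the question
# for ALL n is the open Skolem Problem; the bound N here is arbitrary.

def has_zero_term(coeffs, init, N=1000):
    """Return True iff some u_0, ..., u_{N-1} equals zero."""
    u = list(init)
    if 0 in u[:N]:
        return True
    while len(u) < N:
        k = len(coeffs)
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-k:]))))
        if u[-1] == 0:
            return True
    return False

# Fibonacci (seed 1, 1) stays positive, so no zero appears in the prefix;
# the shifted seed (0, 1) has a zero term immediately at n = 0.
print(has_zero_term([1, 1], [1, 1]))   # False (within the search bound)
print(has_zero_term([1, 1], [0, 1]))   # True
```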

Recent events

LaissezCloud: A Resource Exchange Platform for the Public Cloud

Tejas Harith Max Planck Institute for Software Systems
16 Dec 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Qualifying Exam
Resource and carbon efficiency in public clouds is poor and costs are high. This affects operators and tenants alike. We argue that the current rigid cloud pricing interface is to blame. Improving efficiency requires dynamic coordination between operator and tenants, but also among tenants. While this is comparatively easy for clusters where operator and applications belong to a single administrative domain, the cloud setting makes it challenging: tenants and operators are mutually untrusted, and the set of workloads and applications is broad. We address this with LaissezCloud, a new cloud resource management platform that enables continuous resource re-negotiation and re-allocation. Rather than agreeing to a fixed price on resource allocation, LaissezCloud operators and tenants continuously re-negotiate resource prices during execution. Our key insight is that pricing provides a narrow waist that enables the cloud to align incentives between operator and tenants as well as among tenants. Tenants decide the price they are willing to pay based on the current value of a resource during execution. Operators in turn price in current demand as well as infrastructure concerns, such as current power availability, cooling capacity, or carbon intensity. We demonstrate that LaissezCloud scales to typical cloud infrastructure, that applications are easy to adapt to dynamic resource negotiation, and that LaissezCloud improves resource efficiency.
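The pricing-as-narrow-waist idea can be pictured with a toy loop. The sketch below is a deliberately simplified illustration, not LaissezCloud's actual protocol: all names, the one-unit-per-tenant demand model, and the price-adjustment rule are assumptions. The operator raises the price when demand exceeds capacity and lowers it otherwise, while each tenant requests a unit only while the price is below its current valuation.

```python
# Toy sketch of continuous price re-negotiation (illustrative only; not
# the actual LaissezCloud mechanism). Tenants bid based on their current
# valuation; the operator adjusts the price toward market clearing.

def renegotiate(valuations, capacity, price=1.0, step=0.1, rounds=200):
    """Adjust price until demand (one unit per willing tenant) meets capacity."""
    for _ in range(rounds):
        demand = sum(1 for v in valuations if v >= price)
        if demand > capacity:
            price += step            # oversubscribed: raise the price
        elif demand < capacity:
            price = max(0.0, price - step)  # idle capacity: lower it
        else:
            break
    return price, [v >= price for v in valuations]

# Three tenants valuing one unit at 5, 3, and 1; capacity for two units.
# The two highest-value tenants end up allocated.
price, allocated = renegotiate([5.0, 3.0, 1.0], capacity=2)
```

In the real setting the operator's side of the loop would also fold in infrastructure signals (power availability, cooling capacity, carbon intensity), which this toy omits.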

Collaborative Prediction via Tractable Agreement Protocols

Ira Globus-Harris Cornell University
(hosted by Manuel Gomez Rodriguez)
09 Dec 2025, 10:30 am
Kaiserslautern building G26, room 111
SWS Colloquium
Designing effective collaboration between humans and AI systems is crucial for leveraging their complementary abilities in complex decision tasks. But how should agents possessing unique knowledge—like a human expert and an AI model—interact to reach decisions better than either could alone? In this talk, I will introduce a collection of tools based in machine learning theory and algorithmic game theory which allow us to develop efficient "collaboration protocols", where parties iteratively exchange only low-dimensional information—their current predictions or best-response actions—without needing to share underlying features, and which guarantee that the agents' final predictions are provably competitive with an optimal predictor with access to their joint features. Together, these results offer a new foundation for building systems that achieve the power of pooled knowledge through tractable interaction alone.
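A toy version of the low-dimensional exchange described above can be sketched as follows. This is purely illustrative and is not the speaker's protocol: the data, the helper names, and the update rule (each agent re-predicts as the empirical mean over historical rows consistent with its own feature and the partner's announced prediction) are all assumptions. The point it demonstrates is that only predictions cross the boundary between agents, never raw features.

```python
# Toy prediction-exchange sketch (illustrative assumption, not the talk's
# protocol). Agent A observes feature a, agent B observes feature b; they
# exchange only scalar predictions about the outcome y.

from statistics import mean

# Made-up shared history of (feature_a, feature_b, outcome) rows.
rows = [(0, 0, 0.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 1.0)]

def predict_a(a0, partner_pred=None):
    """A's prediction: mean y over rows matching its feature, optionally
    restricted to rows on which B would have announced partner_pred."""
    cand = [(a, b, y) for a, b, y in rows if a == a0]
    if partner_pred is not None:
        cand = [r for r in cand if abs(predict_b(r[1]) - partner_pred) < 1e-9]
    return mean(y for _, _, y in cand)

def predict_b(b0, partner_pred=None):
    """Symmetric rule for B."""
    cand = [(a, b, y) for a, b, y in rows if b == b0]
    if partner_pred is not None:
        cand = [r for r in cand if abs(predict_a(r[0]) - partner_pred) < 1e-9]
    return mean(y for _, _, y in cand)

# Instance (a0, b0) = (1, 1): A announces its solo prediction, then B
# refines its own using only that number, never A's feature.
p_a = predict_a(1)
p_b = predict_b(1, p_a)
```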

Illuminating Generative AI: Mapping Knowledge in Large Language Models

Abhilasha Ravichander Max Planck Institute for Software Systems
03 Dec 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Millions of everyday users are interacting with technologies built with generative AI, such as voice assistants, search engines, and chatbots. While these AI-based systems are being increasingly integrated into modern life, they can also magnify risks, inequities, and dissatisfaction when providers deploy unreliable systems. A primary obstacle to having more reliable systems is the opacity of the underlying large language models—we lack a systematic understanding of how models work, where critical vulnerabilities may arise, why they are happening, and how models must be redesigned to address them. In this talk, I will first describe my work in investigating large language models to illuminate when models acquire knowledge and capabilities. Then, I will describe my work on building methods that enable data transparency for large language models, allowing practitioners to make sense of the information available to models. Finally, I will describe work on understanding why large language models produce incorrect knowledge, and implications for building the next generation of responsible AI systems.

Archive