Events

Upcoming events

Pushing the Boundaries in Stateless Model Checking

Iason Marmanis Max Planck Institute for Software Systems
28 Jan 2026, 3:30 pm - 4:30 pm
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
Stateless model checking (SMC) verifies a concurrent program by systematically exploring its state space. To combat the state-space explosion problem, SMC is frequently combined with Dynamic Partial Order Reduction (DPOR), a technique that avoids exploring executions that are deemed equivalent to one another. Still, DPOR’s scalability is limited by the size of the input program.

This thesis improves scalability by (i) providing direct support for common coding patterns that would otherwise have to be handled inefficiently, and (ii) combining DPOR with other state-space reduction techniques. Key to our contributions is a DPOR framework that generalizes an existing state-of-the-art algorithm.
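
As a toy illustration of the equivalence that DPOR exploits (the example and code are mine, not from the thesis): when two threads write to disjoint variables, every interleaving reaches the same final state, so an exploration only needs one representative per equivalence class. A naive enumeration in Python makes this concrete:

    # Hypothetical two-thread program: each thread is a tuple of (variable, value) writes.
    # The threads touch disjoint variables, so all interleavings are equivalent.
    T1 = (("x", 1), ("x", 2))
    T2 = (("y", 7),)

    def interleavings(a, b):
        # All interleavings of a and b that preserve each thread's program order.
        if not a or not b:
            yield a + b
            return
        for rest in interleavings(a[1:], b):
            yield (a[0],) + rest
        for rest in interleavings(a, b[1:]):
            yield (b[0],) + rest

    schedules = list(interleavings(T1, T2))
    final_states = set()
    for schedule in schedules:
        store = {}
        for var, val in schedule:
            store[var] = val
        final_states.add(tuple(sorted(store.items())))

    # 3 interleavings, but a single final state: x = 2, y = 7.
    print(len(schedules), "interleavings,", len(final_states), "distinct final state(s)")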

The Skolem Problem: a century-old enigma at the heart of computation

Joël Ouaknine Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
It has been described as the most important problem whose decidability is still open: the Skolem Problem asks how to determine algorithmically whether a given integer linear recurrence sequence (such as the Fibonacci numbers) has a zero term. This deceptively simple question arises across a wide range of topics in computer science and mathematics, from program verification and automata theory to number theory and logic. This talk traces the history of the Skolem Problem: from the early 1930s to the current frontier of one of the most enduring open questions in computer science.
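
As a small illustration (my own sketch, not part of the talk): absent a known decision procedure, one can only search a linear recurrence for a zero term up to some finite bound.

    def first_zero(coeffs, initial, bound):
        # Search the linear recurrence u(n+k) = c1*u(n+k-1) + ... + ck*u(n),
        # given its k initial terms, for a zero term among u(0), ..., u(bound-1).
        # This is only a bounded search: the Skolem Problem asks whether it can be
        # replaced by an algorithm that always terminates with a definite answer.
        seq = list(initial)
        for n, term in enumerate(seq):
            if term == 0:
                return n
        while len(seq) < bound:
            nxt = sum(c * u for c, u in zip(coeffs, reversed(seq[-len(coeffs):])))
            seq.append(nxt)
            if nxt == 0:
                return len(seq) - 1
        return None  # no zero found up to the bound; says nothing beyond it

    # u(n+2) = u(n+1) + u(n) with u(0) = 2, u(1) = -1 gives 2, -1, 1, 0, ...
    print(first_zero([1, 1], [2, -1], bound=100))   # -> 3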

Pushing the Boundary on Automated Modular Floating-Point Verification

Rosa Abbasi Max Planck Institute for Software Systems
27 Mar 2026, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
Floating-point numbers are commonly used to represent real numbers in computer systems, and they appear in many domains, including embedded systems, machine learning, and scientific computing. Despite their widespread use, they pose difficulties. Floating-point numbers and operations typically suffer from roundoff errors, making computations over floating-point numbers inaccurate with respect to a real-valued specification. Moreover, the IEEE 754 floating-point standard, a fundamental element in formalizing floating-point arithmetic for today's computers, presents additional challenges due to special values and the resulting unintuitive behaviors.

This thesis makes three main contributions that address existing gaps in automated reasoning about floating-point arithmetic, making it easier for developers and researchers to understand, verify, and trust the floating-point computations in their programs.

First, we introduce the first floating-point support in a deductive verifier for the Java programming language. Our support in the KeY verifier automatically handles floating-point arithmetic and transcendental functions. We achieve this by combining delegation to external SMT solvers with rule-based reasoning within KeY, exploiting the complementary strengths of both approaches. As a result, this approach can prove functional floating-point properties for realistic programs.

Second, inspired by KeY's treatment of method calls and the need for a scalable roundoff error analysis, we present the first modular optimization-based roundoff error analysis for non-recursive procedural floating-point programs. Our key idea is to achieve modularity while maintaining reasonable accuracy by automatically computing procedure summaries that are a function of the input parameters. Technically, we extend an existing optimization-based roundoff error analysis, showing how to use first-order Taylor approximations to compute precise procedure summaries and how to integrate them to obtain end-to-end roundoff error bounds.

Third, our experience using SMT solvers to discharge KeY's floating-point verification conditions revealed unexpected performance behavior, motivating a systematic study of floating-point reasoning in SMT solvers. We propose a metamorphic testing approach that uses semantics-preserving rewrite rules, focusing on floating-point special values, to uncover unexpected performance behavior in SMT solvers' handling of floating-point formulas, such as an increase in solving time when SMT queries are simplified. Using real-world test inputs, our approach identifies such performance bugs in every SMT solver tested.
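
A minimal illustration of the roundoff problem mentioned above (my own example, not from the thesis): even a short floating-point sum drifts from its real-valued specification.

    from fractions import Fraction

    # Real-valued specification: adding one tenth ten times yields exactly 1.
    xs = [0.1] * 10

    float_result = sum(xs)                              # IEEE 754 double precision
    real_result = sum(Fraction(1, 10) for _ in xs)      # exact rational arithmetic

    roundoff = abs(Fraction(float_result) - real_result)
    print(float_result)       # 0.9999999999999999
    print(float(roundoff))    # accumulated roundoff error, on the order of 1e-16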

Recent events

On Predictive Accuracy and Fairness in Human-AI Teams

Nastaran Okati Max Planck Institute for Software Systems
26 Jan 2026, 1:30 pm - 2:30 pm
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
Human-AI teams are increasingly deployed across high-stakes domains such as medical diagnosis, content moderation, and recruitment. These teams strive to leverage the strengths of both human decision-makers and AI models, while mitigating their respective weaknesses. For instance, such hybrid systems can combine the efficiency and quality of AI-generated decisions with human experience and domain knowledge. Moreover, they can be free of statistical biases in AI models and cognitive biases inherent to human decision-making. The combined system's performance can hence surpass that of the human or the AI model in isolation, achieving human-AI complementarity.

Despite the prevalence of such hybrid teams, most existing approaches remain heuristic-driven, ignore human users and their error models, and hence fail to optimize the human-AI team performance. My thesis strives to close these gaps and fully unlock this potential for complementarity by (i) optimizing for human-AI collaboration, rather than model-centric performance—ensuring these models best support and complement human decision-makers who utilize them—and (ii) designing efficient post-processing algorithms that ensure fairness of high-stakes decisions made by these teams—thereby supporting people who are affected by such decisions.

Where are all the fixed points?

Benjamin Kaminski Fachrichtung Informatik - Saarbrücken
14 Jan 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Fixed points are a recurring theme in computer science and related fields: shortest paths, game equilibria, semantics and verification, social choice theory, or dynamical systems are only a few of many instances. Various fixed point theorems - e.g. the famous Kleene fixed point theorem - state that fixed points emerge as limits of suitably seeded fixed point iterations. 

I will showcase in this talk a purely algebraic way of reasoning about fixed points which we call the algebra of iterative constructions (AIC). It has led us to discover novel fixed point theorems as well as to prove well-established ones (e.g. Kleene, Tarski-Kantorovich, and k-induction) fully algebraically. As for the novel fixed point theorems: given a suitable setting, we will obtain a method that maps any point to two canonical corresponding fixed points of a function by way of the limit of a fixed point iteration.
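
As a toy illustration of a suitably seeded fixed point iteration (my example, not from the talk): the states reachable in a small graph form the least fixed point of a monotone map, obtained as the limit of the Kleene iteration starting from the empty set.

    # The states reachable from node 0 are the least fixed point of the monotone
    # map F(S) = {0} | post(S); the Kleene iteration computes it as the limit of
    # the chain {} <= F({}) <= F(F({})) <= ...
    EDGES = {0: {1}, 1: {2}, 2: {2}, 3: {0}}

    def F(s):
        return {0} | {succ for node in s for succ in EDGES.get(node, set())}

    def kleene_lfp(f, bottom=frozenset()):
        # Iterate f from the seed until the chain stabilizes at a fixed point.
        current = bottom
        while True:
            nxt = frozenset(f(current))
            if nxt == current:
                return current
            current = nxt

    print(sorted(kleene_lfp(F)))   # [0, 1, 2]: node 3 is unreachable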

Our algebra is mechanized in Isabelle/HOL. Using it, Isabelle's Sledgehammer tool finds proofs of fixed point theorems fully automatically, whereas it fails to find such proofs when relying only on Isabelle's standard libraries.

From the audience I would like to learn: Do you encounter fixed points or fixed point problems in your work? Do you perhaps work on algorithms for solving fixed point problems?

I look forward to talking to you!

LaissezCloud: A Resource Exchange Platform for the Public Cloud

Tejas Harith Max Planck Institute for Software Systems
16 Dec 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Qualifying Exam
Resource and carbon efficiency in public clouds is poor and costs are high. This affects operators and tenants alike. We argue that the current rigid cloud pricing interface is to blame. Improving efficiency requires dynamic coordination between operator and tenants, but also among tenants. While such coordination is comparatively easy in clusters where the operator and all applications belong to a single administrative domain, it is challenging in the cloud setting, with mutually untrusted tenants and operators and a broad set of workloads and applications.

We address this with LaissezCloud, a new cloud resource management platform that enables continuous resource re-negotiation and re-allocation. Rather than agreeing to a fixed price when resources are allocated, LaissezCloud operators and tenants continuously re-negotiate resource prices during execution. Our key insight is that pricing provides a narrow waist that enables the cloud to align incentives between operator and tenants as well as among tenants. Tenants decide the price they are willing to pay based on the current value of a resource during execution. Operators in turn price in current demand as well as infrastructure concerns, such as current power availability, cooling capacity, or carbon intensity.

We demonstrate that LaissezCloud scales to typical cloud infrastructure, that applications are easy to adapt to dynamic resource negotiation, and that LaissezCloud improves resource efficiency.
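
A toy sketch of the kind of price-mediated re-negotiation described above (tenant names, numbers, and the loop itself are my assumptions, not from the talk): the operator raises the unit price while demand exceeds capacity, and each tenant keeps bidding only while the resource is still worth that price to it.

    # Each tenant states how many units it wants and the highest per-unit price it
    # is currently willing to pay; the operator raises the price while total
    # demand exceeds capacity.
    CAPACITY = 100   # units of some resource, e.g. cores

    tenants = {
        "batch-job": (80, 1.0),        # (units wanted, max price per unit)
        "web-frontend": (60, 3.0),
    }

    def negotiate(price=0.5, step=1.25, rounds=20):
        demand = sum(units for units, limit in tenants.values())
        for _ in range(rounds):
            demand = sum(units for units, limit in tenants.values() if limit >= price)
            if demand <= CAPACITY:
                break
            price *= step              # demand exceeds capacity: raise the unit price
        return price, demand

    price, demand = negotiate()
    print(f"clearing price ~{price:.2f}/unit, demand {demand}/{CAPACITY} units")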

Archive