Events 2025

Illuminating Generative AI: Mapping Knowledge in Large Language Models

Abhilasha Ravichander University of Washington (hosted by Krishna Gummadi)
04 Mar 2025, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
CIS@MPG Colloquium
Millions of everyday users are interacting with technologies built with generative AI, such as voice assistants, search engines, and chatbots. While these AI-based systems are being increasingly integrated into modern life, they can also magnify risks, inequities, and dissatisfaction when providers deploy unreliable systems. A primary obstacle to building reliable systems is the opacity of the underlying large language models: we lack a systematic understanding of how models work, where critical vulnerabilities may arise, why they are happening, and how models must be redesigned to address them. In this talk, I will first describe my work investigating large language models to illuminate when and how they acquire knowledge and capabilities. Then, I will describe my work on building methods that enable greater data transparency for large language models, allowing stakeholders to make sense of the information available to models. Finally, I will describe my work on understanding how this information can get distorted in large language models, and the implications for building the next generation of robust AI systems.

Building the Tools to Program a Quantum Computer

Chenhui Yuan MIT CSAIL (hosted by Catalin Hritcu)
24 Feb 2025, 10:00 pm - 11:00 pm
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
Bringing the promise of quantum computation into reality requires not only building a quantum computer but also correctly programming it to run a quantum algorithm. To obtain asymptotic advantage over classical algorithms, quantum algorithms rely on the ability of data in quantum superposition to exhibit phenomena such as interference and entanglement. In turn, an implementation of the algorithm as a program must correctly orchestrate these phenomena in the states of qubits. Otherwise, the algorithm would yield incorrect outputs or lose its computational advantage. Given a quantum algorithm, what are the challenges and costs of realizing it as a program that can run on a physical quantum computer? In this talk, I answer this question by showing how basic programming abstractions upon which many quantum algorithms rely – such as data structures and control flow – can fail to work correctly or efficiently on a quantum computer. I then show how we can leverage insights from programming languages to re-invent the software stack of abstractions, libraries, and compilers to meet the demands of quantum algorithms. This approach holds out the promise of expressive and efficient tools to program a quantum computer and practically realize its computational advantage.
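The interference and entanglement that a quantum program must orchestrate can be seen in a toy state-vector simulation. This is a generic NumPy sketch for intuition only, not code or tooling from the talk:

```python
import numpy as np

# Single-qubit basis state |0> and the Hadamard gate
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

plus = H @ ket0   # equal superposition of |0> and |1>
back = H @ plus   # amplitudes interfere: the |1> terms cancel, |0> returns
assert np.allclose(back, ket0)

# Entanglement: CNOT applied to (H|0>) tensor |0> yields a Bell state,
# which cannot be factored into two independent single-qubit states
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(plus, ket0)
assert np.allclose(bell, np.array([1, 0, 0, 1]) / np.sqrt(2))
```

A program that mis-sequences such gates destroys exactly these cancellations, which is why correctly orchestrating them is a central concern for quantum software stacks.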

On Fairness, Invariance and Memorization in Machine Decision and Deep Learning Algorithms

Till Speicher Max Planck Institute for Software Systems
24 Feb 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
As learning algorithms become more capable, they are used to tackle an increasingly large spectrum of tasks. Their applications range from understanding images, speech and natural language to making socially impactful decisions, such as determining people's eligibility for loans and jobs. Therefore, it is important to better understand both the consequences of algorithmic decisions and the mechanisms by which algorithms arrive at their outputs. Of particular interest in this regard are fairness, when algorithmic decisions impact people's lives, and the behavior of deep learning algorithms, the most powerful but also most opaque type of learning algorithm. To this end, this thesis makes two contributions: First, we study fairness in algorithmic decision-making. At a conceptual level, we introduce a metric for measuring unfairness in algorithmic decisions based on inequality indices from the economics literature. We show that this metric can be used to decompose the overall unfairness for a given set of users into between- and within-subgroup components and to highlight potential tradeoffs between them, as well as between fairness and accuracy. At an empirical level, we demonstrate the necessity of studying fairness in algorithmically controlled systems by exposing the potential for discrimination enabled by Facebook's advertising platform. In this context, we demonstrate how advertisers can target ads to exclude users belonging to protected sensitive groups, a practice that is illegal in domains such as housing, employment and finance, and we highlight the need for better mitigation methods.
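The inequality-index view of unfairness can be sketched with the generalized entropy index, a standard economics measure that decomposes additively into between- and within-subgroup terms. This is an illustrative reimplementation, not the thesis code; the function names, the example "benefit" values, and the group labels below are invented for the example:

```python
import numpy as np

def ge_index(b, alpha=2):
    """Generalized entropy index of a benefit vector b (alpha not in {0, 1})."""
    b = np.asarray(b, dtype=float)
    return (((b / b.mean()) ** alpha - 1).mean()) / (alpha * (alpha - 1))

def decompose(b, groups, alpha=2):
    """Split total inequality into (between-group, within-group) components."""
    b, groups = np.asarray(b, dtype=float), np.asarray(groups)
    n, mu = len(b), b.mean()
    within = 0.0
    for g in np.unique(groups):
        bg = b[groups == g]
        # Each subgroup's inequality, weighted by its size and relative mean
        within += (len(bg) / n) * (bg.mean() / mu) ** alpha * ge_index(bg, alpha)
    # Between-group term: replace every benefit by its group's mean
    between = ge_index([b[groups == g].mean() for g in groups], alpha)
    return between, within

benefits = [1.0, 1.0, 2.0, 0.5, 1.5, 2.0]   # per-user benefits (illustrative)
groups = ["a", "a", "b", "b", "b", "b"]
between, within = decompose(benefits, groups)
# Additive decomposability: total unfairness = between + within
assert abs(ge_index(benefits) - (between + within)) < 1e-9
```

The between-group term captures disparities like those across protected groups, while the within-group term captures inequality inside each group; the tradeoff between the two is what the metric makes visible.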

The second contribution of this thesis is aimed at better understanding the mechanisms governing the behavior of deep learning algorithms. First, we study the role that invariance plays in learning useful representations. We show that the set of invariances possessed by representations is of critical importance in determining whether they are useful for downstream tasks, more important than many other factors commonly considered to determine transfer performance. Second, we investigate memorization in large language models, which have recently become very popular. By training models to memorize random strings, we uncover a rich and surprising set of dynamics during the memorization process. We find that models undergo two phases during memorization, that strings with lower entropy are harder to memorize, that the memorization dynamics evolve during repeated memorization and that models can recall tokens in random strings with only a very restricted amount of information.
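The entropy effect mentioned above (lower-entropy strings being harder to memorize) refers to a standard quantity that can be computed directly. Here is a generic per-character Shannon-entropy sketch, not the thesis's actual measurement code:

```python
import math
from collections import Counter

def char_entropy(s):
    """Empirical Shannon entropy (bits per character) of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive (low-entropy) string vs. a more varied (high-entropy) one
assert char_entropy("aaaaaaaa") == 0.0
assert char_entropy("abababab") < char_entropy("abcdefgh")
```

In the experiments described above, strings drawn from fewer distinct tokens score lower on measures like this, and those are the strings the models take longer to commit to memory.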

From mechanisms to cognition in neural networks

Erin Grant University College London (hosted by Mariya Toneva)
20 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
Neural networks optimized with generic learning objectives acquire representations that support remarkable behavioural flexibility—from learning from few examples to analogical reasoning—previously seen as uniquely human. While these artificial learning systems simulate how cognitive capacities can emerge through experience, these capacities arise from complex interactions between architecture, learning algorithm, and training data that we struggle to interpret and validate, limiting the value of neural networks as scientific models of cognition. My research addresses this epistemic challenge by connecting high-level computational properties of neural systems to their low-level mechanistic details, making these systems more interpretable and manipulable for science and practice alike. I will present two case studies demonstrating this approach: how meta-learning in neural networks can be reinterpreted through the lens of hierarchical Bayesian inference, and how sparse representations can emerge naturally through the dynamics of learning in neural networks. Through these examples, I'll illustrate how interpreting and analyzing neural networks sheds light on their emergent computational properties, laying the groundwork for a more productive account of how cognitive capacities arise in neural systems.
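The hierarchical-Bayesian reading of meta-learning can be sketched in a toy conjugate-Gaussian setting: a prior shared across tasks (standing in for meta-learned knowledge) is combined with a handful of observations from a new task. All hyperparameter values below are invented for illustration, and the sketch is far simpler than the neural-network setting the talk analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared prior over task means, standing in for meta-learned knowledge
mu0, tau = 2.0, 1.0      # prior mean and standard deviation (assumed)
sigma = 0.5              # within-task observation noise (assumed)

# A new task: few-shot data drawn around an unknown task mean
task_mean = rng.normal(mu0, tau)
x = rng.normal(task_mean, sigma, size=3)

# Conjugate Gaussian posterior: precision-weighted blend of prior and data
precision = 1 / tau**2 + len(x) / sigma**2
post_mean = (mu0 / tau**2 + x.sum() / sigma**2) / precision

# Shrinkage: the posterior mean lies between the prior mean and sample mean
lo, hi = sorted((mu0, x.mean()))
assert lo <= post_mean <= hi
```

Few-shot adaptation then amounts to posterior inference under the shared prior, which is the correspondence the hierarchical-Bayesian reinterpretation draws for meta-learned networks.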

Computational Representations for User Interfaces

Yue Jiang Aalto University (hosted by Adish Singla)
18 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
Traditional "one-size-fits-all" user interfaces (UIs) often fail to provide the necessary adaptability, leading to challenges in accommodating varying contexts and enhancing user capabilities. This talk explores my research on developing intelligent UIs that bridge the gap between static design and dynamic user engagement. My research focuses on facilitating the creation of intelligent UIs that support two key areas: assisting designers in building adaptive systems and capturing user behaviors to enable automatic interface adaptation. In this talk, I will focus on how to develop computational representations that embed domain-specific knowledge into AI models, providing intelligent suggestions while ensuring that designers maintain control over the design process. Additionally, I will discuss how to develop neural models that simulate and predict user behaviors, such as eye tracking on UIs, aligning interactions with users' unique abilities and preferences.

Foundation models of human cognition

Marcel Binz University of Marburg (hosted by Yixin Zou)
13 Feb 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this -- it is a unified system whose processes are deeply intertwined. In this talk, I will present my ongoing work on foundation models of human cognition: models that do not merely predict behavior in a single domain, but instead offer a truly universal take on our mind. Furthermore, I will outline my vision for how to use such behaviorally predictive models to advance our understanding of human cognition, as well as how they can be scaled to naturalistic environments.

Measuring and Improving Fairness and Resilience Across the Blockchain Ecosystem

Lioba Heimbach ETH Zurich (hosted by Krishna Gummadi)
12 Feb 2025, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
CIS@MPG Colloquium
The multi-layer blockchain architecture presents unique challenges, as issues in one layer can amplify or cause problems in another. These layers include the network layer, a peer-to-peer (P2P) network responsible for information dissemination; the consensus layer, where nodes reach agreement on the blockchain’s state; and the application layer, which hosts decentralized finance (DeFi) applications. My research investigates the interactions between these layers and the resulting challenges, with the goal of enhancing fairness and resilience in the blockchain ecosystem. In the first part of this talk, I will explore how financial value originating at the application layer can threaten the consensus layer, focusing on non-atomic arbitrage — arbitrage between on-chain and off-chain exchanges. I will demonstrate how this value, despite originating at the application layer, introduces centralizing forces and security vulnerabilities in the consensus layer. In the second part, I will show how these dynamics operate in the opposite direction. Specifically, I will highlight privacy flaws in Ethereum’s P2P network that threaten the consensus layer by enabling attacks targeting application layer value. As I will demonstrate, the P2P network leaks validator locations. This vulnerability allows malicious validators (i.e., consensus layer participants) to launch targeted attacks on validators handling blocks with significant application layer value and scoop the value from those blocks.

How Do We Evaluate and Mitigate AI Risks?

Maksym Andriushchenko EPFL (hosted by Christof Paar)
11 Feb 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
AI has made remarkable progress in recent years, enabling groundbreaking applications but also raising serious safety concerns. This talk will explore the robustness challenges in deep learning and large language models (LLMs), demonstrating how seemingly minor perturbations can lead to critical failures. I will present my research on evaluating and mitigating AI risks, including adversarial robustness, LLM jailbreak vulnerabilities, and the broader implications of AI safety. By developing rigorous benchmarks, novel evaluation methods, and foundational theoretical insights, my work aims to provide effective safeguards for AI deployment. Ultimately, I advocate for a systematic approach to AI risk mitigation that integrates technical solutions with real-world considerations to ensure the safe and responsible use of AI systems.

Spatio-Temporal AI for Long-term Human-centric Robot Autonomy

Lukas Schmid Massachusetts Institute of Technology (hosted by Bernt Schiele)
10 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
The ability to build an actionable representation of a robot's environment is crucial for autonomy and a prerequisite for a wide variety of applications, ranging from home, service, and consumer robots to social, care, and medical robotics to industrial, agricultural, and disaster-response applications. Notably, a large part of the promise of autonomous robots depends on long-term operation in domains shared with humans and other agents. These environments are typically highly complex, semantically rich, and highly dynamic, with agents frequently moving through and interacting with the scene. This talk presents an autonomy pipeline combining perception, prediction, and planning to address these challenges. We first present methods to detect and represent complex semantics, short-term motion, and long-term changes for real-time robot perception in a unified framework called Khronos. We then show how Dynamic Scene Graphs (DSGs) can represent semantic symbols in a task-driven fashion and facilitate reasoning about the scene, such as the prediction of likely future outcomes based on the data the robot has already collected. Lastly, we show how robots as embodied agents can leverage these actionable scene representations and predictions to complete tasks such as actively gathering data that helps them improve their world models, perception, and action capabilities fully autonomously over time. The presented methods are demonstrated on board fully autonomous aerial, legged, and wheeled robots, run in real-time on mobile hardware, and are available as open-source software.

Leveraging Sociotechnical Security and Privacy to Address Online Abuse

Miranda Wei University of Washington (hosted by Krishna Gummadi)
06 Feb 2025, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
CIS@MPG Colloquium
The prevalence and severity of online abuse are on the rise, from toxic content on social media to image-based sexual abuse, as new technologies are weaponized by people who do harm. Further, this abuse disproportionately harms people already marginalized in society, creating unacceptable disparities in safety and reinforcing oppression. Working in the areas of both computer security and privacy (S&P) and human-computer interaction (HCI), I address online abuse as the next frontier of S&P challenges. In this talk, I discuss my approach to sociotechnical threat modeling that (1) characterizes emerging S&P threats in digital safety, with particular attention to the technical and societal factors at play, (2) evaluates the existing support available to those experiencing online abuse, taking an ecosystem-level perspective, and (3) develops conceptual tools that bridge S&P and HCI towards societally informed S&P research. I conclude by outlining how sociotechnical security and privacy can work towards a world where all people using technology feel safe and connected.

Practical Privacy via New Systems and Abstractions

Kinan Dak Albab Brown University (hosted by Peter Schwabe)
05 Feb 2025, 1:00 pm - 2:00 pm
Virtual talk
CIS@MPG Colloquium
Data privacy has become a focal point for public discourse. In response, data protection and privacy regulations have been enacted across the world, including the GDPR and CCPA, and companies make various promises to end-users in their privacy policies. However, high-profile privacy violations remain commonplace, in part because complying with privacy regulations and policies is challenging for applications and developers. This talk demonstrates how we can help developers achieve privacy compliance by designing new privacy-conscious systems and abstractions. The talk focuses on my work on Sesame (SOSP '24), my system for end-to-end compliance with privacy policies in web applications. To provide practical guarantees, Sesame combines new static analysis for data leakage with advances in memory-safe languages and lightweight sandboxing, as well as standard industry practices like code review. My work in this area also includes K9db (OSDI '23), a privacy-compliant database that supports compliance-by-construction with GDPR-style subject access requests. By creating privacy abstractions at the systems level, we can offer applications privacy guarantees by design, simplifying compliance and improving end-user privacy.

Pushing the Boundaries of Modern Application-Aware Computing Stacks

Christina Giannoula University of Toronto (hosted by Peter Druschel)
03 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
Modern computing systems encounter significant challenges related to data movement in applications such as data analytics and machine learning. Within a compute node, the physical separation of the processor from main memory necessitates retrieving data through a narrow memory bus. In big-data applications running across multiple nodes, data must be exchanged via narrow network interconnects. This movement of data, both within and across compute nodes, causes significant performance and energy overheads in modern and emerging applications. Moreover, today's general-purpose computing stacks overlook the particular data needs of individual applications, missing crucial opportunities for performance optimization. In this talk, I will present a cross-stack approach to designing application-aware computing stacks for cutting-edge applications, enabling new synergies between algorithms, systems software, and hardware. Specifically, I will demonstrate how integrating fine-grained application characteristics, such as input features and data access and synchronization patterns, across the layers of general-purpose computing stacks allows for tailoring stack components to meet an application's specific data needs. This integration enables the stack components to work synergistically to reduce unnecessary or redundant data movement during application execution. I will present a few of my research contributions that propose hardware and software solutions for emerging applications, such as deep learning, capitalizing on the emerging processing-in-memory paradigm. Finally, I will conclude by outlining my future plans to design application-adaptive and sustainable computing stacks that significantly enhance performance and energy efficiency in cutting-edge applications.

Legal Research in the Era of Large Language Models

Christoph Engel Max Planck Institute for Research on Collective Goods
15 Jan 2025, 3:00 pm - 4:00 pm
Kaiserslautern building G26, room 111
SWS Colloquium
In a profound sense, the law is an applied field. It exists because society needs rules to function. Even if these rules are seemingly "bright-line" rules, in limit cases they require interpretation. Even more so if the rule in question confines itself to enunciating a normative program and leaves its implementation to the administration and the judiciary. The traditional response is hermeneutical. The legislator translates the normative intention into words. In that way, it implicitly delegates spelling out what these words mean to the parties and the authorities involved in resolving the concrete conflicts of life. This sketch of the law's mission explains the traditional character of legal research. If a researcher adopts an "inside view", she engages in a division of labor with practicing lawyers. The quintessential product of this research is a "commentary": the researcher summarizes the state-of-the-art thinking about a statutory provision (and maybe proposes an alternative reading). Alternatively, the researcher adopts an "outside view". In the spirit of a social scientist, she treats the law as her object of study. Typical products are more precise definitions of, and empirical investigations into, a class of social problems that legal rules are meant to address, or attempts at finding traces of judicial policy in the jurisprudence of a court. Large language models have the potential to deeply affect all of these strands of legal research. As the potential is more easily discernible for the "outside view", the talk will only briefly illustrate the ways in which LLMs are likely to fuel this strand of legal research. It will drill deeper into the "inside view" and explain how an important part of this research, the summarization of the jurisprudence on a statutory provision (the guarantee of freedom of assembly in the German Constitution), can already be delegated to an LLM today.
It is not difficult to predict that, ten years from now, legal research will look radically different.