Events

Upcoming events

Computational Representations for User Interfaces

Yue Jiang, Aalto University (hosted by Adish Singla)
18 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
Traditional "one-size-fits-all" user interfaces (UIs) often fail to provide the necessary adaptability, leading to challenges in accommodating varying contexts and enhancing user capabilities. This talk explores my research on developing intelligent UIs that bridge the gap between static design and dynamic user engagement. My research focuses on facilitating the creation of intelligent UIs that support two key areas: assisting designers in building adaptive systems and capturing user behaviors to enable automatic interface adaptation. In this talk, ...
Traditional "one-size-fits-all" user interfaces (UIs) often fail to provide the necessary adaptability, leading to challenges in accommodating varying contexts and enhancing user capabilities. This talk explores my research on developing intelligent UIs that bridge the gap between static design and dynamic user engagement. My research focuses on facilitating the creation of intelligent UIs that support two key areas: assisting designers in building adaptive systems and capturing user behaviors to enable automatic interface adaptation. In this talk, I will focus on how to develop computational representations that embed domain-specific knowledge into AI models, providing intelligent suggestions while ensuring that designers maintain control over the design process. Additionally, I will discuss how to develop neural models that simulate and predict user behaviors, such as eye tracking on UIs, aligning interactions with users' unique abilities and preferences.

From mechanisms to cognition in neural networks

Erin Grant, University College London (hosted by Mariya Toneva)
20 Feb 2025, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
CIS@MPG Colloquium
Neural networks optimized with generic learning objectives acquire representations that support remarkable behavioural flexibility—from learning from few examples to analogical reasoning—previously seen as uniquely human. While these artificial learning systems simulate how cognitive capacities can emerge through experience, these capacities arise from complex interactions between architecture, learning algorithm, and training data that we struggle to interpret and validate, limiting the value of neural networks as scientific models of cognition. My research addresses this epistemic challenge by connecting high-level computational properties of neural systems to their low-level mechanistic details, making these systems more interpretable and manipulable for science and practice alike. I will present two case studies demonstrating this approach: how meta-learning in neural networks can be reinterpreted through the lens of hierarchical Bayesian inference, and how sparse representations can emerge naturally through the dynamics of learning in neural networks. Through these examples, I'll illustrate how interpreting and analyzing neural networks sheds light on their emergent computational properties, laying the groundwork for a more productive account of how cognitive capacities arise in neural systems.
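
As background for the first case study, the hierarchical-Bayesian reading of meta-learning treats task-specific parameters \phi_j as draws from a prior governed by shared parameters \theta (a standard formulation, in my notation):

```latex
% Marginal likelihood of task datasets D_1..D_J under shared parameters theta:
p(D_{1:J} \mid \theta) = \prod_{j=1}^{J} \int p(D_j \mid \phi_j)\, p(\phi_j \mid \theta)\, \mathrm{d}\phi_j
% Gradient-based meta-learning (e.g., MAML) approximates each integral with a
% point estimate reached by a gradient step from theta (an empirical-Bayes view):
\hat{\phi}_j(\theta) = \theta + \alpha \nabla_{\phi} \log p\big(D_j^{\mathrm{train}} \mid \phi\big)\Big|_{\phi = \theta}
```

In this reading, the inner loop performs approximate posterior inference over each task's parameters, while the outer loop fits the shared prior.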

On Fairness, Invariance and Memorization in Machine Decision and Deep Learning Algorithms

Till Speicher, Max Planck Institute for Software Systems
24 Feb 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
As learning algorithms become more capable, they are used to tackle an increasingly large spectrum of tasks. Their applications range from understanding images, speech and natural language to making socially impactful decisions, such as determining people's eligibility for loans and jobs. It is therefore important to better understand both the consequences of algorithmic decisions and the mechanisms by which algorithms arrive at their outputs. Of particular interest in this regard are fairness, when algorithmic decisions impact people's lives, and the behavior of deep learning algorithms, the most powerful but also most opaque class of learning algorithms. To this end, this thesis makes two contributions: First, we study fairness in algorithmic decision-making. At a conceptual level, we introduce a metric for measuring unfairness in algorithmic decisions based on inequality indices from the economics literature. We show that this metric can be used to decompose the overall unfairness for a given set of users into between- and within-subgroup components, and we highlight potential tradeoffs between them, as well as between fairness and accuracy. At an empirical level, we demonstrate the necessity of studying fairness in algorithmically controlled systems by exposing the potential for discrimination enabled by Facebook's advertising platform. In this context, we show how advertisers can target ads so as to exclude users belonging to protected sensitive groups, a practice that is illegal in domains such as housing, employment and finance, and we highlight the need for better mitigation methods.
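
For readers unfamiliar with decomposable inequality indices, here is a sketch of the idea (assuming a nonnegative per-user "benefit" vector b and the generalized entropy index; the thesis's exact metric may differ in details):

```python
import numpy as np

def generalized_entropy(b, alpha=2):
    """Generalized entropy index of a benefit vector b (alpha not in {0, 1})."""
    b = np.asarray(b, dtype=float)
    return np.mean((b / b.mean()) ** alpha - 1) / (alpha * (alpha - 1))

def decompose(b, groups, alpha=2):
    """Additive decomposition: total = between + weighted within-group terms."""
    b, groups = np.asarray(b, dtype=float), np.asarray(groups)
    mu, n = b.mean(), len(b)
    within = 0.0
    for g in np.unique(groups):
        bg = b[groups == g]
        weight = (len(bg) / n) * (bg.mean() / mu) ** alpha
        within += weight * generalized_entropy(bg, alpha)
    # between-group term: replace each benefit by its subgroup mean
    b_means = np.array([b[groups == g].mean() for g in groups])
    return generalized_entropy(b_means, alpha), within

# toy check that the decomposition is exact: total = between + within
b = np.array([1.0, 1.5, 2.0, 0.5, 3.0, 2.5])
g = np.array(["a", "a", "a", "b", "b", "b"])
between, within = decompose(b, g)
assert np.isclose(generalized_entropy(b), between + within)
```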

The second contribution of this thesis is aimed at better understanding the mechanisms governing the behavior of deep learning algorithms. First, we study the role that invariance plays in learning useful representations. We show that the set of invariances possessed by representations is of critical importance in determining whether they are useful for downstream tasks, more important than many other factors commonly considered to determine transfer performance. Second, we investigate memorization in large language models, which have recently become very popular. By training models to memorize random strings, we uncover a rich and surprising set of dynamics during the memorization process. We find that models undergo two phases during memorization, that strings with lower entropy are harder to memorize, that the memorization dynamics evolve during repeated memorization and that models can recall tokens in random strings with only a very restricted amount of information.
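
A toy version of the random-string memorization measurement might look as follows (my sketch, not the thesis's setup): sample strings whose per-token entropy is controlled by a skew parameter, train a small language model to reproduce each string from a unique prompt token, and report token-level recall.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, LEN, N = 8, 32, 16  # alphabet size, string length, number of strings

def sample_strings(skew):
    """Random strings; larger skew -> more peaked token distribution -> lower entropy."""
    probs = torch.softmax(-skew * torch.arange(VOCAB, dtype=torch.float), dim=0)
    data = torch.distributions.Categorical(probs).sample((N, LEN))
    bits = -(probs * probs.log2()).sum().item()
    return data, bits

class TinyLM(nn.Module):
    """Small GRU language model; each string gets its own prompt token."""
    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(VOCAB + N, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, VOCAB)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

for skew in (0.0, 1.0):  # uniform vs. lower-entropy strings
    data, bits = sample_strings(skew)
    prompts = (VOCAB + torch.arange(N)).unsqueeze(1)     # unique id per string
    inputs = torch.cat([prompts, data[:, :-1]], dim=1)   # teacher forcing
    model = TinyLM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        loss = nn.functional.cross_entropy(
            model(inputs).reshape(-1, VOCAB), data.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    recall = (model(inputs).argmax(-1) == data).float().mean().item()
    print(f"{bits:.2f} bits/token -> token recall {recall:.2f}")
```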

Recent events

Foundation models of human cognition

Marcel Binz, University of Marburg (hosted by Yixin Zou)
13 Feb 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this: it is a unified system whose processes are deeply intertwined. In this talk, I will present my ongoing work on foundation models of human cognition: models that can not only predict behavior in a single domain but instead offer a truly universal account of our mind. Furthermore, I will outline my vision for how such behaviorally predictive models can be used to advance our understanding of human cognition, as well as how they can be scaled to naturalistic environments.

Measuring and Improving Fairness and Resilience Across the Blockchain Ecosystem

Lioba Heimbach, ETH Zurich (hosted by Krishna Gummadi)
12 Feb 2025, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
CIS@MPG Colloquium
The multi-layer blockchain architecture presents unique challenges, as issues in one layer can amplify or cause problems in another. These layers include the network layer, a peer-to-peer (P2P) network responsible for information dissemination; the consensus layer, where nodes reach agreement on the blockchain’s state; and the application layer, which hosts decentralized finance (DeFi) applications. My research investigates the interactions between these layers and the resulting challenges, with the goal of enhancing fairness and resilience in the blockchain ecosystem. In the first part of this talk, I will explore how financial value originating at the application layer can threaten the consensus layer, focusing on non-atomic arbitrage — arbitrage between on-chain and off-chain exchanges. I will demonstrate how this value, despite originating at the application layer, introduces centralizing forces and security vulnerabilities in the consensus layer. In the second part, I will show how these dynamics operate in the opposite direction. Specifically, I will highlight privacy flaws in Ethereum’s P2P network that threaten the consensus layer by enabling attacks targeting application layer value. As I will demonstrate, the P2P network leaks validator locations. This vulnerability allows malicious validators (i.e., consensus layer participants) to launch targeted attacks on validators handling blocks with significant application layer value and scoop the value from those blocks.
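
To make non-atomic arbitrage concrete, here is a stylized sketch (my example; fees, gas and inventory risk ignored) of the trade that aligns a constant-product pool with an off-chain price:

```python
import math

def arb_against_cpamm(x, y, p_cex):
    """
    Stylized non-atomic arbitrage against a constant-product pool
    (reserves: x of asset A, y of asset B; pool price of A is y/x).
    Returns the amount of A bought from the pool and the profit, in B,
    from selling that A off-chain at p_cex.
    """
    if p_cex <= y / x:
        return 0.0, 0.0  # only the "pool underprices A" direction shown here
    x_new = math.sqrt(x * y / p_cex)   # reserves at which pool price = p_cex
    a_bought = x - x_new               # A taken out of the pool
    b_paid = x * y / x_new - y         # B put in (x * y stays invariant)
    return a_bought, a_bought * p_cex - b_paid

# e.g., pool priced at 2000 B per A while the CEX trades at 2100:
print(arb_against_cpamm(x=100.0, y=200_000.0, p_cex=2100.0))
```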

How Do We Evaluate and Mitigate AI Risks?

Maksym Andriushchenko, EPFL (hosted by Christof Paar)
11 Feb 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
AI has made remarkable progress in recent years, enabling groundbreaking applications but also raising serious safety concerns. This talk will explore the robustness challenges in deep learning and large language models (LLMs), demonstrating how seemingly minor perturbations can lead to critical failures. I will present my research on evaluating and mitigating AI risks, including adversarial robustness, LLM jailbreak vulnerabilities, and the broader implications of AI safety. By developing rigorous benchmarks, novel evaluation methods, and foundational theoretical insights, my work aims to provide effective safeguards for AI deployment. Ultimately, I advocate for a systematic approach to AI risk mitigation that integrates technical solutions with real-world considerations to ensure the safe and responsible use of AI systems.
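
As one concrete example of how seemingly minor perturbations cause failures, the classic one-step fast gradient sign method (FGSM; shown here as a generic illustration, not the speaker's method) perturbs an input in the direction that increases a model's loss:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: nudge x by eps along the loss gradient's sign,
    then clip back to the valid input range [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# usage with a dummy classifier on random "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```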
