Upcoming events

Cracking System Challenges in Optical Data Center Networks

Yiting Xia MPI-INF - RG 2
05 Mar 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Optical data center networks (DCNs) are transforming cloud infrastructure, yet current architectures remain closed ecosystems tightly bound to specific optical hardware. In this talk, we unveil an innovative open framework that decouples software from hardware, empowering researchers and practitioners to freely explore and deploy diverse software solutions across multiple optical platforms. Building on this flexible foundation, we tackle three critical system challenges—time synchronization, routing, and transport protocols—to enable optical DCNs to achieve nanosecond precision, high throughput, and ultra-low latency. This presentation highlights the fundamental design shifts brought by optical DCNs and demonstrates how our breakthrough solutions surpass traditional DCN performance, setting new standards for future cloud networks.

Efficient and Responsible Data Privacy

Tamalika Mukherjee Purdue University (hosted by Yixin Zou)
10 Mar 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
Collecting user data is crucial for advancing machine learning, social science, and government policies, but the privacy of the users whose data is being collected is a growing concern. Organizations often deal with a massive volume of user data on a regular basis — the storage and analysis of such data is computationally expensive. Developing algorithms that not only preserve formal privacy but also perform efficiently is therefore a challenging and important task. Since preserving privacy inherently involves some data distortion, which potentially sacrifices accuracy for smaller populations, a complementary challenge is to develop responsible privacy practices that ensure that the resulting privacy implementations are equitable. My talk will focus on Differential Privacy (DP), a rigorous mathematical framework that preserves the privacy of individuals in the input dataset, and explore the nuanced landscape of privacy-preserving algorithms through three interconnected perspectives: the systematic design of time-efficient private algorithms, the design of space-efficient private algorithms, and strategic approaches to creating equitable privacy practices.
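As background, the standard (ε, δ) formulation of differential privacy (a textbook definition, not a result specific to this talk) reads: a randomized mechanism \mathcal{M} is (\varepsilon, \delta)-differentially private if, for every pair of datasets D and D' differing in a single individual's record and every set S of outputs,

    \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S] + \delta.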

Improving Trustworthiness in Foundation Models: Assessing, Mitigating, and Analyzing ML Risks

Chulin Xie University of Illinois Urbana-Champaign (hosted by Jana Hofmann)
12 Mar 2025, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
As machine learning (ML) models continue to scale in size and capability, they expand the surface area for safety and privacy risks, raising concerns about model trustworthiness and responsible data use. My research uncovers and mitigates these risks. In this presentation, I will focus on the three cornerstones of trustworthy foundation models and agents: safety, privacy, and generalization. For safety, I will introduce our comprehensive benchmarks designed to evaluate trustworthiness risks in Large Language Models (LLMs) and LLM-based code agents. For privacy, I will present a solution for protecting data privacy with a synthetic text generation algorithm under differential privacy guarantees. The algorithm requires only LLM inference API access, without model training, enabling efficient and safe text sharing. For generalization, I will introduce our study on the interplay between the memorization and generalization of LLMs in logical reasoning during the supervised fine-tuning (SFT) stage. Finally, I will conclude with my future research plans for assessing and improving trustworthiness in foundation model-powered ML systems.

Reward Design for Reinforcement Learning Agents

Rati Devidze Max Planck Institute for Software Systems
20 Mar 2025, 11:30 pm - 21 Mar 2025, 12:30 am
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
Reward functions are central to reinforcement learning (RL), guiding agents towards optimal decision-making. The complexity of RL tasks requires meticulously designed reward functions that effectively drive learning while avoiding unintended consequences. Effective reward design aims to provide signals that accelerate the agent’s convergence to optimal behavior. Crafting rewards that align with task objectives, foster desired behaviors, and prevent undesirable actions is inherently challenging. This thesis delves into the critical role of reward signals in RL, highlighting their impact on the agent’s behavior and learning dynamics and addressing challenges such as delayed, ambiguous, or intricate rewards. In this thesis, we tackle different aspects of reward shaping. First, we address the problem of designing informative and interpretable reward signals from a teacher’s/expert’s perspective (teacher-driven). Here, the expert, equipped with the optimal policy and the corresponding value function, designs reward signals that expedite the agent’s convergence to optimal behavior. Second, we build on this teacher-driven approach by introducing a novel method for adaptive interpretable reward design. In this scenario, the expert tailors the rewards based on the learner’s current policy, ensuring alignment and optimal progression. Third, we propose a meta-learning approach, enabling the agent to self-design its reward signals online without expert input (agent-driven). This self-driven method considers the agent’s learning and exploration to establish a self-improving feedback loop.
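As a generic illustration of reward shaping (the standard potential-based formulation, not necessarily the method developed in this thesis), the sketch below adds a potential-difference term to the environment reward; the grid-world potential function is a hypothetical example:

    # Minimal sketch of potential-based reward shaping (Ng et al., 1999).
    # The potential function `phi` and the grid world are illustrative placeholders.

    def shaped_reward(reward, state, next_state, phi, gamma=0.99):
        """Return the environment reward plus the shaping term.

        Potential-based shaping r' = r + gamma * phi(s') - phi(s) is known
        to leave the set of optimal policies unchanged.
        """
        return reward + gamma * phi(next_state) - phi(state)

    # Example: shaping a sparse grid-world reward with a distance-based potential.
    goal = (4, 4)
    phi = lambda s: -(abs(s[0] - goal[0]) + abs(s[1] - goal[1]))  # negative Manhattan distance

    r_env = 0.0  # sparse environment reward away from the goal
    r_shaped = shaped_reward(r_env, state=(0, 0), next_state=(0, 1), phi=phi)
    print(r_shaped)  # positive shaping signal for moving toward the goal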

QUIC: A New Fundamental Network Protocol

Johannes Zirngibl MPI-INF - INET
02 Apr 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
QUIC is a UDP-based, multiplexed, and secure transport protocol that was standardized by the IETF in 2021. It seeks to replace the traditional TCP/TLS stack by combining functionality from different layers of the ISO/OSI model. In doing so, it reduces overhead and introduces new functionality, such as streams and datagrams, to better support application protocols. QUIC is the foundation for HTTP/3 and new proxy technologies (MASQUE). It is used for video streaming and is being considered for other media services.

This talk will introduce the protocol and motivate its relevance. In the second part, I will provide insights into existing implementations and their performance. Our research shows that QUIC performance varies widely between client and server implementations, ranging from 90 Mbit/s to over 6000 Mbit/s. In the third part, I will give an overview of QUIC deployments on the Internet, where at least one deployment can be found for each of 18 different libraries.

The complexity of the protocol, the diversity of libraries, and their usage on the Internet make QUIC an important research subject.
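For readers unfamiliar with the protocol, the sketch below shows roughly what opening a QUIC connection looks like from application code. It assumes the open-source aioquic Python library, which is not necessarily one of the implementations examined in the talk, and the host name is a placeholder:

    # Minimal QUIC client sketch using the aioquic library (pip install aioquic).
    # The server address and ALPN value below are illustrative only.
    import asyncio
    import ssl

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main() -> None:
        configuration = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
        configuration.verify_mode = ssl.CERT_NONE  # skip certificate checks in this sketch only

        # connect() performs the QUIC handshake (TLS 1.3 is built into the protocol)
        # and yields a connection object over which streams can be opened.
        async with connect("quic.example.net", 443, configuration=configuration) as client:
            await client.ping()  # simple round trip to confirm the connection is alive

    if __name__ == "__main__":
        asyncio.run(main())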