News: Research

Francis’ group releases SynDiffix, the world’s most accurate synthetic data generator

December 1, 2023

Paul Francis's Open Diffix project has released SynDiffix, an open-source Python package that generates statistically accurate, privacy-preserving synthetic data from structured data. Using SynDiffix, a data owner can safely share data while retaining most of the statistical properties of the original data. Analysts can work with the synthetic data as though it were the original.

Using the novel techniques of sticky noise and range snapping, SynDiffix breaks new ground in data accuracy. It is 10 to 100 times more accurate than the open-source tool CTGAN, and 5 to 10 times more accurate than the best commercial synthetic data generators. This makes SynDiffix particularly well suited to descriptive analytics like histograms, heatmaps, averages and standard deviations, column correlations, and so on. Like other tools, however, it can also be used for ML modeling. Francis is hopeful that SynDiffix will find wide practical use as well as motivate more research on synthetic data.
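As a rough illustration of the kind of descriptive-analytics check an analyst might run on a synthetic table, the sketch below compares summary statistics of an original column against a synthetic stand-in. The synthetic values here are simulated with plain Python; the column, noise level, and tolerance are illustrative assumptions, not output of SynDiffix.

```python
# Sketch: checking that a synthetic column preserves descriptive statistics.
# The synthetic data is stubbed with toy values standing in for generator output.
import math
import random

random.seed(0)
original = [random.gauss(50, 10) for _ in range(1000)]
synthetic = [x + random.gauss(0, 0.5) for x in original]  # stand-in, not SynDiffix

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Relative error of the synthetic column's mean and standard deviation.
err_mean = abs(mean(synthetic) - mean(original)) / abs(mean(original))
err_std = abs(stdev(synthetic) - stdev(original)) / stdev(original)
print(f"relative error: mean={err_mean:.4f}, stdev={err_std:.4f}")
```

The same pattern extends to histograms and pairwise correlations, which are exactly the descriptive statistics the announcement says SynDiffix targets.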


3+3 papers at LICS and ICALP

April 27, 2023

At ICALP 2023 and LICS 2023, two of the top conferences in logic and automata, there will be 6 papers by SWS researchers.

LICS 2023 (accepted papers: https://lics.siglog.org/lics23/accepted.php )

  • Pascal Bergsträßer and Moses Ganardi. Revisiting Membership Problems in Subclasses of Rational Relations
  • Faraz Ghahremani, Edon Kelmendi and Joël Ouaknine. Reachability in Injective Piecewise Affine Maps
  • Toghrul Karimov, Edon Kelmendi, Joris Nieuwveld, Joël Ouaknine and James Worrell. The Power of Positivity

ICALP 2023 (accepted papers: https://icalp2023.cs.upb.de/accepted-papers/ )

  • Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks and Karol Węgrzycki. Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality
  • Pascal Baumann, Moses Ganardi, Rupak Majumdar, Ramanathan Thinniyam Srinivasan and Georg Zetzsche. Checking Refinement of Asynchronous Programs against Context-Free Specifications
  • George Kenison, Joris Nieuwveld, Joël Ouaknine and James Worrell. Positivity Problems for Reversible Linear Recurrence Sequences


Max Planck researchers publish 6 papers at POPL 2023!

Researchers from the Max Planck Institute for Software Systems (MPI-SWS) and the Max Planck Institute for Security and Privacy (MPI-SP) have authored a total of 6 papers accepted to POPL 2023. This is the sixth year in a row that MPI-SWS researchers have published 5+ papers in POPL. Furthermore, one Max Planck paper was awarded a 2023 POPL Distinguished Paper Award. Congratulations to all our POPL authors!

  • Conditional Contextual Refinement by Youngju Song, Minki Cho, Dongjae Lee, Chung-Kil Hur, Michael Sammler, Derek Dreyer
  • Context-Bounded Verification of Context-Free Specifications by Pascal Baumann, Moses Ganardi, Rupak Majumdar, Ramanathan S. Thinniyam, Georg Zetzsche
  • DimSum: A Decentralized Approach to Multi-language Semantics and Verification by Michael Sammler, Simon Spies, Youngju Song, Emanuele D’Osualdo, Robbert Krebbers, Deepak Garg, Derek Dreyer.  DISTINGUISHED PAPER
  • Kater: Automating Weak Memory Model Metatheory and Consistency Checking by Michalis Kokologiannakis, Ori Lahav, Viktor Vafeiadis
  • The Path to Durable Linearizability by Emanuele D’Osualdo, Azalea Raad, Viktor Vafeiadis
  • CoqQ: Foundational Verification of Quantum Programs by Li Zhou, Gilles Barthe, Pierre-Yves Strub, Junyi Liu, Mingsheng Ying

Max Planck researchers publish 11 papers at POPL 2022!

December 14, 2021

Researchers from MPI-SWS have authored a total of 11 papers accepted to POPL 2022 (just under 17% of all accepted papers).  This is the fifth year in a row that MPI-SWS researchers have published 5+ papers in POPL.  Furthermore, two Max Planck papers were awarded a 2022 POPL Distinguished Paper Award. Congratulations to all our POPL authors!



MPI-SWS research on COVID-19 apps covered by the Linux Public Health Foundation

September 10, 2021

MPI-SWS faculty member Elissa Redmiles, along with collaborators Samuel Dooley and Professor John Dickerson from the University of Maryland as well as Professor Dana Turjeman from Reichman University, helped the Louisiana Department of Health advertise its COVID-19 contact tracing app. As part of this work, the researchers conducted a randomized, controlled field experiment to provide guidance to other jurisdictions on how to most effectively and ethically advertise these public health tools. The Linux Public Health Foundation has featured their findings as guidance for other jurisdictions looking to advertise their contact tracing apps. This work is part of a larger project Redmiles leads on ethical adoption of COVID-19 apps: https://covidadoptionproject.mpi-sws.org/.


MPI-SWS research featured in Rolling Stone, El Pais, and NetzPolitik

August 26, 2021

MPI-SWS faculty member Elissa Redmiles was quoted in Rolling Stone, in El Pais -- Spain's largest newspaper -- and in NetzPolitik regarding ongoing research with MPI-SWS group member Vaughn Hamilton and MPI-SWS intern Hanna Barakat (also at Brown University) on the shift toward digital sex work as a result of COVID-19. She was also quoted regarding her work with collaborators Catherine Barwulor (Clemson University), Allison McDonald (University of Michigan), and Eszter Hargittai (University of Zurich) on digital discrimination against European sex workers, originally published at ACM CHI.


ETAPS dissertation award and CACM article for Ralf Jung and his work on Rust

Ralf Jung's doctoral dissertation on "Understanding and Evolving the Rust Programming Language" has received the ETAPS Doctoral Dissertation Award for 2021. The award is given to the PhD student who has made the most original and influential contribution to the research areas in the scope of the ETAPS conferences, and has graduated in 2021 at a European academic institution. Ralf was advised by MPI-SWS faculty member Derek Dreyer.

A committee of international experts evaluated candidate dissertations with respect to originality, relevance, and impact on the field, as well as the quality of writing. The committee found that Dr. Ralf Jung's dissertation is very well-written and makes several highly original contributions in the area of programming language semantics and verification. The committee was also particularly impressed by the dissertation's technical depth, the quality and quantity of the associated published work, as well as its relevance and impact both in academia and industry.

Ralf's work on Rust was also featured in a recent Communications of the ACM article: Safe Systems Programming in Rust by Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, and Derek Dreyer. The article appeared in the April 2021 issue of CACM, together with a short video about this work produced by ACM.


Max Planck researchers publish 8 papers at POPL 2021!

January 17, 2021

Researchers from the Max Planck Institute for Software Systems (MPI-SWS) and the new Max Planck Institute for Security and Privacy (MPI-SP) have authored a total of 8 papers accepted to POPL 2021 (over 10% of all accepted papers).  This is the fourth year in a row that MPI-SWS researchers have published 5+ papers in POPL.  Furthermore, two Max Planck papers were awarded a Distinguished Paper Award. Congratulations to all our POPL authors!

MPI-SWS papers:

MPI-SP papers:


Research Spotlight: Steering Policies in Multi-Agent Collaboration

Ever since the birth of Artificial Intelligence (AI) at the Dartmouth workshop in 1956, researchers have debated about the exact role that AI will play, and should play, in society. While some have envisioned a romanticized version of AI, incorporated into the narratives of 20th century movies, successful AI developments are often closer to J. C. R. Licklider’s vision of AI, which puts an emphasis on a collaborative relationship between humans and AI, and focuses on hybrid human-AI decision making.

In the Multi-Agent Systems group at MPI-SWS, we study multi-agent sequential decision making using formal frameworks that can capture nuances often present in human-AI collaborative settings. Specifically, we study different aspects of agent-to-agent interaction in settings where agents share a common goal, but can have different perceptions of reality. The overall goal is to design a more effective AI decision maker that accounts for the behavior of its collaborators, and compensates for their imperfections. To achieve this goal, the AI decision maker can use steering policies to nudge its collaborators to adopt better policies, i.e., policies that lead to an improved joint outcome. In what follows, we summarize some of our recent results related to this agenda.

Accounting for misaligned world-views. An effective way to model behavioral differences between humans and modern AI tools (based on machine learning) is through a model that captures the misalignment in how the agents perceive their environment. Using this approach, we have proposed a new computational model, called Multi-View Decision Process, suitable for modeling two-agent cooperative scenarios in which agents agree on their goals, but disagree on how their actions affect the state of the world [1]. This framework enables us to formally analyze the utility of accounting for the misalignment in agents’ world-views when only one of the agents has a correct model of the world. Our results show that modeling such a misalignment is not only beneficial, but critical. The main takeaway is that to facilitate a more successful collaboration among agents, it is not sufficient to make one agent (more) accurate in its world-view: naively improving the accuracy of one agent can degrade the joint performance unless one explicitly accounts for the imperfections of the other agent. To this end, we have developed an algorithm for finding an approximately optimal steering policy for the agent with the correct world-view.
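The cost of a misaligned world-view can be illustrated with a toy experiment (our own simplified sketch, not the Multi-View Decision Process algorithm from [1]): plan in a small random MDP under a correct versus a perturbed transition model, then evaluate both resulting policies in the true world.

```python
# Illustrative sketch: a policy planned under a misaligned transition model
# can perform worse when executed in the true world.
import random

random.seed(1)
S, A, GAMMA, H = 4, 2, 0.9, 60  # states, actions, discount, planning horizon

def random_dist(n):
    w = [random.random() for _ in range(n)]
    total = sum(w)
    return [x / total for x in w]

# True world dynamics and rewards, plus a misaligned (perturbed) model.
true_T = {(s, a): random_dist(S) for s in range(S) for a in range(A)}
wrong_T = {k: random_dist(S) for k in true_T}
R = {(s, a): random.random() for s in range(S) for a in range(A)}

def q(T, V, s, a):
    return R[s, a] + GAMMA * sum(p * V[t] for t, p in enumerate(T[s, a]))

def plan(T):
    """Finite-horizon value iteration under model T; returns a greedy policy."""
    V = [0.0] * S
    for _ in range(H):
        V = [max(q(T, V, s, a) for a in range(A)) for s in range(S)]
    return [max(range(A), key=lambda a: q(T, V, s, a)) for s in range(S)]

def evaluate(policy, iters=300):
    """Average state value of a fixed policy when executed in the TRUE world."""
    V = [0.0] * S
    for _ in range(iters):
        V = [q(true_T, V, s, policy[s]) for s in range(S)]
    return sum(V) / S

informed = evaluate(plan(true_T))
misaligned = evaluate(plan(wrong_T))
print(f"value with correct model: {informed:.3f}, with misaligned model: {misaligned:.3f}")
```

By construction the informed policy can never do worse than the misaligned one in the true world, which is the gap a steering policy tries to close.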

Adapting to a non-stationary collaborator. In addition to accounting for a misalignment in world-views, decision makers must also account for the effects of their behavior on other agents. Namely, decision makers respond to each other's behavior, leading to behavior which is non-stationary and changes over time. In the context of human-AI collaboration, this might happen if the human agent changes their behavior over time, for example, as it learns to interact with the AI agent. Such non-stationary behavior of the human agent could have a negative impact on the collaboration, and can lead to a substantially worse performance unless the AI agent adapts to the changing behavior of the human agent. We can model this situation with a two-agent setting similar to the one presented above, but which allows agents to change their behavior as they interact over time [2]. The agent with the correct world-view now has to adapt to the non-stationary behavior of its collaborator. We have proposed a learning procedure that has provable guarantees on the joint performance under the assumption that the behavior of the other agent is not abruptly changing over time. We have shown that this assumption is not trivial to relax in that obtaining the same guarantees without this assumption would require solving a computationally intractable problem.

Steering via environment design. The previous two cases consider indirect steering policies for which the agent with the correct model implicitly influences the behavior of its collaborator by acting in the world. A more explicit influence would be obtained if the actions of this agent directly change the world-view of its collaborator. In the context of human-AI collaboration, the AI agent could shape the environment to nudge the human agent to adopt a more efficient decision policy. This can be done through reward shaping, i.e., by making some actions more costly for humans in terms of effort, or through dynamics shaping, i.e., by changing the perceived influence that the human's actions have on the world. In machine learning terminology, such a steering strategy is a form of adversarial attack by the AI agent (the attacker) on the human agent. In our recent work [3], we have characterized how to perform these types of attacks optimally and how costly they are from the attacker's point of view.
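A deliberately tiny, hypothetical illustration of steering via reward shaping (not the environment-poisoning attack of [3]; the action names and numbers are invented): lower the rewards of competing actions just enough that the collaborator's greedy choice becomes the target action, and measure the total modification as the attack cost.

```python
# Toy reward shaping: make `target` the strictly greedy action at minimal cost.
EPS = 0.01  # required margin by which the target must win
rewards = {"shortcut": 1.0, "safe_route": 0.6, "wait": 0.2}  # collaborator's view
target = "safe_route"  # the action the steering agent wants adopted

def shape(rewards, target, eps=EPS):
    """Return minimally decreased rewards under which `target` is greedy."""
    shaped = dict(rewards)
    for action, r in rewards.items():
        if action != target and r >= rewards[target] - eps:
            shaped[action] = rewards[target] - eps  # smallest sufficient decrease
    return shaped

shaped = shape(rewards, target)
cost = sum(rewards[a] - shaped[a] for a in rewards)  # total effort added
greedy = max(shaped, key=shaped.get)
print(f"shaped={shaped}  attack_cost={cost:.2f}  greedy={greedy}")
```

Only "shortcut" needs to be made costlier; "wait" is already worse than the target, so the minimal attack leaves it untouched.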


References: 

[1] Dimitrakakis, C., Parkes, D.C., Radanovic, G. and Tylkin, P., 2017. Multi-view Decision Processes: The Helper-AI Problem. In Advances in Neural Information Processing Systems.

[2] Radanovic, G., Devidze, R., Parkes, D. and Singla, A., 2019. Learning to Collaborate in Markov Decision Processes. In International Conference on Machine Learning.

[3] Rakhsha, A., Radanovic, G., Devidze, R., Zhu, X. and Singla, A., 2020. Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning. In International Conference on Machine Learning.


Max Planck researchers publish 17 papers at LICS/ICALP 2020

Researchers from the Max Planck Institute for Software Systems (MPI-SWS), the Max Planck Institute for Informatics (MPI-INF), and the Max Planck Institute for Security and Privacy (MPI-SP) have coauthored 17 papers at the colocated LICS 2020 and ICALP 2020, two of the top conferences in theoretical computer science. LICS is the premier conference on logic in computer science and ICALP is the flagship conference of the European Association for Theoretical Computer Science.

MPI-SWS papers:

  1. Invariants for Continuous Linear Dynamical Systems. Shaull Almagor, Edon Kelmendi, Joël Ouaknine and James Worrell. ICALP 2020, Track B.
  2. The complexity of bounded context switching with dynamic thread creation. Pascal Baumann, Rupak Majumdar, Ramanathan Thinniyam Srinivasan and Georg Zetzsche. ICALP 2020, Track B.
  3. Extensions of ω-Regular Languages. Mikołaj Bojańczyk, Edon Kelmendi, Rafał Stefański and Georg Zetzsche. LICS 2020.
  4. Rational subsets of Baumslag-Solitar groups. Michaël Cadilhac, Dmitry Chistikov and Georg Zetzsche. ICALP 2020, Track B.
  5. On polynomial recursive sequences. Michaël Cadilhac, Filip Mazowiecki, Charles Paperman, Michał Pilipczuk and Géraud Sénizergues. ICALP 2020, Track B.
  6. An Approach to Regular Separability in Vector Addition Systems. Wojciech Czerwiński and Georg Zetzsche. LICS 2020.
  7. The complexity of knapsack problems in wreath products. Michael Figelius, Moses Ganardi, Markus Lohrey and Georg Zetzsche. ICALP 2020, Track B.
  8. The Complexity of Verifying Loop-free Programs as Differentially Private. Marco Gaboardi, Kobbi Nissim and David Purser. ICALP 2020, Track B.
  9. On Decidability of Time-bounded Reachability in CTMDPs. Rupak Majumdar, Mahmoud Salamati and Sadegh Soudjani. ICALP 2020, Track B.

MPI-INF papers:

  1. Scheduling Lower Bounds via AND Subset Sum. Amir Abboud, Karl Bringmann, Danny Hermelin and Dvir Shabtay. ICALP 2020, Track A.
  2. Faster Minimization of Tardy Processing Time on a Single Machine. Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay and Philip Wellnitz. ICALP 2020, Track A.
  3. Hitting Long Directed Cycles is Fixed-Parameter Tractable. Alexander Göke, Dániel Marx and Matthias Mnich. ICALP 2020, Track A.
  4. A (2 + ε)-Factor Approximation Algorithm for Split Vertex Deletion. Daniel Lokshtanov, Pranabendu Misra, Fahad Panolan, Geevarghese Philip and Saket Saurabh. ICALP 2020, Track A.
  5. Hypergraph Isomorphism for Groups with Restricted Composition Factors. Daniel Neuen. ICALP 2020, Track A.
  6. Deterministic Sparse Fourier Transform with an ℓ∞ Guarantee. Yi Li and Vasileios Nakos. ICALP 2020, Track A.

MPI-SP papers:

  1. Deciding Differential Privacy for Programs with Finite Inputs and Outputs. Gilles Barthe, Rohit Chadha, Vishal Jagannath, A. Prasad Sistla and Mahesh Viswanathan. LICS 2020.
  2. Universal equivalence and majority on probabilistic programs over finite fields. Charlie Jacomme, Steve Kremer and Gilles Barthe. LICS 2020.

Redmiles’ research on ethical adoption of COVID-19 apps gains international media attention

July 26, 2020
Research by MPI-SWS faculty member Elissa Redmiles and collaborators at Microsoft Research, the University of Zurich, the University of Maryland and Johns Hopkins University was featured in the New York Times, Scientific American (articles 1 and 2), Wired (articles 1 and 2), STAT News, and other venues.
The articles cover two papers: (1) Redmiles' paper in ACM Digital Government: Research and Practice proposing a framework and empirical validation through a large-scale survey of the attributes of COVID-19 apps that may compel users to adopt them, such as the benefits of the apps both to individual users and to their community, the accuracy with which they detect exposures, potential privacy leaks, and the costs of using the apps; and (2) a preprint paper by Redmiles and her collaborators that develops predictive models of COVID-19 app adoption based on an app's level of accuracy and privacy protection.
These works are part of a larger project Redmiles leads on ethical adoption of COVID-19 apps: https://covidadoptionproject.mpi-sws.org/.

Research Spotlight: Software Engineering for Machine Learning

Due to the impressive advances in Machine Learning and the unlimited availability of data, neural networks are rapidly becoming prevalent in our everyday lives, for instance by assisting in image-classification or decision-making tasks. As a result, there is growing concern regarding the reliability of neural networks in performing these tasks. In particular, it could be disastrous if an autonomous vehicle misclassifies a street sign, or if a recidivism-risk algorithm, which predicts whether a criminal is likely to re-offend, is unfair with respect to race.

In the Practical Formal Methods group at MPI-SWS, we have recently focused on applying techniques from Software Engineering, including static analysis and test generation, to validate and verify properties of neural networks, such as robustness and fairness. In the following, we give a brief overview of three research directions we have been pursuing in this setting.

Blackbox Fuzzing of Neural Networks

By now, it is well known that even very subtle perturbations of a correctly classified image, such as a street sign, could cause a neural network to classify the new image differently. Such perturbed images are referred to as adversarial inputs and pose a critical threat to important applications of Machine Learning, like autonomous driving.

In our group, we recently developed DeepSearch [1], a blackbox-fuzzing technique that generates adversarial inputs for image-classification neural networks. Starting from a correctly classified image, DeepSearch strategically mutates its pixels such that the resulting image is more likely to be adversarial. By exploiting spatial regularities of images, DeepSearch generates adversarial inputs while issuing only very few queries to the neural network, which makes the technique efficient. Moreover, through a refinement step, DeepSearch further reduces the already subtle pixel perturbations of an adversarial input.
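The core loop of such a blackbox fuzzer can be sketched as follows. This is a simplified stand-in, not DeepSearch itself: the "network" is a toy linear scorer, and the mutation strategy is plain greedy pixel search rather than DeepSearch's strategy.

```python
# Sketch of blackbox adversarial-input fuzzing on a toy classifier.
import random

random.seed(7)
N = 16  # a toy 4x4 "image", pixel values in [0, 1]

# Stand-in blackbox model: the fuzzer may only call `score`, never read WEIGHTS.
WEIGHTS = [random.uniform(-1.0, 1.0) for _ in range(N)]

def score(img):
    """score > 0 means class A, score <= 0 means class B."""
    return sum(w * p for w, p in zip(WEIGHTS, img))

def fuzz(img, step=0.05, budget=5000):
    """Greedy pixel mutations that monotonically lower the class-A score."""
    x, best = list(img), score(img)
    for _ in range(budget):
        if best <= 0:            # decision flipped: adversarial input found
            break
        i = random.randrange(N)  # mutate one randomly chosen pixel
        for delta in (-step, step):
            cand = list(x)
            cand[i] = min(1.0, max(0.0, cand[i] + delta))
            s = score(cand)      # a single blackbox query
            if s < best:         # keep mutations that move toward class B
                x, best = cand, s
                break
    return x

original = [1.0 if w > 0 else 0.0 for w in WEIGHTS]  # confidently class A
adversarial = fuzz(original)
print(f"score: original={score(original):.3f}  fuzzed={score(adversarial):.3f}")
```

The fuzzer only ever observes scores, mirroring the blackbox setting; everything beyond that, including the query budget and step size, is an illustrative choice.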

Adversarial-Input Detection for Neural Networks

To protect neural networks against adversarial inputs, we have developed RAID [2], a runtime-monitoring technique for detecting whether an input to a neural network is adversarial. Our technique consists of training a secondary classifier to identify differences in neuron activation values between correctly classified and adversarial inputs. RAID is effective in detecting adversarial inputs across a wide range of adversaries even when it is completely unaware of the type of adversary. In addition, we show that there is a simple extension to RAID that allows it to detect adversarial inputs even when these are generated by an adversary that has access to our detection mechanism.
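The following sketch conveys the detection idea on synthetic data. All names and the activation-shift premise are illustrative assumptions; RAID itself trains a proper secondary classifier on real neuron activations rather than the nearest-centroid rule used here.

```python
# Sketch of runtime adversarial-input detection from neuron activations.
import math
import random

random.seed(3)
D = 8  # number of monitored neurons

def activations(adversarial):
    # Hypothetical premise: adversarial inputs shift some activation statistics.
    shift = 1.5 if adversarial else 0.0
    return [random.gauss(shift, 1.0) for _ in range(D)]

# "Training" data: labeled activation vectors for clean and adversarial inputs.
train = [(activations(lbl), lbl) for lbl in (False, True) for _ in range(200)]

def centroid(label):
    vecs = [v for v, l in train if l == label]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

c_clean, c_adv = centroid(False), centroid(True)

def detect(v):
    """Flag an input as adversarial if its activations sit nearer c_adv."""
    return math.dist(v, c_adv) < math.dist(v, c_clean)

held_out = [(activations(lbl), lbl) for lbl in (False, True) for _ in range(200)]
acc = sum(detect(v) == lbl for v, lbl in held_out) / len(held_out)
print(f"detector accuracy on held-out inputs: {acc:.2f}")
```

The monitor never inspects the input itself, only activation values, which is what lets this style of detection run alongside the deployed network.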

Fairness Certification of Neural Networks

Several studies have recently raised concerns about the fairness of neural networks. To list a few examples, commercial recidivism-risk and health-care systems have been found to be racially biased. There is also empirical evidence of gender bias in image searches, for instance when searching for “CEO”. And facial-recognition systems, which are increasingly used in law enforcement, have been found biased with respect to both gender and race. Consequently, it is critical that we design tools and techniques for certifying fairness of neural networks or characterizing their bias.

We make an important step toward meeting these needs by designing the LIBRA static-analysis framework [3] for certifying causal fairness of neural networks used for classification of tabular data. In particular, given input features considered sensitive to bias, a neural network is causally fair if its output classification is not affected by different values of the sensitive features. At a high level, our approach combines a forward and a backward static analysis. The forward pass aims to divide the input space into independent partitions such that the backward pass is able to effectively determine fairness of each partition. For the partitions where certification succeeds, LIBRA provides definite (in contrast to probabilistic) fairness guarantees; otherwise, it describes the input space for which bias occurs. We have designed this approach to be sound and configurable with respect to scalability and precision, thus enabling pay-as-you-go fairness certification.
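For intuition, causal fairness as defined above can be checked by brute force on a tiny, hypothetical model. LIBRA itself uses static analysis to cover infinite input spaces with definite guarantees; this sketch only enumerates a small finite one, and the feature names and models are invented.

```python
# Toy causal-fairness check: the output must not change when only the
# sensitive feature changes.
from itertools import product

# Hypothetical classifiers over (age_band, income_band, gender).
def fair_model(age, income, gender):
    return int(age + 2 * income >= 3)           # ignores `gender`

def biased_model(age, income, gender):
    return int(age + 2 * income + gender >= 3)  # `gender` leaks into the output

def causally_fair(model, feature_space, sensitive_values):
    """Brute-force check; returns (verdict, witness input where bias occurs)."""
    for age, income in feature_space:
        outputs = {model(age, income, g) for g in sensitive_values}
        if len(outputs) > 1:
            return False, (age, income)  # a concrete witness of bias
    return True, None

space = list(product(range(4), range(4)))  # all (age_band, income_band) pairs
print(causally_fair(fair_model, space, [0, 1]))
print(causally_fair(biased_model, space, [0, 1]))
```

Note that when the check fails it returns a concrete witness input, mirroring how LIBRA describes the part of the input space where bias occurs.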

References

[1] Fuyuan Zhang, Sankalan Pal Chowdhury and Maria Christakis. DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks. CoRR abs/1910.06296, 2019.

[2] Hasan Ferit Eniser, Maria Christakis and Valentin Wüstholz. RAID: Randomized Adversarial-Input Detection for Neural Networks. CoRR abs/2002.02776, 2020.

[3] Caterina Urban, Maria Christakis, Valentin Wüstholz and Fuyuan Zhang. Perfectly Parallel Fairness Certification of Neural Networks. CoRR abs/1912.02499, 2019.


Three MPI-SWS papers accepted at AAAI 2020

February 28, 2020

The following three MPI-SWS papers have been accepted to AAAI 2020, one of the flagship conferences in artificial intelligence:

  • Incremental Fairness in Two-Sided Market Platforms: On Smoothly Updating Recommendations by Gourab K. Patro, Abhijnan Chakraborty, Niloy Ganguly, Krishna P. Gummadi.
  • Regression Under Human Assistance by Abir De, Paramita Koley, Niloy Ganguly, Manuel Gomez-Rodriguez.
  • The Effectiveness of Peer Prediction in Long-Term Forecasting by Debmalya Mandal, Goran Radanovic, David C. Parkes.

Article on the failure of Differential Privacy reaches 1000 views

January 31, 2020

On January 9, 2020, MPI-SWS faculty member Paul Francis published the article "Dear Differential Privacy: Put Up or Shut Up" on Medium. The article, which has now reached 1000 views, describes the failure of Differential Privacy as the basis for data protection in the Facebook / Social Sciences One project.

The Facebook / Social Sciences One project is an attempt to release Facebook data on URL sharing to researchers so as to better understand the role of Facebook in influencing elections. The project raised 11 million dollars from private funders, and research grants were awarded to twelve research teams around the world. Facebook decided to use Differential Privacy as the means of anonymizing the data. After one year, however, Facebook had not supplied the data. When the funders threatened to pull the funding, Facebook did release a dataset, but the quality of the data was so poor that the proposed research could not be done.

Francis' article describes how and why the data release failed, discusses the shortcomings of Differential Privacy, and calls on the privacy research community to expand the scope of what passes for valid data anonymity research.


Research Spotlight: Logic and Learning

January 27, 2020

Software systems have become ubiquitous in our modern world and, consequently, so have bugs and glitches. While many software failures are harmless and often merely annoying, some can have catastrophic consequences. Just imagine the dire results of an autonomous car failing to stop at a red traffic light or a plane's control system becoming unresponsive during takeoff or landing.

In our research, we address these problems and develop intelligent tools that help engineers to build safe and reliable hardware, software, and cyber-physical systems. To this end, we employ a unique and promising strategy, which has recently also regained major attention in the artificial intelligence community: combining inductive techniques from the area of machine learning and deductive techniques from the area of mathematical logic (e.g., see the recent Dagstuhl seminar on "Logic and Learning", which was co-organized by one of our group members). Specifically, our research revolves around three topics, to which the remainder of this article is devoted: verification, synthesis, and formal specification languages.

Verification

Verification is an umbrella term referring to tools and techniques that formally prove that a given system satisfies its specification. In the context of software, a popular approach is deductive verification. The idea is easy to describe: first, the given program is augmented with annotations (typically loop invariants, pre-/post-conditions of method calls, and shape properties of data structures), which capture the developer's intent and facilitate the deductive reasoning in a later step; second, the program, together with its annotations, is translated into formulas in a suitable logic, called verification conditions; third, the verification conditions are checked for validity using automated theorem provers. Thanks to brilliant computer scientists, such as Edsger Dijkstra and Tony Hoare, as well as recent advances in constraint solving, the latter two steps can be (almost) entirely automated. However, the first step still remains a manual, error-prone task that requires significant training, experience, and ingenuity. In fact, this is one of the main obstacles preventing a widespread adoption of formal verification in practice.
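As a toy illustration of this three-step pipeline (the program and annotation are invented for exposition; a real verifier discharges the verification conditions with an automated theorem prover rather than brute force), consider a counting loop annotated with a candidate invariant:

```python
# Toy illustration: verifying
#   {n >= 0 and x == 0}  while x < n: x += 1  {x == n}
# with the proposed loop invariant 0 <= x <= n, checked
# exhaustively over small inputs instead of with a prover.

def inv(x, n):
    return 0 <= x <= n

def vc_holds(bound=30):
    for n in range(bound):
        # VC1: the precondition establishes the invariant
        if not inv(0, n):
            return False
        for x in range(bound):
            # VC2: the invariant is preserved by one loop iteration
            if inv(x, n) and x < n and not inv(x + 1, n):
                return False
            # VC3: invariant plus negated loop guard imply the postcondition
            if inv(x, n) and not (x < n) and x != n:
                return False
    return True

print(vc_holds())  # True: the annotation suffices for this toy program
```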

To also automate the challenging first step, we have developed a novel approach, called ICE learning [1], which intertwines inductive and deductive reasoning. The key idea is to pit a (deductive) program verifier against an (inductive) learning algorithm, whose goal is to infer suitable annotations from test-runs of the program and failed verification attempts. The actual learning proceeds in rounds. In each round, the learning algorithm proposes candidate annotations based on the data it has gathered so far. The program verifier then tries to prove the program correct using the proposed annotations. If the verification fails, the verifier reports data back to the learning algorithm explaining why the verification has failed. Based on this new information, the learning algorithm refines its conjecture and proceeds to the next round. The loop stops once the annotations are sufficient to prove the program correct.
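The round-based loop can be sketched as follows. This is a toy illustration only: the program, the finite candidate list, and the checks are invented for exposition, and a real ICE learner generalizes from the accumulated counterexamples rather than enumerating a fixed list.

```python
# Toy ICE-style loop for the program: x = 0; while x < 10: x += 2
# with the claim that x == 10 holds afterwards.

CANDIDATES = [  # a stand-in for the learner's hypothesis space
    ("x <= 10",          lambda x: x <= 10),
    ("x is even",        lambda x: x % 2 == 0),
    ("x even and <= 10", lambda x: x % 2 == 0 and x <= 10),
]

def verify(inv):
    """The 'verifier': check the candidate and report a counterexample."""
    if not inv(0):
        return ("positive", 0)                   # reachable state not covered
    for x in range(0, 14):
        if inv(x) and x < 10 and not inv(x + 2):
            return ("implication", (x, x + 2))   # candidate is not inductive
        if inv(x) and not (x < 10) and x != 10:
            return ("negative", x)               # too weak to prove the claim
    return None

def ice_learn():
    """The 'learner': propose candidates until one verifies."""
    for name, inv in CANDIDATES:
        if verify(inv) is None:
            return name
    return None

print(ice_learn())  # only "x even and <= 10" passes all three checks
```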

ICE learning has proven to be a very powerful approach that allows fully automatic verification of a wide variety of programs, ranging from recursive and concurrent programs over numeric data types [1], to algorithms manipulating dynamically allocated data structures [2], to industry-size GPU kernels [3]. In addition, the principles underlying ICE learning can be lifted to other challenging verification tasks, such as the verification of parameterized systems [4] as well as—in ongoing research—to the verification of cyber-physical and AI-based systems. You might want to try a demo immediately in your browser.

Synthesis

Synthesis goes beyond verification and could be considered the holy grail of computer science. In contrast to checking whether a hand-crafted program meets its specification, the dream is to fully automatically generate software (or a circuit for that matter) from specifications in a correct-by-construction manner.

Although this dream is unrealistic in its whole generality, there exist various application domains in which automated synthesis techniques have been applied with great success. In our own research, for instance, we have developed techniques for synthesizing safety controllers for reactive, cyber-physical systems that have to interact with a complex–and perhaps only partially known–environment [5, 6]. Moreover, we have proposed a general framework for generating loop-free code from input-output examples and specifications written in first-order logic [7]. Similar to ICE learning, these methods combine inductive and deductive reasoning, thereby unveiling and exploiting synergies of modern machine learning algorithms and highly-optimized symbolic reasoning engines.

Formal Specification Languages

Both verification and synthesis rely on the ability to write correct formal specifications, which have to precisely capture the engineer’s intuitive understanding of the system in question. In practice, however, formalizing the requirements of a system is notoriously difficult, and it is well known that the use of standard formalisms such as temporal logics requires a level of sophistication that many users might never develop.

We have recently started a new research project to combat this serious obstacle. Its main objective is to design algorithms that learn formal specifications in interaction with human engineers. As a first step towards this goal, we have developed a learning algorithm for the specification language “Linear Temporal Logic (LTL)”, which is the de facto standard in many verification and synthesis applications. You might think of this algorithm as a recommender system for formal specifications: the human engineer provides examples of the desired and undesired behavior of the system in question, while the recommender generates a series of LTL specifications that are consistent with the given examples; the engineer can then either choose one of the generated specifications or provide additional examples and rerun the recommender.
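The recommender idea can be sketched on finite traces (a simplification: LTL is normally interpreted over infinite traces, and the candidate formulas and example traces below are invented for exposition):

```python
# Enumerate small LTL-like formulas and keep those consistent with the
# engineer's positive and negative example traces.
# A trace is a list of sets of atomic propositions, one set per time step.

def ap(p):   # atomic proposition: p holds in the first state
    return lambda t: len(t) > 0 and p in t[0]

def G(f):    # "globally": f holds at every suffix of the trace
    return lambda t: all(f(t[i:]) for i in range(len(t)))

def F(f):    # "finally": f holds at some suffix of the trace
    return lambda t: any(f(t[i:]) for i in range(len(t)))

CANDIDATES = [("G a", G(ap("a"))), ("F b", F(ap("b"))), ("G F b", G(F(ap("b"))))]

positives = [[{"a"}, {"a", "b"}], [{"a", "b"}]]   # desired behavior
negatives = [[{"a"}, set()]]                      # undesired behavior

recommended = [name for name, f in CANDIDATES
               if all(f(t) for t in positives) and not any(f(t) for t in negatives)]
print(recommended)  # ['G a', 'F b', 'G F b']
```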

In ongoing research, we are extending our learning algorithm to a wide range of other specification languages, including Computation Tree Logic, Signal Temporal Logic, and the Property Specification Language. Moreover, we are developing feedback mechanisms that allow for a tighter integration of the human engineer into the loop. Again, you can try our technology immediately in your browser.

References

[1] D’Souza, Deepak; Ezudheen, P.; Garg, Pranav; Madhusudan, P.; Neider, Daniel: Horn-ICE Learning for Synthesizing Invariants and Contracts. In: Proceedings of the ACM on Programming Languages (PACMPL), volume 2 issue OOPSLA, pages 131:1–131:25. ACM, 2018.

[2] Neider, Daniel; Madhusudan, P.; Garg, Pranav; Saha, Shambwaditya; Park, Daejun: Invariant Synthesis for Incomplete Verification Engines. In: 24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2018), volume 10805 of Lecture Notes in Computer Science, pages 232–250. Springer, 2018.

[3] Neider, Daniel; Saha, Shambwaditya; Garg, Pranav; Madhusudan, P.: Sorcar: Property-Driven Algorithms for Learning Conjunctive Invariants. In: 26th International Static Analysis Symposium (SAS 2019), volume 11822 of Lecture Notes in Computer Science, pages 323–346. Springer, 2019.

[4] Neider, Daniel; Jansen, Nils: Regular Model Checking Using Solver Technologies and Automata Learning. In: 5th International NASA Formal Method Symposium (NFM 2013), volume 7871 of Lecture Notes in Computer Science, pages 16–31. Springer, 2013.

[5] Neider, Daniel; Topcu, Ufuk: An Automaton Learning Approach to Solving Safety Games over Infinite Graphs. In: 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016), volume 9636 of Lecture Notes in Computer Science, pages 204–221. Springer, 2016.

[6] Neider, Daniel; Markgraf, Oliver: Learning-based Synthesis of Safety Controllers. In: 2019 International Conference on Formal Methods in Computer Aided Design (FMCAD 2019), pages 120–128. IEEE, 2019.

[7] Neider, Daniel; Saha, Shambwaditya; Madhusudan, P.: Synthesizing Piece-wise Functions by Learning Classifiers. In: 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016), volume 9636 of Lecture Notes in Computer Science, pages 186–203. Springer, 2016.

[8] Neider, Daniel; Gavran, Ivan: Learning Linear Temporal Properties. In: 2018 International Conference on Formal Methods in Computer Aided Design (FMCAD 2018), pages 148–157. IEEE, 2018


Max Planck researchers publish 8 papers at POPL 2020 + a new record!

November 13, 2019

Researchers from the Max Planck Institute for Software Systems (MPI-SWS) and the new Max Planck Institute for Security and Privacy (MPI-SP) have authored a total of 8 papers accepted to POPL 2020 (over 10% of all accepted papers).  This is the third year in a row that MPI-SWS researchers have published 5 papers in POPL.  Furthermore, MPI-SWS faculty member Derek Dreyer is the first person ever to publish 4 papers in a single POPL.  Congratulations to all our POPL authors!

MPI-SWS papers:

  • The Future is Ours: Prophecy Variables in Separation Logic. Ralf Jung, Rodolphe Lepigre, Gaurav Parthasarathy, Marianna Rapoport, Amin Timany, Derek Dreyer, Bart Jacobs. [pdf]
  • The High-Level Benefits of Low-Level Sandboxing. Michael Sammler, Deepak Garg, Derek Dreyer, Tadeusz Litak. [pdf]
  • Persistency Semantics of the Intel-x86 Architecture. Azalea Raad, John Wickerson, Gil Neiger, Viktor Vafeiadis. [pdf]
  • RustBelt Meets Relaxed Memory. Hoang-Hai Dang, Jacques-Henri Jourdan, Jan-Oliver Kaiser, Derek Dreyer. [pdf]
  • Stacked Borrows: An Aliasing Model for Rust. Ralf Jung, Hoang-Hai Dang, Jeehoon Kang, Derek Dreyer. [pdf]

MPI-SP papers:

  • Formal Verification of a Constant-Time Preserving C Compiler. Gilles Barthe, Sandrine Blazy, Benjamin Gregoire, Rémi Hutin, Vincent Laporte, David Pichardie, Alix Trieu. [pdf]
  • A Probabilistic Separation Logic. Gilles Barthe, Justin Hsu, Kevin Liao. [pdf]
  • Relational Proofs for Quantum Programs. Gilles Barthe, Justin Hsu, Mingsheng Ying, Nengkun Yu, Li Zhou. [pdf]

Three MPI-SWS papers accepted at NeurIPS 2019

September 26, 2019

The following three MPI-SWS papers have been accepted to NeurIPS 2019, the flagship conference in machine learning:

  • Teaching Multiple Concepts to a Forgetful Learner
  • Learner-aware Teaching: Inverse Reinforcement Learning with Preferences and Constraints
  • Preference-Based Batch and Sequential Teaching: Towards a Unified View of Models


Three MPI-SWS Papers at ECRTS’19

July 9, 2019

MPI-SWS researchers, in collaboration with colleagues at TU Delft, the CISTER Research Centre at Polytechnic Institute of Porto, Saarland University, Bosch Corporate Research, and Scuola Superiore Sant’Anna in Pisa, Italy, are proud to present three papers at this year's Euromicro Conference on Real-Time Systems (ECRTS) in Stuttgart, Germany. ECRTS is one of the three top-ranked conferences on real-time systems (according to Google Scholar Metrics, it is ranked number one).

  • D. Casini, T. Blass, I. Lütkebohle, and B. Brandenburg, “Response-Time Analysis of ROS 2 Processing Chains under Reservation-Based Scheduling”, Proceedings of the 31st Euromicro Conference on Real-Time Systems (ECRTS 2019), pp. 6:1–6:23, July 2019. Open Access PDF
  • A. Gujarati, M. Nasri, R. Majumdar, and B. Brandenburg, “From Iteration to System Failure: Characterizing the FITness of Periodic Weakly-Hard Systems”, Proceedings of the 31st Euromicro Conference on Real-Time Systems (ECRTS 2019), pp. 9:1–9:23, July 2019. Open Access PDF
  • M. Nasri, G. Nelissen, and B. Brandenburg, “Response-Time Analysis of Limited-Preemptive Parallel DAG Tasks under Global Scheduling”, Proceedings of the 31st Euromicro Conference on Real-Time Systems (ECRTS 2019), pp. 21:1–21:23, July 2019. Open Access PDF

Research Spotlight: Tracing the Behavior of Cloud Applications

Consider the everyday websites and apps that we use: online shops, news websites, search engines, social networks, navigation apps, instant messaging apps, and many more.  Most of these programs don't just run in isolation on our laptops or phones, but instead connect over the internet to backends and databases running in datacenters across the world.  These backends perform a wide range of tasks, including constructing your personalized social network feed, storing and retrieving comments on message boards, and calculating results for your search query.  From our perspective as users, the actions we perform are simple, such as opening the app and loading our personalized profile.  But under the hood, each action usually results in complex processing across many processes and machines in a datacenter.

It has never been easier to write and deploy complex programs like these.  Cloud computing companies who own datacenters (such as Google, Amazon, and Microsoft) will gladly rent out computer services at a touch of a button, on demand.  Using designs like microservices, it is easy for programmers to construct complex programs out of smaller, simpler building blocks.  There are frameworks and open-source software packages to help developers construct big applications out of small pieces, to spread those pieces out over multiple machines in a datacenter, and to have the pieces communicate and interact with each other over the network.

Problems show up when software goes live.  Compared to developing and deploying the software, it is much harder to make sure everything goes smoothly when the software is up and running.  Distributed computer programs have lots of moving pieces, and there are lots of opportunities for things to go wrong.  For example, if one machine in the datacenter has a hardware problem, or the code is buggy, or too many people are trying to access it at once, the effects can be wide-ranging.  It can create a butterfly effect of problems, which we term cascading failures, that can lead to the app or website as a whole becoming slow, or going down entirely.  It's hard for programmers to get to the bottom of these kinds of problems, because there's no single machine or process doing all the work.  A problem that occurs on one machine might manifest as strange symptoms on a different machine later on.  Figuring out the root cause of a problem is challenging, as is anticipating problems in the first place.  Even big internet companies like Facebook and Google experience problems like this today.

These kinds of problems motivate the research of the Cloud Software Systems Research Group at the Max Planck Institute for Software Systems.  We research ways for operators to understand what's going on in their live distributed system, to troubleshoot problems when they occur at runtime, and to design systems that proactively avoid problems.  One approach we take is to design distributed tracing tools that can be used by the system operators.  The goal of distributed tracing is to record information about what a program does while it's running.  The tools record events, metrics, and performance counters, which together expose the current state and performance of the system, and how it changes over time.  A key additional step taken by distributed tracing tools is to record the causal ordering of events happening in the system — that is, the interactions and dependencies between machines and processes.  Causal ordering is very useful for diagnosing problems that span multiple processes and machines, especially when there might be lots of concurrent, unrelated activity going on at the same time.  It lets us reconstruct the end-to-end execution paths of requests, across all components and machines, and then reason about the sequence of conditions and events that led up to a problem.  Without causal ordering, this information is missing, and pinpointing the root cause of a problem would be like searching for a needle in a haystack.
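As a minimal sketch of why causal ordering helps (the span format and data are hypothetical, not taken from any particular tracing system), parent links between spans let a tool reconstruct a request's end-to-end path and follow the slowest branch toward a likely bottleneck:

```python
# Hypothetical trace data: (span_id, parent_id, service, duration_ms)
spans = [
    ("s1", None, "frontend", 120),
    ("s2", "s1", "feed-service", 80),
    ("s3", "s2", "database", 60),
    ("s4", "s1", "ad-service", 15),
]

def critical_path(spans):
    children = {}
    for sid, parent, svc, ms in spans:
        children.setdefault(parent, []).append((sid, svc, ms))

    def walk(sid, svc, ms):
        kids = children.get(sid, [])
        if not kids:
            return [svc], ms
        # follow the slowest child: the branch most likely delaying the request
        best_path, best_ms = max((walk(*k) for k in kids), key=lambda r: r[1])
        return [svc] + best_path, ms

    root = children[None][0]
    return walk(*root)[0]

print(critical_path(spans))  # ['frontend', 'feed-service', 'database']
```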

The Cloud Software Systems Research Group has looked at a number of challenges in making distributed tracing tools efficient, scalable, and more widely deployable. In our recent work, we have thought about how you can efficiently insert instrumentation to record entirely new information, into an already-running system, without having to rebuild or restart the system [1].  We have looked at problems in dealing with the large volume of data generated by distributed tracing tools, and deciding which data is most valuable to keep if there's not enough room to keep it all [2].  We have also considered the implications of distributed tracing at extremely large scale, and how to efficiently collect, aggregate, and process tracing data in real-time [3].

In our ongoing work, we are investigating ways for the data recorded by tracing tools to feed back into decisions made by datacenter infrastructure, such as resource management, scheduling, and load balancing.  We are also considering new challenges that arise in scalable data analysis: how do you analyze large datasets of traces and derive insights about aggregate system behavior?  One approach we are exploring uses techniques in representational machine learning, to transform richly annotated tracing data into a more tractable form for interactive analysis.  More broadly, our group investigates a variety of approaches besides just distributed tracing tools, including ways to better design and develop the distributed systems in the first place.  Ultimately, our goal is to make modern cloud systems easier to operate, understand, and diagnose.

References

[1] Jonathan Mace, Ryan Roelke, and Rodrigo Fonseca.  Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems.  In Proceedings of the 25th ACM Symposium on Operating Systems Principles (SOSP '15), 2015.

[2] Pedro Las-Casas, Jonathan Mace, Dorgival Guedes, and Rodrigo Fonseca.  Weighted Sampling of Execution Traces: Capturing More Needles and Less Hay.  In Proceedings of the 9th ACM Symposium on Cloud Computing (SoCC'18), 2018.

[3] Jonathan Kaldor, Jonathan Mace, Michał Bejda, Edison Gao, Wiktor Kuropatwa, Joe O'Neill, Kian Win Ong, Bill Schaller, Pingjia Shan, Brendan Viscomi, Vinod Venkataraman, Kaushik Veeraraghavan, Yee Jiun Song.  Canopy: An End-to-End Performance Tracing And Analysis System. In Proceedings of the 26th ACM Symposium on Operating Systems Principles (SOSP '17), 2017.


Research Spotlight: A General Data Anonymity Measure

January 30, 2019

A long-standing problem both within research and in society generally is that of how to analyze data about people without risking the privacy of those people. There is an ever-growing amount of data about people: medical, financial, social, government, geo-location, etc. This data is very valuable in terms of better understanding ourselves. Unfortunately, analyzing the data in its raw form carries the risk of exposing private information about people.

The problem of how to analyze data while protecting privacy has been around for more than 40 years---ever since the first data processing systems were developed. Most workable solutions are ad hoc: practitioners try things like removing personally identifying information (e.g. names and addresses), aggregating data, removing outlying data, and even swapping some data between individuals. This process can work reasonably well, but it is time-consuming, requires substantial expertise to get right, and invariably limits the accuracy of the analysis or the types of analysis that can be done.

A holy grail within computer science is to come up with an anonymization system that has formal guarantees of anonymity and provides good analytics. Most effort in this direction has focused on two ideas, K-anonymity and Differential Privacy. Both can provide strong anonymity, but except in rare cases neither can do so while still providing adequate analytics. As a result, common practice is still to use informal ad hoc techniques with weak anonymization, and to mitigate risk by, for instance, sharing data only with trusted partners.
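To make the first of these ideas concrete: k-anonymity requires every combination of quasi-identifier values in a dataset to be shared by at least k records. A minimal check might look as follows (illustrative data and field names):

```python
# Minimal k-anonymity check: every quasi-identifier combination
# must appear in at least k records.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

rows = [  # generalized records; "disease" is the sensitive attribute
    {"zip": "661**", "age": "30-39", "disease": "flu"},
    {"zip": "661**", "age": "30-39", "disease": "cold"},
    {"zip": "662**", "age": "40-49", "disease": "flu"},
]

# False: the (662**, 40-49) group contains only one record
print(is_k_anonymous(rows, ["zip", "age"], 2))
```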

The European Union has raised the stakes with the General Data Protection Regulation (GDPR). The GDPR has strict rules on how personal data may be used, and threatens large fines to organizations that do not follow the rules. However, GDPR says that if data is anonymous, then it is not considered personal and does not fall under the rules. Unfortunately, there are no precise guidelines on how to determine if data is anonymous or not. Member states are expected to come up with certification programs for anonymity, but do not know how to do so.

This is where we come in. Paul Francis' group, in research partnership with the startup Aircloak, has been developing an anonymizing technology called Diffix over the last five years. Diffix is an empirical, not a formal, technology, and so the question remains: how anonymous is Diffix? While it may not be possible to answer that question precisely, one way we try is through a bounty program: we pay attackers who can demonstrate how to compromise anonymity in our system. Last year we ran the first (and still only) bounty program for anonymity. The program was successful in that some attacks were found, and in the process of designing defensive measures, Diffix has improved.

In order to run the bounty program, we naturally needed a measure of anonymity so that we could decide how much to pay attackers. We designed a measure based on how much confidence an attacker has in a prediction of an individual's data values, among other things. At some point, we realized that our measure applies not just to attacks on Diffix, but to any anonymization system. We also realized that our measure might be useful in the design of certification programs for anonymity.
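The flavor of such a confidence-based measure can be illustrated as follows (a hypothetical formula for exposition only, not the actual measure used in the bounty program or the GDA Score):

```python
# Hypothetical illustration: measure how much an attack improves an
# attacker's predictions of sensitive values over simply guessing the
# most common value in the population.

def prediction_improvement(attack_correct, attack_total, baseline_rate):
    """Improvement of the attack's accuracy relative to a baseline guess.

    0 means the attack is no better than guessing; 1 means it is
    always correct; positive values indicate leaked information.
    """
    attack_rate = attack_correct / attack_total
    return (attack_rate - baseline_rate) / (1 - baseline_rate)

# e.g. the attacker predicted 80 of 100 values correctly, while guessing
# the majority value alone would already be right 50% of the time
print(prediction_improvement(80, 100, 0.5))  # 0.6
```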

We decided to develop a general score for anonymity, and to build tools that would allow anyone to apply the measure to any anonymization technology. The score is called the GDA Score, for General Data Anonymity Score.

The primary strength of the GDA Score is that it can be applied to any anonymization method, and therefore allows apples-to-apples comparisons. The primary weakness is that it is based on empirical attacks (real attacks against real systems), and therefore the score is only as good as the attacks themselves. If there are unknown attacks on a system, then the score won't reflect this and may therefore make a system look more anonymous than it is.

Our hope is that over time enough attacks will be developed that we can have high confidence in the GDA Score. Towards that end, we've started the Open GDA Score Project. This is a community effort to provide software and databases in support of developing new attacks, and a repository where the scores can be viewed. We recently launched the project in the form of a website, www.gda-score.org, and some initial tools and simple attacks. We will continue to develop tools and new attacks, but our goal is to attract broad participation from the community.

For more information, visit www.gda-score.org.


MPI-SWS article published in the Proceedings of the National Academy of Sciences (PNAS)

January 24, 2019

The article "Enhancing Human Learning via spaced repetition optimization", coauthored by MPI-SWS and MPI-IS researchers, has been published in the Proceedings of the National Academy of Sciences (PNAS), a highly prestigious journal.

The (open-access) article can be found here: https://www.pnas.org/content/early/2019/01/18/1815156116.

Five MPI-SWS papers at POPL 2019!

January 3, 2019
Just as in 2018, MPI-SWS researchers again authored a total of five POPL papers in 2019:
  • Bridging the Gap Between Programming Languages and Hardware Weak Memory Models by Anton Podkopaev, Ori Lahav, and Viktor Vafeiadis.
  • From Fine- to Coarse-Grained Dynamic Information Flow Control and Back by Marco Vassena, Alejandro Russo, Deepak Garg, Vineet Rajani, and Deian Stefan.
  • Formal verification of higher-order probabilistic programs by Tetsuya Sato, Alejandro Aguirre, Gilles Barthe, Marco Gaboardi, Deepak Garg, Justin Hsu.
  • Grounding Thin-Air Reads with Event Structures by Soham Chakraborty and Viktor Vafeiadis.
  • On Library Correctness under Weak Memory Consistency by Azalea Raad, Marko Doko, Lovro Rožić, Ori Lahav, and Viktor Vafeiadis.

What's more, the MPI-SWS Software Analysis and Verification group has a whole session to itself at POPL 2019. The weak memory session on Thursday, Jan 17, comprises the three papers by Viktor Vafeiadis, his students, postdocs, and collaborators.


Five MPI-SWS papers accepted at NIPS 2018

October 4, 2018

The following five MPI-SWS papers have been accepted to NIPS 2018, the flagship conference in machine learning:

  • Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
  • Teaching Inverse Reinforcement Learners via Features and Demonstrations
  • Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners
  • Deep Reinforcement Learning of Marked Temporal Point Processes
  • Enhancing the Accuracy and Fairness of Human Decision Making


Research Spotlight: Learning to interact with learning agents

September 26, 2018

Many real-world systems involve repeatedly making decisions under uncertainty—for instance, choosing one of several products to recommend to a user in an online recommendation service, or dynamically allocating resources among available stock options in a financial market. Machine learning (ML) algorithms driving these systems typically operate under the assumption that they are interacting with static components, e.g., users' preferences are fixed, trading tools providing stock recommendations are static, and data distributions are stationary. This assumption is often violated in modern systems, as these algorithms are increasingly interacting with and seeking information from learning agents including people, robots, and adaptive adversaries. Consequently, many well-studied ML frameworks and algorithmic techniques fail to provide desirable theoretical guarantees—for instance, algorithms might converge to a sub-optimal solution or fail arbitrarily badly in these settings.

Researchers at the Machine Teaching Group, MPI-SWS are designing novel ML algorithms that have to interact with agents that are adaptive or learning over time, especially in situations when the algorithm's decisions directly affect the state dynamics of these agents. In recent work [1], they have studied the above-mentioned problem in the context of two fundamental machine learning frameworks: (i) online learning using experts' advice and (ii) active learning using labeling oracles. In particular, they consider a setting where experts/oracles themselves are learning agents. For instance, active learning algorithms typically query labels from an oracle, e.g., a (possibly noisy) domain expert; however, in emerging crowd-powered systems, these experts are getting replaced by inexpert participants who could themselves be learning over time (e.g., volunteers in citizen science projects). They have shown that when these experts/oracles themselves are learning agents, well-studied algorithms (like the EXP3 algorithm) fail to converge to the optimal solution and can have arbitrarily bad performance for this new problem setting. Furthermore, they provide an impossibility result showing that without sharing any information across experts, it is impossible to achieve convergence guarantees. This calls for developing novel algorithms with practical ways of coordination between the central algorithm and learning agents to achieve desired guarantees.
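For reference, the EXP3 algorithm mentioned above can be sketched as follows. This is a standard textbook version for the classical adversarial-bandit setting, not the new algorithms studied in [1]; `reward_fn` is a stand-in for the environment, and the example rewards are invented.

```python
# A compact sketch of the standard EXP3 algorithm (adversarial bandits).
import math
import random

def exp3(reward_fn, n_arms, horizon, gamma=0.1):
    weights = [1.0] * n_arms
    total = 0.0
    for _ in range(horizon):
        s = sum(weights)
        # mix the weight-based distribution with uniform exploration
        probs = [(1 - gamma) * w / s + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = reward_fn(arm)  # reward assumed to lie in [0, 1]
        total += reward
        # importance-weighted estimate keeps the update unbiased
        weights[arm] *= math.exp(gamma * (reward / probs[arm]) / n_arms)
        m = max(weights)  # renormalize to keep weights numerically bounded
        weights = [w / m for w in weights]
    return total

random.seed(0)  # deterministic for the illustration
total = exp3(lambda a: 0.8 if a == 1 else 0.1, n_arms=2, horizon=1000)
print(total)  # well above the ~450 expected from uniformly random play
```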

Currently, researchers at the Machine Teaching Group are studying these challenges in the context of designing next-generation human-AI collaborative systems. As a concrete application setting, consider a car driving scenario where the goal is to develop an assistive AI agent to drive the car in an auto-pilot mode, but giving control back to the human driver in safety-critical situations. They study this setting by casting it as a multi-agent reinforcement learning problem. When the human agent has a stationary policy (i.e., the actions taken by the human driver in different states/scenarios are fixed), it is trivial to learn an optimal policy for the AI agent that maximizes the overall performance of this collaborative system. However, in real-life settings where a human driver would adapt their behavior in response to the presence of an auto-pilot mode, they show that the problem of learning an optimal policy for the AI agent becomes computationally intractable. This work is one of the recent additions to an expanding set of results and algorithmic techniques developed by MPI-SWS researchers in the nascent area of Machine Teaching [2, 3].

References

[1] Adish Singla, Hamed Hassani, and Andreas Krause. Learning to Interact with Learning Agents. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI'18), 2018.

[2] Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N. Rafferty. An Overview of Machine Teaching. arXiv 1801.05927, 2018.

[3] Maya Cakmak, Anna N. Rafferty, Adish Singla, Xiaojin Zhu, and Sandra Zilles. Workshop on Teaching Machines, Robots, and Humans. NIPS 2017.


OOPSLA’18: Four MPI-SWS papers

September 12, 2018

Four papers to appear at OOPSLA'18 have been (co)-authored by MPI-SWS members.


Paul Francis featured in CNIL interview

June 19, 2018

Paul Francis was featured in an interview by CNIL, the French national data protection authority. The interview discusses the innovative way in which MPI-SWS is tackling the data anonymity problem. The interview follows Paul's visit to CNIL in May 2018, where he presented the first-ever bounty program for anonymity. The bounty program, designed by MPI-SWS and implemented by the startup Aircloak, is one of the innovative ways in which MPI-SWS develops practical data anonymity techniques.



Research Spotlight: From Newton to Turing to cyber-physical systems

February 13, 2018


In 1937, a young Englishman by the name of Alan M. Turing published a paper with the obscure title "On computable numbers, with an application to the Entscheidungsproblem" in the Proceedings of the London Mathematical Society. In doing so, he arguably laid the mathematical foundations of modern computer science. Turing's seminal contribution was to show that the famous Entscheidungsproblem, formulated by the great German mathematician David Hilbert several years earlier, could not be solved: more precisely, Turing proved (in modern parlance) that there is no algorithm for determining whether a given computer program halts; in other words, the famous Halting Problem is undecidable.

Although at the time it seemed a rather esoteric concern, the Halting Problem (and related questions) has dramatically gained in importance and relevance in more contemporary times. Fast forward to the 21st century: nowadays, it is widely acknowledged that enabling engineers, programmers, and researchers to automatically verify and certify the correctness of the computer systems they design is one of the Grand Challenges of computer science. In increasingly many instances, it is absolutely critical that the software governing various aspects of our daily lives (such as that running on an aircraft controller, for example) behave exactly as intended, lest catastrophic consequences ensue.

What classes of infinite-state programs can be analyzed algorithmically?

Researchers at the Foundations of Algorithmic Verification group are investigating which classes of infinite-state programs can, at least in principle, be fully handled and analyzed algorithmically. They do so by viewing computer programs abstractly as dynamical systems, and they seek to design exact algorithms enabling one to fully analyse the behaviour of such systems. In particular, they are presently tackling a range of central algorithmic problems from verification, synthesis, performance, and control for linear dynamical systems, drawing among others on tools from number theory, Diophantine geometry, and algebraic geometry. The overarching goal is to offer a systematic, exact computational treatment of various important classes of dynamical systems and other fundamental models used in mathematics, computer science, and the quantitative sciences. Their achievements include several decidability and hardness results for linear recurrence sequences, which can be used to model simple loops in computer programs, answering a number of longstanding open questions in the mathematics and computer science literature.
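To illustrate the loop-to-recurrence connection (this sketch is for intuition only and is not from the cited work): a loop whose variables are updated linearly computes exactly a linear recurrence sequence, and asking whether the loop ever reaches a particular state amounts to asking whether the sequence ever hits zero, which for unbounded n is the notoriously hard Skolem Problem.

```python
def recurrence_orbit(coeffs, initial, n_steps):
    """Unroll the linear recurrence u_{n+k} = c_{k-1}*u_{n+k-1} + ... + c_0*u_n,
    with coeffs = [c_0, ..., c_{k-1}].  A simple loop with linear variable
    updates computes precisely such a sequence."""
    seq = list(initial)
    k = len(coeffs)
    for _ in range(n_steps):
        seq.append(sum(c * u for c, u in zip(coeffs, seq[-k:])))
    return seq

def hits_zero(seq):
    """Bounded check: does the sequence hit 0 within the computed prefix?
    Deciding this for *all* n is the (open, in general) Skolem Problem."""
    return any(u == 0 for u in seq)
```

For example, the Fibonacci recurrence never hits zero from the initial values (1, 1), while u_{n+2} = 2u_{n+1} - u_n started from (2, 1) reaches zero after two steps; the open question is deciding such reachability for every recurrence, not just a finite prefix.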

In a series of recent papers [1, 2], they have attacked the so-called Zero Problem for linear differential equations, i.e., the question of determining algorithmically whether the unique solution to a given linear differential equation has a zero or not. Such equations, which go back as far as Newton, are ubiquitous in mathematics, physics, and engineering; they are also particularly useful for modelling cyber-physical systems, i.e., digital systems that evolve in and interact with a continuous environment. In their work, they obtained several important partial results: if one is interested in the existence of a zero over a bounded time interval, then it is possible to determine this algorithmically, provided that a certain hypothesis from the mathematical field of number theory, known as Schanuel's Conjecture, is true. They were also able to partially account for the fact that the Zero Problem has hitherto remained open in full generality: if one were able to solve it in dimension 9 (or higher), this would in turn enable one to solve various longstanding hard open problems from a field of mathematics known as Diophantine approximation. In doing so, they exhibited surprising and unexpected connections between the modelling and analysis of cyber-physical systems and seemingly unrelated deep mathematical theories dealing with questions about whole numbers.
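As a purely numerical illustration of the bounded-interval version of the Zero Problem (the cited papers give exact symbolic decision procedures, conditional on Schanuel's Conjecture; this sampling sketch is only for intuition): simulate the simple linear equation x'' = -x and scan a bounded interval for a sign change of the solution.

```python
def first_zero(x0, v0, t_max, dt=1e-3):
    """Numerically scan the solution of x'' = -x with x(0)=x0, x'(0)=v0
    for a zero on [0, t_max].  Returns an approximate crossing time, or
    None if no sign change is seen.  Illustrative only: floating-point
    sampling cannot *decide* the Zero Problem."""
    x, v, t = x0, v0, 0.0
    while t < t_max:
        x_prev = x
        v -= x * dt          # semi-implicit Euler step for x' = v, v' = -x
        x += v * dt
        t += dt
        if x_prev * x <= 0.0:   # a sign change brackets a zero
            return t
    return None
```

With x(0) = 1 and x'(0) = 0 the exact solution is cos(t), whose first zero is at t = π/2; the scan recovers it to within the step size, but only an exact symbolic method can certify the absence of zeros.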

References

[1] Ventsislav Chonev, Joel Ouaknine, and James Worrell. On recurrent reachability for continuous linear dynamical systems. In Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2016.

[2] Ventsislav Chonev, Joel Ouaknine, and James Worrell. On the Skolem Problem for continuous linear dynamical systems. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming (ICALP), 2016.


Four MPI-SWS papers accepted at AAAI 2018

February 13, 2018

Four papers from MPI-SWS have been accepted to AAAI 2018:

  • Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
  • Learning to Interact with Learning Agents
  • Information Gathering with Peers: Submodular Optimization with Peer-Prediction Constraints
  • Learning User Preferences to Incentivize Exploration in the Sharing Economy

Three MPI-SWS papers accepted at WWW 2018

February 13, 2018

Three papers from MPI-SWS have been accepted to the 2018 Web Conference:

  • Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
  • On the Causal Effect of Badges
  • Fake News Detection in Social Networks via Crowd Signals

POPLpalooza: Five MPI-SWS papers at POPL 2018 + a new record!

January 18, 2018


In 2018, MPI-SWS researchers authored a total of five POPL papers:

  • Parametricity versus the Universal Type. Dominique Devriese, Marco Patrignani, Frank Piessens.
  • Effective Stateless Model Checking for C/C++ Concurrency. Michalis Kokologiannakis, Ori Lahav, Kostis Sagonas, Viktor Vafeiadis.
  • Monadic refinements for relational cost analysis. Ivan Radicek, Gilles Barthe, Marco Gaboardi, Deepak Garg, Florian Zuleger.
  • Why is Random Testing Effective for Partition Tolerance Bugs? Rupak Majumdar, Filip Niksic.
  • RustBelt: Securing the Foundations of the Rust Programming Language. Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, Derek Dreyer.

Furthermore, with the "RustBelt" paper, MPI-SWS faculty member Derek Dreyer cements a 10-year streak of having at least one POPL paper each year, breaking the all-time record of 9 years previously held by John Mitchell at Stanford. Congratulations, Derek!


Research Spotlight: Teaching machine learning algorithms to be fair

December 7, 2017


Machine learning algorithms are increasingly being used to automate decision making in several domains such as hiring, lending and crime-risk prediction. These algorithms have shown significant promise in leveraging large or “big” training datasets to achieve high prediction accuracy, sometimes surpassing even human accuracy.

Unfortunately, some recent investigations have shown that machine learning algorithms can also lead to unfair outcomes. For example, a recent ProPublica study found that COMPAS, a tool used in US courtrooms to assist judges with crime-risk prediction, was unfair towards black defendants. In fact, several studies from governments, regulatory authorities, researchers, and civil rights groups have raised concerns about machine learning potentially acting as a tool for perpetuating existing unfair practices in society and, worse, introducing new kinds of unfairness into prediction tasks. As a consequence, a flurry of recent research has focused on defining and implementing appropriate computational notions of fairness for machine learning algorithms.

Parity-based fairness

Existing computational notions of fairness in the machine learning literature are largely inspired by the concept of discrimination in social sciences and law. These notions require the decision outcomes to ensure parity (i.e., equality) in treatment and in impact.

Notions based on parity in treatment require that the decision algorithm should not take into account the sensitive feature information (e.g., gender, race) of a user. Notions based on parity in impact require that the decision algorithm should give beneficial decision outcomes (e.g., granting a loan) to similar percentages of people from all sensitive feature groups (e.g., men, women).
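Parity in impact can be checked directly from decision outcomes. A toy sketch (the function names and the tolerance threshold are illustrative, not from the paper):

```python
def beneficial_rate(outcomes, groups, group):
    """Fraction of members of `group` receiving the beneficial outcome (1)."""
    members = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(members) / len(members)

def satisfies_impact_parity(outcomes, groups, tolerance=0.05):
    """Parity in impact: beneficial-outcome rates of all sensitive feature
    groups must be (nearly) equal."""
    rates = [beneficial_rate(outcomes, groups, g) for g in set(groups)]
    return max(rates) - min(rates) <= tolerance
```

Parity in treatment, by contrast, is a property of the decision rule itself (the sensitive feature must not be an input), so it cannot be verified from outcomes alone.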

However, in many cases, these existing notions are too stringent and can lead to unexpected side effects. For example, ensuring parity has been shown to cause significant reductions in prediction accuracy. Parity may also lead to scenarios where none of the groups involved in decision making (e.g., neither men nor women) gets beneficial outcomes. In other words, these scenarios might be preferred neither by the decision maker using the algorithm (due to diminished accuracy) nor by the groups involved (due to the meager benefits they receive).

User preferences and fairness

In recent work, to appear at NIPS 2017, researchers at MPI-SWS have introduced two new computational notions of algorithmic fairness: preferred treatment and preferred impact. These notions are inspired by ideas related to envy-freeness and the bargaining problem in economics and game theory. Preferred treatment and preferred impact leverage these ideas to build more accurate solutions that are preferable for both the decision maker and the user groups.

The new notion of preferred treatment allows basing the decisions on sensitive feature information (thereby relaxing the parity treatment criterion) as long as the decision outcomes do not lead to envy. That is, each group of users prefers their own group membership over other groups and does not feel that presenting itself to the algorithm as another group would have led to better outcomes for the group.

The new notion of preferred impact allows differences in beneficial outcome rates for different groups (thereby relaxing the parity impact criterion) as long as all the groups get more beneficial outcomes than what they would have received under the parity impact criterion.
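Both preference-based notions reduce to simple checks over group-conditional outcome rates. A hypothetical sketch (these helper names and the rate tables are illustrative, not from the paper):

```python
def preferred_treatment(rates, groups):
    """rates[g][h] = beneficial-outcome rate group g would obtain under the
    decision rule used for group h.  Preferred treatment (envy-freeness):
    no group would do better by presenting itself as another group."""
    return all(rates[g][g] >= rates[g][h] for g in groups for h in groups)

def preferred_impact(new_rates, parity_rates, groups):
    """Preferred impact: every group receives at least the beneficial-outcome
    rate it would get under the parity-in-impact solution."""
    return all(new_rates[g] >= parity_rates[g] for g in groups)
```

Note how both checks only require inequalities in each group's favor, rather than equality across groups, which is what leaves room for higher overall accuracy.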

In their work, MPI-SWS researchers have developed a technique to ensure machine learning algorithms satisfy preferred treatment and/or preferred impact. They also tested their technique by designing crime-predicting machine-learning algorithms that satisfy the above-mentioned notions. In their experiments, they show that preference-based fairness notions can provide significant gains in overall decision-making accuracy as compared to parity-based fairness, while simultaneously increasing the beneficial outcomes for the groups involved.

This work is one of the most recent additions to an expanding set of techniques developed by MPI-SWS researchers to enable fairness, accountability and interpretability of machine learning algorithms.

References

Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna Gummadi and Adrian Weller. From Parity to Preference: Learning with Cost-effective Notions of Fairness. Neural Information Processing Systems (NIPS), Long Beach (CA, USA), December 2017


MPI-SWS paper accepted into WSDM ’18

November 2, 2017

The paper "Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation" by MPI-SWS researchers, in collaboration with researchers at KAIST and MPI-IS, has been accepted to WSDM 2018, one of the flagship conferences in data mining.

WSDM will take place in Los Angeles (CA, USA) in February 2018.

MPI-SWS paper accepted into NIPS ’17

September 22, 2017

The paper "From Parity to Preference: Learning with Cost-effective Notions of Fairness" by MPI-SWS researchers, in collaboration with researchers at the University of Cambridge and MPI-IS, has been accepted to NIPS 2017, the flagship conference in machine learning.

NIPS will take place in Long Beach (CA, USA) in December 2017.

MPI-SWS paper accepted into RTSS’17

July 10, 2017

The paper entitled "An Exact and Sustainable Analysis of Non-Preemptive Scheduling" by MPI-SWS researchers Mitra Nasri and Björn Brandenburg was accepted into the program of the 38th IEEE Real-Time Systems Symposium (RTSS 2017).

RTSS 2017 will be held from December 6 to December 8 in Paris, France.


Targeted malware paper accepted at NDSS ’17

January 23, 2017


The paper "A Broad View of the Ecosystem of Socially Engineered Exploit Documents" was accepted at NDSS '17 (Network and Distributed System Security Symposium). The authors include Stevens Le Blond, Cédric Gilbert, Utkarsh Upadhyay, and Manuel Gomez Rodriguez from MPI-SWS, as well as David Choffnes from Northeastern University.

Our understanding of exploit documents as a vector to deliver targeted malware is limited to a handful of studies done in collaboration with the Tibetans, Uyghurs, and political dissidents in the Middle East. In this measurement study, we present a complementary methodology relying only on publicly available data to capture and analyze targeted attacks with both greater scale and depth. In particular, we detect exploit documents uploaded over one year to a large anti-virus aggregator (VirusTotal) and then mine the social engineering information they embed to infer their likely targets and contextual information of the attacks. We identify attacks against two ethnic groups (Tibet and Uyghur) as well as 12 countries spanning America, Asia, and Europe. We then analyze the exploit documents dynamically in sandboxes to correlate and compare the exploited vulnerabilities and malware families targeting different groups. Finally, we use machine learning to infer the role of the uploaders of these documents to VirusTotal (i.e., attacker, targeted victim, or third-party), which enables their classification based only on their metadata, without any dynamic analysis. We make our datasets available to the academic community.


Five MPI-SWS papers accepted at WWW ’17

December 23, 2016

Five papers from MPI-SWS have been accepted to WWW 2017:

  • Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
  • Modeling the Dynamics of Online Learning Activity
  • Distilling Information Reliability and Source Trustworthiness from Digital Traces
  • Optimizing the Recency-Relevancy Trade-off in Online News Recommendations
  • Predicting the Success of Online Petitions Leveraging Multi-dimensional Time-Series

The 26th International World Wide Web Conference (WWW) will take place in Perth, Australia in April 2017.

Three MPI-SWS papers accepted to POPL’17

October 18, 2016

Three papers from MPI-SWS were accepted to ACM POPL 2017:

  • A promising semantics for relaxed-memory concurrency
  • Relational cost analysis
  • Thread modularity at many levels: a pearl in compositional verification

Two MPI-SWS papers accepted at WSDM’17

October 18, 2016

Two papers from MPI-SWS were accepted to ACM WSDM 2017:

  • RedQueen: An Online Algorithm for Smart Broadcasting in Social Networks
  • Uncovering the Dynamics of Crowdlearning and the Value of Knowledge

Two MPI-SWS papers accepted at RTNS’16

September 7, 2016

Two MPI-SWS papers were accepted into the program of the 24th International Conference on Real-Time Networks and Systems (RTNS 2016):

  • Quantifying the Effect of Period Ratios on Schedulability of Rate Monotonic
  • On the Problem of Finding Optimal Harmonic Periods

RTNS 2016 will be held from October 19 to October 21 in Brest, France.

Two MPI-SWS papers accepted into RTSS’16

September 1, 2016

Two papers of MPI-SWS researchers were accepted into the program of the 37th IEEE Real-Time Systems Symposium (RTSS 2016):

  • A Blocking Bound for Nested FIFO Spin Locks
  • Global Scheduling Not Required: Simple, Near-Optimal Multiprocessor Real-Time Scheduling with Semi-Partitioned Reservations

RTSS 2016 will be held from November 29 until December 2 in Porto, Portugal.

MPI-SWS research in the New York Times

September 1, 2014

MPI-SWS faculty member Cristian Danescu-Niculescu-Mizil's work on linguistic change was mentioned in The New York Times. This is joint work with Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts.

MPI-SWS research in the news

May 1, 2014

MPI-SWS faculty member Cristian Danescu-Niculescu-Mizil has had his work on how to ask for a favor featured on various media outlets including the Huffington Post, Gizmodo, Lifehacker, Slate's Future Tense blog, ABC News and Süddeutsche Zeitung. Links to all the articles can be found here. This is joint work with Tim Althoff and Dan Jurafsky.

Cristian Danescu-Niculescu-Mizil quoted by ABC News

August 1, 2013

MPI-SWS faculty member Cristian Danescu-Niculescu-Mizil was quoted in a recent ABC News article about social bias effects in social media.

MPI-SWS research in the news


A recent WWW 2012 paper by Krishna Gummadi, Bimal Viswanath, and their coauthors was covered by GigaOM, a popular technology news blog, in an article titled Who's to blame for Twitter spam? Obama, Gaga, and you.

Stevens Le Blond's work on security flaws in Skype and other peer-to-peer applications has been receiving global media attention: WSJ, Le Monde (French), die Zeit (German), Daily Mail, New Scientist, Slashdot, Wired, and the New Scientist "One Percent" blog.


MPI-SWS study exposing Facebook privacy leak attracts global media attention


A study by MPI-SWS researchers Saikat Guha (now at Microsoft Research), Bin Cheng, and Paul Francis has been highlighted on CNN, NPR, The Washington Post, Fox News, and other major media outlets.

The study, which will be presented at the ACM Internet Measurement Conference (IMC) in November, looks at the targeting behavior of Google and Facebook. While the goal of the study was to understand targeting in general, the researchers discovered that gay Facebook users can unknowingly reveal to advertisers that they are gay simply by clicking on an ad targeted to gay men. The ads appear innocuous in that they make no mention of targeting gay users (for instance, an ad for a nursing degree). A user's sexual orientation can be leaked even if the user made his sexual orientation private using Facebook's privacy settings.

This study was done as part of a broader research project to design techniques for making advertising more private.
