News 2018

Social & Information Systems

Krishna Gummadi and Alan Mislove awarded a Facebook "Secure the Internet" grant

October 2018
MPI-SWS faculty member Krishna Gummadi and MPI-SWS alumnus Alan Mislove have been awarded a "Secure the Internet" grant by Facebook. Their proposal, “Towards privacy-protecting aggregate statistics in PII-based targeted advertising,” has been awarded $60,000 to develop techniques for revealing advertising statistics that provide hard guarantees of user privacy, based on a principles-first approach. Their goal is to develop a differential-privacy-like approach that can be applied to existing advertising systems.

The Facebook "Secure the Internet" grant program is designed to improve the security, privacy, and safety of internet users. Gummadi and Mislove's proposal was one of only 10 winning proposals, which were together awarded more than $800,000 by Facebook.
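
The announcement does not spell out the mechanism, but a differential-privacy-style release of aggregate ad statistics typically works by adding calibrated noise to each published count. Below is a minimal sketch of the Laplace mechanism in Python; the function name, the epsilon value, and the example count are illustrative assumptions, not details of the funded proposal.

```python
import numpy as np

def laplace_private_count(true_count: int, epsilon: float,
                          sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    Adding or removing one user changes the count by at most `sensitivity`,
    so noise drawn from Laplace(scale = sensitivity / epsilon) masks any
    single individual's contribution.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: users who clicked an ad, released with epsilon = 0.5.
print(f"Private click count: {laplace_private_count(1234, 0.5):.1f}")
```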

Five MPI-SWS papers accepted at NIPS 2018

October 2018
The following five MPI-SWS papers have been accepted to NIPS 2018, the flagship conference in machine learning:

  • Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

  • Teaching Inverse Reinforcement Learners via Features and Demonstrations

  • Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners

  • Deep Reinforcement Learning of Marked Temporal Point Processes

  • Enhancing the Accuracy and Fairness of Human Decision Making

Research Spotlight: Learning to interact with learning agents

Many real-world systems involve repeatedly making decisions under uncertainty, for instance, choosing one of several products to recommend to a user in an online recommendation service, or dynamically allocating resources among available stock options in a financial market. Machine learning (ML) algorithms driving these systems typically operate under the assumption that they are interacting with static components: users' preferences are fixed, trading tools providing stock recommendations are static, and data distributions are stationary. This assumption is often violated in modern systems, as these algorithms increasingly interact with and seek information from learning agents, including people, robots, and adaptive adversaries. Consequently, many well-studied ML frameworks and algorithmic techniques fail to provide desirable theoretical guarantees; for instance, algorithms might converge to a sub-optimal solution or fail arbitrarily badly in these settings.

Researchers in the Machine Teaching Group at MPI-SWS are designing novel ML algorithms that must interact with agents that adapt or learn over time, especially in situations where the algorithm's decisions directly affect the state dynamics of these agents. In recent work [1], they have studied this problem in the context of two fundamental machine learning frameworks: (i) online learning using experts' advice and (ii) active learning using labeling oracles. In particular, they consider a setting where the experts/oracles are themselves learning agents. For instance, active learning algorithms typically query labels from an oracle, e.g., a (possibly noisy) domain expert; in emerging crowd-powered systems, however, these experts are being replaced by non-expert participants who may themselves be learning over time (e.g., volunteers in citizen science projects). The researchers show that when the experts/oracles are themselves learning agents, well-studied algorithms (such as the EXP3 algorithm) fail to converge to the optimal solution and can perform arbitrarily badly in this new problem setting. Furthermore, they prove an impossibility result: without sharing any information across experts, convergence guarantees cannot be achieved. This calls for novel algorithms with practical coordination mechanisms between the central algorithm and the learning agents.
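
For context, here is a minimal sketch of the standard EXP3 algorithm, the baseline whose guarantees break down in this setting. The Bernoulli reward function at the end is an illustrative stand-in with fixed (non-learning) arms, not the adaptive-experts setting of [1].

```python
import numpy as np

def exp3(n_arms: int, horizon: int, reward_fn, gamma: float = 0.1, seed: int = 0):
    """Standard EXP3 for adversarial bandits: exponential weights over arms
    with importance-weighted reward estimates. Its regret guarantee assumes
    the reward process does not adapt to the algorithm the way a learning
    expert would, which is exactly the assumption broken in [1]."""
    rng = np.random.default_rng(seed)
    weights = np.ones(n_arms)
    total_reward = 0.0
    for _ in range(horizon):
        probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=probs)
        reward = reward_fn(arm)                     # reward must lie in [0, 1]
        total_reward += reward
        # Unbiased estimator: observed reward scaled by inverse probability.
        weights[arm] *= np.exp(gamma * reward / (probs[arm] * n_arms))
        weights /= weights.max()                    # rescale; probs unchanged
    return total_reward

# Illustrative use: three arms with fixed Bernoulli reward rates.
rng = np.random.default_rng(1)
rates = [0.2, 0.5, 0.8]
print(exp3(3, 5000, lambda arm: float(rng.random() < rates[arm])))
```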

Currently, researchers in the Machine Teaching Group are studying these challenges in the context of designing next-generation human-AI collaborative systems. As a concrete application, consider a car-driving scenario where the goal is to develop an assistive AI agent that drives the car in auto-pilot mode but hands control back to the human driver in safety-critical situations. They study this setting by casting it as a multi-agent reinforcement learning problem. When the human agent has a stationary policy (i.e., the actions taken by the human driver in different states/scenarios are fixed), it is straightforward to learn an optimal policy for the AI agent that maximizes the overall performance of this collaborative system. However, in real-life settings where a human driver adapts their behavior in response to the presence of an auto-pilot mode, they show that the problem of learning an optimal policy for the AI agent becomes computationally intractable. This work is one of the recent additions to an expanding set of results and algorithmic techniques developed by MPI-SWS researchers in the nascent area of Machine Teaching [2, 3].
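
A toy illustration of why the stationary case is easy: once the human's policy is fixed, it can be folded into the environment's transition dynamics, and the AI agent faces an ordinary single-agent MDP that value iteration solves. The tiny MDP below (states, transitions, rewards) is invented for illustration and is not the model from the paper.

```python
import numpy as np

# Hypothetical 3-state MDP after folding a *fixed* human policy into the
# dynamics: T[s, a] is already averaged over the human's stationary actions.
n_states, n_actions, gamma = 3, 2, 0.9
T = np.array([  # T[s, a] is a probability distribution over next states
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.6, 0.4], [0.0, 0.1, 0.9]],
    [[0.5, 0.0, 0.5], [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, 0.1], [0.2, 0.0], [0.0, 1.0]])  # R[s, a]

# Standard value iteration: tractable because the human no longer adapts.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * (T @ V)        # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)
print("Optimal AI policy per state:", policy, "Values:", np.round(V, 3))
```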

References

[1] Adish Singla, Hamed Hassani, and Andreas Krause. Learning to Interact with Learning Agents. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI'18), 2018.

[2] Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N. Rafferty. An Overview of Machine Teaching. arXiv:1801.05927, 2018.

[3] Maya Cakmak, Anna N. Rafferty, Adish Singla, Xiaojin Zhu, and Sandra Zilles. Workshop on Teaching Machines, Robots, and Humans. NIPS 2017.

Krishna Gummadi awarded ERC Advanced Grant

September 2018
Krishna Gummadi, head of the MPI-SWS Networked Systems group, has been awarded an ERC Advanced Grant. Over the next five years, his project "Foundations of Fair Social Computing" will receive 2.49 million euros, which will allow his group to develop the foundations of fair social computing.

In the most recent round for Advanced Grants, a total of 2,167 research proposals were submitted to the ERC, of which only 12% were selected for funding. The sole selection criterion is scientific excellence.

Summary of the Fair Social Computing project proposal

Social computing represents a societal-scale symbiosis of humans and computational systems, where humans interact via and with computers, actively providing inputs to influence, and in turn being influenced by, the outputs of the computations. Social computations impact all aspects of our social lives, from what news we see and whom we meet to what goods and services are offered at what price and how our creditworthiness and welfare benefits are assessed. Given the pervasiveness and impact of social computations, it is imperative that they be fair, i.e., perceived as just by the participants subject to the computation. The case for fair computations in democratic societies is self-evident: when computations are deemed unjust, their outcomes will be rejected and they will eventually lose their participants.

Recently, however, several concerns have been raised about the unfairness of social computations pervading our lives, including

  1. the existence of implicit biases in online search and recommendations,

  2. the potential for discrimination in machine learning based predictive analytics, and

  3. a lack of transparency in algorithmic decision making, with systems providing little to no information about which sensitive user data they use or how they use them.


Given these concerns, we need reliable ways to assess and ensure the fairness of social computations. However, it is currently unclear how to determine whether a social computation is fair, how to compare the fairness of two alternative computations, how to adjust a computational method to make it fairer, or how to construct a fair method by design. This project will tackle these challenges in turn. We propose a set of comprehensive fairness principles and will show how to apply them to social computations. In particular, we will operationalize fairness so that it can be measured from empirical observations. We will show how to characterize which fairness criteria are satisfied by a deployed computational system. Finally, we will show how to synthesize non-discriminatory computations, i.e., how to learn an algorithm from training data that satisfies a given fairness principle.
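
As a concrete picture of what "operationalizing fairness so that it can be measured from empirical observations" could look like, the sketch below computes one standard metric, the demographic parity gap, from observed decisions. The proposal does not commit to this particular criterion; it is used here only as a familiar, measurable example, and the data are made up.

```python
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Difference in positive-decision rates across sensitive groups.

    `decisions` is a 0/1 array of algorithmic outcomes; `groups` labels each
    person's sensitive attribute. A gap of 0 means equal acceptance rates.
    """
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Illustrative data: six loan decisions for two groups (not real observations).
decisions = np.array([1, 0, 1, 1, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B"])
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```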

Four MPI-SWS papers accepted at AAAI 2018

February 2018
Four papers from MPI-SWS have been accepted to AAAI 2018:
  • Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
  • Learning to Interact with Learning Agents
  • Information Gathering with Peers: Submodular Optimization with Peer-Prediction Constraints
  • Learning User Preferences to Incentivize Exploration in the Sharing Economy

Three MPI-SWS papers accepted at WWW 2018

February 2018
Three papers from MPI-SWS have been accepted to the 2018 Web Conference:

  • Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

  • On the Causal Effect of Badges

  • Fake News Detection in Social Networks via Crowd Signals