This project examines algorithmic fairness and transparency in relation to data gathered through social machines. A central aim is to understand how people make decisions about their personal privacy, and how companies and institutions can make data-driven decisions without compromising individual users' privacy. One empirical insight from our work is that real-world privacy decisions are often strongly influenced by users' perceptions of, and prior experience with, the companies that collect their personal information.
As a result, users sometimes choose options that protect their privacy less well than would otherwise be expected. The project has made important contributions to the privacy literature, including a philosophical analysis of algorithmic transparency in terms of the political notion of public reason.