Annette Zimmermann

Postdoctoral Research Associate in Values and Public Policy


  • Law and Justice
  • Political Philosophy
  • Science and Technology


I am a political philosopher working on the ethics of algorithmic decision-making, machine learning, and artificial intelligence. I have additional research interests in moral philosophy (the ethics of risk and uncertainty) and legal philosophy (the philosophy of punishment), as well as the philosophy of science (models, explanation, abstraction).

In the context of my current research project "The Algorithmic Is Political", I am focusing on the ways in which disproportionate distributions of risk and uncertainty associated with the use of emerging technologies—such as algorithmic bias and opacity—impact democratic values like equality and justice.

At Princeton, I am based at the University Center for Human Values and at the Center for Information Technology Policy. I hold a DPhil (PhD) and an MPhil from the University of Oxford (Nuffield College and St Cross College), as well as a BA from Freie Universität Berlin. I have held visiting positions at the Australian National University, Yale University, and Sciences Po Paris.

Please note: In an effort to reduce my carbon footprint in 2020, I have pledged to prioritize Skype/Zoom over air travel, whenever possible, when attending workshops and conferences. If you would like to invite me to give a talk at your event, I would be most grateful if you could specify whether video conferencing is an option, and whether you would consider encouraging other speakers to explore this option as well. I am happy to commit to attending, and actively participating in, all other workshop sessions in addition to my own via Skype/Zoom.


"Criminal Disenfranchisement and the Concept of Political Wrongdoing".
Philosophy & Public Affairs 47, no. 4 (2019): 378-411.

"Technology Can't Fix Algorithmic Injustice".
With Elena Di Rosa and Sonny "Hochan" Kim. Boston Review. January 9, 2020.

This article is currently featured on a number of philosophy syllabi; if you are using it in your teaching, please let my co-authors and me know.

"Time, Personhood, and Rights".
Invited. Jurisprudence.

"Review: Candice Delmas, A Duty to Resist: When Disobedience Should be Uncivil".
Invited. Journal of Moral Philosophy.

"The Historical Rawls".
Co-editor with Teresa Bejan and Sophie Smith. Special issue of Modern Intellectual History (forthcoming).

"Economic Participation Rights and the All-Affected Principle".
Global Justice: Theory Practice Rhetoric 10, no. 2 (2017): 1-21.

Book project

"Just Risk".
Imposing risks of harm on people may seem morally innocuous in some cases: after all, one might argue, risks are just risks, not real, tangible harms. According to this view, whether or not risks justify a moral and political complaint would depend on whether they actually eventuate in harm. So should we wait and see? In other words, can we evaluate the moral and political significance of risk distributions only ex post, after they have materialized? I argue against this view: mere risks can themselves constitute a problem of justice in democratic societies. Evaluating risk is therefore a morally urgent task that warrants taking an ex ante perspective. On this view, exposure to specific morally weighty risks can justify the conferral of far-reaching rights and obligations, irrespective of whether such risks actually materialize in harm down the line. This account has complex implications for how we ought to think about distributing the right to participate in democratic decisions among different persons and groups, about how we ought to punish wrongdoers, and about whether it is permissible for the democratic state and its members to make probabilistic, generalized predictive judgments about citizens.

Working papers

A paper on justice and democratic rights.
Revise and resubmit (R&R) at the Journal of Ethics and Social Philosophy.

A paper on risk.
Under review. Abstract: available upon request.

A paper on algorithmic injustice.
Under review. Abstract: available upon request.

Another paper on algorithmic injustice.
Abstract: available upon request.

"Two Pictures of Algorithmic Fairness".
Abstract: available upon request.

"Pure Risks and Democratic Wrongs"
Suppose that I impose a risk of harm on you, and the risk of harm never eventuates. Is it possible that I have wronged you, even though I have not harmed you? A range of recent philosophical accounts attempt to explain why cases of this kind—'pure risk’ cases—can involve moral wrongdoing: pure risk impositions may, for instance, wrongfully infringe on a person’s autonomy. While such accounts are not implausible, they have limited explanatory power because they are not comprehensive enough: none of them are able to accurately capture a heterogeneous cluster of morally wrongful pure risk cases involving what I call ‘democratic wrongs’. As I show in this paper, democratic states wrong agents by imposing pure risks on them just if states are at the same time imposing pure risks on other agents in a way that leads to an all-things-considered unfair risk distribution. A risk distribution of this kind violates more general, and not necessarily risk-based, principles of distributive justice which shape the duties of democratic states towards their citizens.

"What, If Anything, Is Wrong With Automation?"
Is automation objectionable only if, and because, it may lead to inaccurate, unfair, and insufficiently explainable outcomes? Or are there any types of decisions that we simply ought not to automate, irrespective of the accuracy, fairness, and explainability of their outcomes? In this paper, I examine a range of possible cases in which automation may be intrinsically objectionable. I conclude that automation is morally wrong just if automation itself constitutes a significant communicative wrong against some or all of those who are subject to automated decisions. I explore three dimensions of these types of communicative wrongs: (i) cases in which automation expresses morally objectionable negligence; (ii) cases in which automation expresses disrespect; and (iii) cases in which automation expresses an unwillingness to be accountable for perpetrating, or for being complicit in, other types of wrongful acts.

Teaching Materials

I am the lead instructor for this year's Ethics of Artificial Intelligence PhD seminar (Princeton Graduate School Professional Development Cohort). You can read my reflections on teaching this seminar here, and you can read more about how the course is designed and what its aims are here and here.