Peer-Reviewed Abstracts

  • AI & Society, 2025 (link)

    AI algorithms require human input to achieve technological aims. This fact is often overlooked in discussions of autonomous systems and AI safety, to the detriment of both philosophical discourse and practical progress. One potential remedy is to ground our theorizing more fundamentally in the idea that AI technologies are sociotechnological systems with human and artifactual components. In this article, I pursue this strategy, aiming to shift the focus in AI ethics from artifacts and their intrinsic properties—what I refer to as the robotic conception of AI—to the relationships among elements embedded in AI-involving sociotechnological systems. First, I defend the claim that the sociotechnological-system perspective provides an accurate description of some of our most advanced AI. Second, I argue that the dominance of the robotic conception has steered AI safety research down unproductive paths, while the sociotechnological perspective has the capacity to set us right. Specifically, the robotic conception encourages the development of artificial moral agents—whose creation we should avoid if possible—and distracts researchers with hypothetical trolley cases. In contrast, the sociotechnological approach coheres with actual progress being made on AI safety (e.g., networking, shared user-artifact control, and value alignment) and makes vivid solutions to the safety problem that do not require the creation of humanlike moral decision-makers.

  • AI & Ethics, 2025 (link)

    Autonomous AI agents are increasingly required to operate in contexts where human welfare is at stake, raising the imperative for them to act in ways that are morally optimal—or at least morally permissible. The value alignment research program seeks to create “beneficial AI” by aligning AI behavior with human values (Russell, Human Compatible: Artificial Intelligence and the Problem of Control, Penguin, 2019). In this article, we propose a method for specifying permissible outcomes for AI agents that targets ideal values via moral expertise as embodied in the collective judgments of philosophical ethicists. We defend the notion that ethicists are moral experts against several objections found in the recent literature and argue that their aggregated judgments offer the epistemically best available proxy for moral truth. We recommend a systematic study of ethicists’ judgments—using tools from social psychology and social choice theory—to guide AI agents' behavior in morally complex situations.

  • Journal of Military Ethics, 2022 (link)

    Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk to soldiers and would-be soldiers: whereas drones protect against lethal risk alone, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.