The 2025 Bovay Workshop

The Bovay Workshop Series at Texas A&M University aims to develop a community of scholars and practitioners interested in applied ethics, especially issues in engineering ethics. With funds provided by Martin Peterson and the Bovay Foundation, I organized the 2025 Bovay Engineering and Applied Ethics Workshop on AI Value Alignment.

The 2025 workshop brought together researchers from philosophy, robotics, and computer science—from both industry and academia—to investigate the role of ethics in the AI safety program known as “value alignment.” Value alignment seeks to create beneficial AI by aligning AI behavior with the values of humanity (or some subset thereof). The behavior of a system aligns with our values just in case it both reflects those values and is endorsable by them.

Notable contributions included Arianna Manzini (Google DeepMind) on the ethics of advanced AI assistants, Pamela Robinson (UBC Okanagan) on uncertainty-sensitive oughts, and Michael Anderson (University of Hartford) on using AI as an ethical collaborator.