Hardly anyone can open a newspaper today without reading about artificial intelligence (AI). In fact, hardly anyone can open a newspaper without reading several stories about magical new things AI can do. However, there is only one aspect of AI that is truly magical, and that is its autonomy: we give machines the ability to act reasonably independently in our world. And these actions can affect us humans. Not surprisingly, then, autonomy brings with it a whole range of ethical challenges.
Never before have machines been able to make decisions independently of their human masters. Until now, machines only did what we told them to do; in a sense, they have always been our servants. But soon we will have machines that make many of their own decisions. Indeed, anyone who owns a Tesla car already owns such a machine. Autonomy raises some very difficult new questions. Who is accountable for the actions of an autonomous AI? How should autonomous AI be constrained? What happens when an autonomous AI intentionally or accidentally harms or kills a human?