Last fall I saw a flyer for a class on computational ethics. Upon inspection, it appears the class was poorly named; unlike computational geometry, computational finance, computational social science, or almost any other occurrence of "computational $subject", this class isn't about the use of computational tools to study ethics. Instead it's the study of ethical issues surrounding computation: privacy, censorship, cybercrime, AI, and the rest. A better name would be "ethics for programmers", although that sounds rather tedious. Perhaps "ethics of Facebook" would be enroll-baity enough? While complaining about a misleading course name wouldn't be out of character for me, that's not what I want to do here. Instead, I want to lament a missed opportunity: what if computational ethics actually was about computation?

The problem with philosophers is that they don't collect data. How do we generally go about acquiring knowledge about the world? Well, to obscenely simplify, we started off by just making guesses: the sun is a chariot driven across the sky by gods. Then we got some data and made better theories: the sun rotates around the earth. Then we got even better data and fit our theories to that data: the earth revolves around the sun because that model fits better. Eventually you get to modern science, where it's all about collecting massive amounts of data or conducting ever more intricate experiments. The key thing to note is that we progressively make use of more and better data.

Well, why don't we apply this to ethics? If this data-driven process is so effective in other domains of knowledge, why should ethical knowledge be any different?

So let's build an ethical classifier: we throw in a bunch of ethical scenarios with labels (ethical/unethical) and fit a model. When we encounter a new scenario, we can use our model to predict whether or not it's ethical. Feature extraction might be a problem, but philosophers have already proposed a wide array of potentially relevant ethical considerations. And since a successful model would result in technological unemployment for ethicists, we could cheaply hire whole ethics departments to do the feature extraction.
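To make this a bit more concrete, here's a minimal sketch of what the fitting step might look like, assuming scikit-learn and a handful of made-up features and scenarios; the feature names, numbers, and labels are all invented for illustration, stand-ins for whatever our hypothetical ethics department would actually extract.

```python
# A toy sketch of the ethical classifier, assuming scikit-learn.
# All features, values, and labels below are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a scenario described by hypothetical features:
# [harm_caused, consent_given, fairness, benefit_to_others]
X = np.array([
    [0.9, 0.0, 0.1, 0.0],   # e.g. theft
    [0.0, 1.0, 0.9, 0.8],   # e.g. donating to charity
    [0.7, 0.0, 0.2, 0.3],   # e.g. lying for personal gain
    [0.1, 1.0, 0.8, 0.9],   # e.g. keeping a promise
])
y = np.array([0, 1, 0, 1])  # 0 = unethical, 1 = ethical

model = LogisticRegression().fit(X, y)

# Predict a label for a previously unseen scenario.
new_scenario = np.array([[0.4, 0.5, 0.5, 0.5]])
print(model.predict(new_scenario))        # predicted label
print(model.predict_proba(new_scenario))  # the model's confidence
```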

There are other issues though. Firstly, we need to address the concern that a model might not exist: why should we expect there to be a correct theory of ethics in the first place? This is a deep philosophical problem which I will completely ignore (like most deep philosophical problems). Instead we'll just assume moral realism, the view that moral statements are objectively true or false independent of our own beliefs about them. Technically we're also fine with culturally dependent ethics; we'd just need to fit our model for each culture separately. Taking the idea of culture to an extreme, we could even fit on a single person's ethical intuitions.

Secondly, it seems like too hard of a problem. The true model of ethics might be unimaginably complex; after all, it has to account for every possible ethical situation. I admit that could be a tricky problem, but it seems to be a common view that ethical actions inhabit a low-dimensional manifold in a potentially infinite-dimensional space. The most famous ethical systems of all time are low-dimensional. Utilitarianism relies on the single variable of utility. Virtue ethics relies on the impact on a few virtues. Religions have a bounded number of rules. Research in psychology suggests that there are about a half-dozen dimensions in moral reasoning. We're in fairly good company if we assume that a simple model exists. Even if we're wrong, we're still in decent shape, since in the high-dimensional case it seems like data is the only approach that has a chance (the 10^9 commandments seem like a non-starter).
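If the low-dimensional assumption holds, the feature space might look something like those half-dozen psychological dimensions. A purely hypothetical encoding, loosely inspired by the moral foundations literature (the field names and values are mine, not anything from the research):

```python
# Hypothetical low-dimensional encoding of a scenario; fields and values
# are illustrative stand-ins, loosely inspired by moral foundations work.
from dataclasses import dataclass, astuple

@dataclass
class MoralFeatures:
    care: float       # harm vs. care
    fairness: float   # cheating vs. fairness
    loyalty: float    # betrayal vs. loyalty
    authority: float  # subversion vs. authority
    sanctity: float   # degradation vs. sanctity
    liberty: float    # oppression vs. liberty

# A scenario becomes a point in a six-dimensional space.
scenario = MoralFeatures(care=0.2, fairness=0.1, loyalty=0.8,
                         authority=0.5, sanctity=0.9, liberty=0.3)
x = astuple(scenario)  # feature vector for the classifier
```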

Finally, some might object that it's impossible to get the labels, given that disagreements about ethics are relatively common. This is not as much of an issue as it first seems; we can use the settled cases to build the theory and then try to generalize to the more controversial ones. A stronger objection in this vein is that even unanimous agreement doesn't guarantee that our ethical judgments are correct. Call this the slavery problem: go back far enough and you'll find that slavery was a commonly accepted practice, so if we fit our ethical classifier to that training data we'd end up classifying slavery as ethical. I think this might actually be a feature rather than a flaw. Consider how you would go about arguing that slavery is wrong: you'd point to similar cases or principles which everyone agrees upon and generalize that reasoning to slavery. A good ethical classifier could suggest instances where this approach might be particularly effective: you just look for the instances closest to your decision boundary and examine them more carefully. So even if our classifier predicted that slavery is morally permissible, it would presumably be close to the boundary, which could trigger further reflection. That prompting for further reflection is part of the beauty of data: perhaps the reason the model is wrong is that there's a morally relevant feature which hadn't been added to it. It might take years for ethicists to realize that such a feature has consequences for other situations; our model could incorporate it and improve its ethical judgments in the time it takes to retrain.
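As a sketch of what "closest to the decision boundary" might mean in practice, assuming a linear model like the toy one above, one could rank the cases by their distance from the boundary and hand the most borderline ones back to humans for closer scrutiny. Again, every number here is invented for illustration.

```python
# Sketch: flag the scenarios the model is least sure about, assuming a
# linear classifier. Small |decision_function| means close to the boundary
# and therefore a good candidate for further human reflection.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.9, 0.0, 0.1, 0.0],
              [0.0, 1.0, 0.9, 0.8],
              [0.7, 0.0, 0.2, 0.3],
              [0.1, 1.0, 0.8, 0.9]])
y = np.array([0, 1, 0, 1])  # 0 = unethical, 1 = ethical
model = LogisticRegression().fit(X, y)

distances = np.abs(model.decision_function(X))  # distance from the boundary
for i in np.argsort(distances)[:2]:             # two most borderline cases
    print(f"scenario {i}: label={y[i]}, distance from boundary={distances[i]:.2f}")
```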

It seems clear that ethics is in need of an overhaul and data is the solution. An ethical classifier is intellectually interesting, but even more so commercially interesting: consider an app which tells you whether your intended action would be ethical. In case it wasn't obvious yet, this is my April Fool's Day post. Although, after writing this up, if someone wants to do the legwork of getting the data… If you enjoyed this you might also like Scott Aaronson's post on eigenmorality. I'll leave you with the obligatory SMBC.