Alternative perspectives: relational and virtue ethics in tech

If you are involved in collecting and analysing data, or developing or applying algorithms and artificial intelligence (AI) applications, you probably want to do so responsibly.

You can turn to documents that list and discuss ethical principles, such as preventing harms, human dignity, human autonomy, fairness, equality, transparency and explicability. These are great principles, but they can remain rather abstract. Perhaps you are looking for practical methods to integrate ethics into your projects.

In a previous instalment, I presented ethics as a steering wheel, and ethics as a process. You can use ethics as a steering wheel for your project: to stay in the right lane, take the correct turns, and avoid collisions. And you can organise a process of ethical reflection and deliberation: you put possible issues on the table, organise conversations about them, and make decisions based on those conversations.

I also discussed two ethical perspectives as frameworks. With consequentialism, you can assess the potential pluses and minuses of the results (consequences) of your project. You can work to maximise the pluses and minimise the minuses, or choose options with more or larger pluses over options with fewer or smaller minuses.

With duty ethics, you can focus on the various duties and rights at play in your project. For example, a city may have a duty to promote safety and therefore install cameras in public places, while citizens have rights to privacy. Your challenge is then to combine such duties and rights.

European Enlightenment

These two perspectives were developed during the European Enlightenment: consequentialism by Jeremy Bentham (utilitarianism) and duty ethics by Immanuel Kant (Kantianism).

Thus, key assumptions and ambitions of the Enlightenment were embedded in these perspectives. They look at people as separate individuals, independent of others, and their outlook on the world and on people is objective and calculating.

This has become our default, “normal” outlook. But it is only one of many possible ways of looking at the world and at other people.

Below, I will discuss two other perspectives: relational ethics and virtue ethics. The emergence of relational ethics (as the ethics of care, in the 1980s) and the revival of virtue ethics (since the 1970s, as in professional ethics) can be understood as reactions, or additions, to consequentialism and duty ethics.

Moreover, I’d like to propose that relational ethics and virtue ethics are very useful indeed for the development and application of algorithms and AI systems.

Relational ethics can help to understand how technologies affect interactions between people; how people treat each other (differently) through technology. Virtue ethics can help to understand how technologies can help – or hinder – people to cultivate specific virtues, such as justice, courage, self-control, or honesty.

Relational ethics

By way of example, let us use a relational ethics perspective to look at augmented reality (AR) glasses.

You can think back to Google Glass, introduced in 2013 and out of production since March 2023, or think of the recently unveiled Apple Vision Pro, or a future, more lightweight version of it. Such glasses offer the wearer a view of the real world combined with projections of virtual worlds.

Now, suppose that we are outside, on the street, and I am wearing such glasses and looking in your direction. You will wonder whether I am filming you, and you will probably not like that. Most people would disapprove of me wearing such glasses, certainly in the vicinity of a children’s playground. Or suppose we are talking to each other. You will want to know whether I am paying attention to you or looking at something else, a problem we already have with smartphones.

Wearing AR glasses can make me look at people as objects, rather than as people: “Nice looking; I’ll take a picture” or “Boring person; I’d rather watch a movie”. Dystopian future? Far-fetched? Possibly. But we did have the Glasshole experience, 10 years ago.

A relational ethics perspective typically includes an analysis of power: how is power distributed, and how does it shift through the use of technology? The photos or films that you make with your AR glasses probably go into the cloud of Google, Meta, Apple or Amazon. And because you clicked “OK”, that company can use your photos and films for many purposes, such as training its AI systems.

Subsequently, they can use these AI systems to personalise ads and sponsored content, and project those into your eyes. These companies exercise power over users. Of course, they already do that via smartphones. But AR glasses will probably be even more intrusive, especially if you wear them all day; for that, they would first need to become less bulky.

We can also look at possible positive effects. Through AR, for example, we could receive support in overcoming fears, learn about people in other cultures, or collaborate in professional contexts. AR will probably bring both desirable and undesirable effects. A relational ethics perspective can help to develop and apply technologies in such ways that people treat each other humanely, not as objects. Moreover, it can help us take a critical look at business models and the distribution of power.

Virtue ethics

Lastly, virtue ethics. From a western perspective, this tradition starts with Aristotle in Athens. Other traditions, such as Buddhism and Confucianism, also have forms of virtue ethics.

First, we need to get a potential misunderstanding out of the way. Some people associate virtue ethics with mediocrity and with individual behaviour. Both associations are incorrect. Virtue ethics is concerned with excellence: with finding an excellent “mean” in each specific situation.

If you see somebody beating up another person, and you are physically strong, it would be courageous to intervene; it would be cowardly to stay out of it. If, however, you are not physically strong, it would be courageous to keep your distance and call the police; it would be rash to intervene yourself.

Courage, then, is the appropriate “mean” between cowardice and rashness, and depends on the person and the situation.

Nor is virtue ethics merely about individual behaviour; it is concerned with organising a society in which people can live well together.

Shannon Vallor has given virtue ethics a wonderful update in her book Technology and the Virtues. She proposes turning to virtue ethics if we want to discuss and shape “emerging technologies”, where pluses and minuses, and duties and rights, are not yet clear. Virtue ethics then offers a framework to explore how such technologies can help people cultivate relevant “technomoral” virtues.

Let us look at a social media app through the perspective of virtue ethics. Usually, such an app nudges people to use it often and for long periods: with notifications, colours and beeps, and automatic previews of related content. This undermines people’s self-control; it prevents them from cultivating that very virtue. Although your plan is to just check your email, you end up spending 30 minutes or more on Facebook or YouTube.

Many social media apps also corrode honesty. They are designed to maximise so-called engagement: they present half-truths and fake news, and promote rage and polarisation. Suppose you work on such an app. Can you do something differently? Can you develop an app that helps people cultivate self-control and honesty? Maybe – if you also change the underlying business model.

For example, you can develop an app that people pay for and that asks: What do you want to achieve and how many minutes do you want to spend? After the set number of minutes, you get a notification: Done? Maybe do something else now?

And for honesty: Are you sure you want to share this? Are you sure it is truthful? Or a reminder like this: Your message contains strong language. Maybe take a deep breath and exhale slowly. Now, how do you want to proceed?
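
To make this concrete, here is a minimal sketch of how such nudges could work. It is purely illustrative: the function names, the word list and the wording of the prompts are my own assumptions, not features of any existing app, and a real app would use a background timer and interface dialogs rather than blocking calls.

import re
import time

# Illustrative only: names, word list and prompts are hypothetical.
STRONG_LANGUAGE = re.compile(r"\b(hate|idiot|stupid)\b", re.IGNORECASE)

def start_session(goal: str, minutes: int) -> None:
    # Ask for an intention up front, then nudge when the set time is up.
    print(f"Goal: {goal}. You planned {minutes} minute(s).")
    time.sleep(minutes * 60)  # stand-in for a non-blocking timer
    print("Done? Maybe do something else now?")

def pre_share_check(message: str) -> bool:
    # Prompt for reflection before sharing; share only if the user confirms.
    if STRONG_LANGUAGE.search(message):
        print("Your message contains strong language. "
              "Maybe take a deep breath and exhale slowly.")
    answer = input("Are you sure you want to share this, and that it is truthful? (y/n) ")
    return answer.strip().lower() == "y"

The point is not the mechanics, which are trivial, but the design intent: the app asks for the user’s own goal first and then serves that goal, rather than the engagement metrics of the platform.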

Get started with virtues

Virtue ethics is a very suitable perspective for professionals. What virtues do you, as a professional, need in your work, and in your projects?

Justice, if you are working on an algorithm and want to prevent the propagation of bias and discrimination. Courage, if you want to take the floor and express your concerns about unwanted side effects of the project.

The beauty of virtue ethics is that you can start right away and get better with practice. You can choose a virtue to develop: justice, courage, self-control, curiosity, creativity, diversity. Then select opportunities to act differently from how you normally would: you voice your concern about fairness, you ask an open question, you tolerate a feeling of uncertainty, you invite that other colleague to the meeting. In addition, you can look at people whom you admire for their virtues and learn from them, possibly modelling their behaviours.


Marc Steen’s book, Ethics for People Who Work in Tech, is out now via Routledge.