Ethics as a process of reflection and deliberation

Let me start with two questions. Do you think ethics is important in the development and application of algorithms or artificial intelligence (AI) systems? And do you find it easy to integrate ethics into your projects when you develop or apply algorithms or AI systems?

I have asked these two questions on multiple occasions. Almost all people raise their hands after the first question. Almost all hands go down after the second question. We find ethics important. But we have a hard time integrating ethics into our projects.

There are many reasons why you would want to integrate ethics into your projects. The most critical one: technology is never neutral.

Algorithms are based on data, and in the process of collecting and analysing those data and turning them into a model, all kinds of choices are made, usually implicitly: which data are collected (and which are excluded), which labels are used (and based on which assumptions). All of these choices can introduce bias.

If the training data consist mainly of light-skinned faces, the algorithm will have trouble with dark-skinned faces. A notorious example is Google Photos placing the tag “gorillas” under a photo of two black teenagers; a problem that (to my knowledge) has still not been fixed properly.
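One practical countermeasure is to make these implicit choices visible before training. Below is a minimal sketch in Python (the dataset and field names are invented for illustration) of auditing how groups are represented in a training set:

```python
from collections import Counter

def representation_report(samples, group_key):
    """Print how often each group appears in the training data."""
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} samples ({n / total:.1%})")

# Hypothetical toy dataset; a real audit would run on the full training set.
training_data = [
    {"image": "img_001.jpg", "skin_tone": "light"},
    {"image": "img_002.jpg", "skin_tone": "light"},
    {"image": "img_003.jpg", "skin_tone": "light"},
    {"image": "img_004.jpg", "skin_tone": "dark"},
]
representation_report(training_data, "skin_tone")
```

A skewed report like this one (75% light, 25% dark) is exactly the kind of issue you would want to put on the table before the model ships.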

Responsibility

Since technology is not neutral, we – IT professionals, developers, decision-makers, researchers, designers or consultants – have a responsibility. We contribute to putting something into the world that will have effects in the world. Ethics can help to deal with that responsibility; to take responsibility.

Sometimes people do not like the word “ethics”. They envision a judge wagging or pointing their finger. Or they view ethics as a barrier against innovation.

I understand ethics radically differently. Not as a judge or a barrier. Rather, as a steering wheel. If your IT project is a vehicle, then ethics is the steering wheel. Ethics can help you to keep your project in the right lane, avoid going off the road, and take the correct turns, so that you bring your project to the right destination, without collisions.

A process of reflection and deliberation

You can integrate ethics into your projects by organising a process of ethical reflection and deliberation, in three steps:

  1. Put the issues or risks on the table – things that you are concerned about, things that might go wrong.
  2. Organise conversations to look at those issues or risks from different angles – you can do this in your project team, but also with people from outside your organisation.
  3. Make decisions, preferably in an iterative manner – you take measures, try them out, evaluate outcomes, and adjust accordingly.

A key benefit of such a process is that you can be accountable; you have looked at issues, discussed them with various people, and have taken measures. Practically, you can organise such a process in a relatively lightweight manner, e.g., a two-hour workshop with your project team. Or you can integrate ethical reflection and deliberation in your project, e.g., as a recurring agenda item in your monthly project meetings, and involve various outside experts on a regular basis.

If you want to work with such a process, you will also need some framework for ethical reflection and deliberation. Below, we will discuss two ethical perspectives that you can use to look at your project: consequentialism and duty ethics.

Consequentialism

Consequentialism is about identifying and assessing pluses and minuses. Imagine that you put this system into the world – what advantages and disadvantages will it bring about, in society, in people’s daily lives? What would be its added value? Or its overall costs? You can compare different design options or alternative solutions with each other. You then choose the option with more or bigger advantages, and with fewer or smaller disadvantages.

This perspective, of assessing pluses and minuses, is often appealing. However, you may encounter complications.

Let us look at self-driving cars. What is, overall, the added value of self-driving cars? What problem do they solve? Are they safer? Can drivers rest while driving? Such questions can help you explore entirely different options, like public transport, which is safer, and where people can rest during transit. As a thought experiment, this puts your project, with its assumptions and starting points, up for discussion.
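To make that comparison concrete, here is a minimal sketch in Python. The options, effects and scores are invented for illustration; a real assessment would of course be qualitative and deliberative, not a sum of numbers:

```python
def score(option):
    """Net score: the sum of pluses (positive values) and minuses (negative values)."""
    return sum(option["effects"].values())

# Hypothetical effects, scored from -5 (large disadvantage) to +5 (large advantage).
options = [
    {"name": "self-driving cars",
     "effects": {"occupants can rest": 3, "road safety": 1, "purchase cost": -4}},
    {"name": "improved public transport",
     "effects": {"passengers can rest": 3, "road safety": 4, "infrastructure cost": -2}},
]

for option in sorted(options, key=score, reverse=True):
    print(f"{option['name']}: net score {score(option)}")
```

Even a toy table like this forces you to name the effects you are counting, which leads directly to the next complication.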

Another question is: Where do you draw the boundaries of the system you analyse? What pluses and minuses do you include? And which do not count? You will probably count the benefits for the owner of the self-driving car. But do you count the costs and risks for cyclists and pedestrians? These are questions about system boundaries.


Now, self-driving cars are often put in the context of a so-called smart city, as parts of a larger network, connected to all sorts of infrastructure, like traffic signs.

That would enable, e.g., ambulances to get priority at intersections. As a variation, you can imagine a premium service for high-end self-driving cars that gives their owners access to exclusive rush-hour lanes.

Do you then also look at the disadvantages for other road users or residents? You can extend the boundaries of your analysis and look at the human and environmental costs that went into producing such cars.

Moreover, there are questions about the distribution of pluses and minuses. How will benefits and costs be distributed between different groups of people – car drivers, cyclists, pedestrians, children? And, if we look at the supply chain, we would need to take into account the costs that come with the extraction of rare minerals and the conditions of workers in other countries.

Duty ethics

Another perspective we can use is duty ethics. It is concerned with duties and rights.

Let us take another example: security cameras. Imagine a project with a municipality as a client. The municipality has a duty to promote public safety. To fulfil that duty, it hangs cameras in public spaces. This raises questions about citizens’ rights to privacy.

So, the duties of one party relate to the rights of another party. This municipality then needs to combine two duties: to provide a safe living environment and to respect citizens’ rights to privacy.

People often perceive a conflict here. As if you need to choose between safety and privacy. But you do not have to. You can work with technologies that combine safety and privacy, e.g., through data minimisation or privacy-enhancing technologies. Jaap-Henk Hoepman wrote a book about that: Privacy is hard and seven other myths.
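As a minimal sketch of what data minimisation could look like in such a project, assume a hypothetical camera whose on-device analysis produces person detections; the only record that leaves the camera is an aggregate count:

```python
from datetime import datetime, timezone

def minimised_record(detections):
    """Data minimisation: keep only an aggregate count per time slot.
    `detections` is a hypothetical list of person detections produced
    by the camera's on-device analysis; the frames themselves are
    never stored or transmitted."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="minutes"),
        "people_count": len(detections),
        # No images, faces or identities are retained.
    }

# The municipality learns how busy a square is, without knowing who was there.
print(minimised_record(["person"] * 12))
```

The design choice here is that the safety duty (knowing when and where it gets crowded) is fulfilled with far less data than raw footage would collect.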

Finally, a rather silly example, to inspire creative solutions. Imagine that you are in the 1970s and you want to go camping. You can choose between a spacious tent that is very heavy, or a lightweight tent that is very small. There was a conflict between volume and weight – until light, waterproof fabrics and strong, flexible poles were invented. Now you can combine a large volume with a low weight. In the same way, you can look for creative combinations of safety and privacy. Or of security and usability, e.g., in cyber security.


In practice

In practice, different ethical perspectives are intertwined, and you want to use them in parallel. You want to analyse pluses and minuses, and take duties and rights into account.

Let us look at one more example: an algorithm to detect fraud. Pros: the algorithm can flag possible cases of fraud and potentially make fraud detection more efficient and effective. Cons: the cost to build and maintain the algorithm. I include maintenance because you will need to evaluate such an algorithm periodically and adjust it if necessary, to prevent it from derailing.

Other drawbacks: false positive errors; in our case, these would be flags for cases that, upon further investigation, turn out not to be fraud. Such errors can cause enormous harm to people who are wrongly suspected of fraud, as thousands of Dutch parents experienced in the childcare benefits scandal. In addition, the organisation will need to make considerable efforts to repair these false positives.
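The periodic evaluation mentioned above is what surfaces the scale of this problem. A minimal sketch in Python, with invented numbers, of summarising the outcomes of reviewed flags:

```python
def flag_quality(reviewed_flags):
    """Summarise reviewed fraud flags. `reviewed_flags` is a hypothetical
    list of booleans: True if a flagged case was confirmed as fraud
    after further investigation."""
    confirmed = sum(reviewed_flags)
    false_positives = len(reviewed_flags) - confirmed
    print(f"flags reviewed:  {len(reviewed_flags)}")
    print(f"confirmed fraud: {confirmed}")
    print(f"false positives: {false_positives} "
          f"({false_positives / len(reviewed_flags):.0%} of all flags)")

# Invented example: 100 flags, of which only 40 were actual fraud.
flag_quality([True] * 40 + [False] * 60)
```

If 60% of flags turn out to be false positives, each one represents a person wrongly suspected – a cost that a narrow pros-and-cons analysis of the algorithm itself would miss.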

Moreover, human rights are at play in duty ethics. That was the case with SyRI, the Dutch System for Risk Indication that was banned by The Hague District Court in 2020. SyRI was a similar algorithm, used to detect fraud with social services and benefits.

On the one hand, the government has a duty to spend public money carefully and to combat fraud. On the other hand, citizens have a right to respect for private and family life (Article 8 of the European Convention on Human Rights). The judge weighed these against each other and ruled that the use of SyRI violated this human right.

There are more ethical perspectives than these two. In a subsequent instalment, we will discuss relational ethics and virtue ethics.
