Ethics by Design — Five Ethical Principles in AI

12.8.2024
ethical principles in AI

One of the most valuable skills in designing AI solutions is the ability to anticipate the consequences of AI and machine learning applications, including their possible effects on society. The negative consequences we can currently observe globally in the case of social media, for example, are precisely the result of failing to think about consequences. What is needed is a human-centric concept of AI: human-centered AI must be designed and developed in a way that is consistent with the values and ethical principles of the society or community it affects.

In practice, ethics by design — designing ethical solutions into systems that use AI — means "ethics from the very beginning". It involves introducing ethical assumptions into emerging AI systems at the design stage, which includes placing moral responsibility for the effects of machine learning systems on the designers of those systems.

Ethics by design is a key concept in the European approach to AI, in which ethical and legal solutions are meant to shape a secure and human-centred algorithmic community. Within this framework, many programs and projects devoted to the "ethics" of AI are being developed. The ethics of a system under construction should be one of its mandatory requirements, alongside reliability and integrity, in order to prevent damage that would otherwise only surface after an AI solution has been deployed.

L. Floridi proposes five ethical principles that underpin a "Good AI Society", together with a number of recommendations for evaluating, developing and supporting "ethical" AI. The five pillars of a "good AI society" are:

  1. Doing good (beneficence), understood as promoting prosperity, protecting dignity and acting for the good of the planet,
  2. Non-harm (non-maleficence): respecting privacy, ensuring security and exercising caution in designing AI solutions,
  3. Preserving human meta-autonomy, that is, keeping control in human hands and ensuring that processes concerning the machine's autonomy remain reversible at every stage of its operation,
  4. Justice: eliminating unfair discrimination, distributing goods equitably and promoting solidarity, and
  5. Explicability, comprising explainability — enabling us to understand how AI works (answering the question "how does it work?") — accountability, that is, the ability to assign responsibility for specific actions of AI (answering "who is responsible for it working like this?"), and transparency (answering "why does it work that way?").

Trustworthy and responsible AI, however, is more than ticking the appropriate boxes on a checklist, or even designing extra functions or buttons in the system to reverse AI autonomy. Besides ethics by design (ethics in the design of solutions), we also need ethics for design, understood as standards and certification paths, and clearly defined codes of conduct guaranteeing the integrity of research, design, construction, application and management of systems using AI solutions.

Katarzyna Głąb
Trainer