The Ethics of Artificial Intelligence — Where Does the Morality Lie?

The morality of AI, or whose morality?
It is no longer remarkable that artificial intelligence, increasingly deployed in spheres once reserved for humans, acts with growing autonomy when confronting the complexity of human problems. By processing enormous data sets in a short time and solving tasks the human brain cannot cope with, it has come to embody the hopes of societies. Once programmed and switched on, the machine simply runs until an answer is found.
Of course, AI is not all benefits, nor is it a way to unravel every problem of humanity, and the discussion of its advantages and risks continues. Alan Turing, who was among the first to describe machine intelligence and its capabilities, also pointed to the potential risks of its use. These risks concern not only technological aspects but also the legal, economic, cultural and ethical dimensions of particular applications. As AI develops and spreads, questions of its autonomy and morality, which until now seemed to belong only to the human world (and mostly concerned who uses AI and how), come increasingly to the fore. It is hardly surprising that the expanding autonomy of AI, until recently found only in science fiction novels and films, raises concerns.
AI technology in the private and public sectors
The use of artificial intelligence technologies in both the private and public sectors has forced attempts to address the issues they raise in legal regulation. Any attempt to define AI resembles the pursuit of an ever-shifting horizon, since these technologies are improved continuously and incrementally, and constructing a legal definition of artificial intelligence is therefore difficult. Here we understand AI broadly, as systems that display intelligent behavior by analyzing their environment and taking autonomous actions to achieve specific goals (including through machine learning algorithms). On this approach, AI is the endeavor of making machines intelligent, and intelligence itself is the quality that allows an agent to function appropriately, and with its future in mind, in the world around it.
Can algorithmic decision-making by neural networks achieve a level of moral autonomy similar to that of human decision-making? If we can speak of the morality of AI at all, what is the source of that morality? At bottom, it is a question of how morality and power interact in the processes of designing, implementing and operating AI.
Artificial intelligence, whatever attempts are made to grant it subjectivity and traits inherent only to humans, remains a tool in human hands. Yet we have entered an era in which machines can be entrusted with decisions about life and death, so algorithmic morality deserves thorough analysis. An intelligent robot could begin to act according to its own moral code, one that may be unrecognizable to humans. Machine ethics, unlike computer ethics, which considers the ethical use of machines by humans, asks whether the behavior of machines towards human users, and perhaps towards other machines, is ethically acceptable. Its goal is an ethical intelligent agent (IA): one guided by an ideal ethical principle or set of rules, capable of finding and calculating the best course of action in an ethical dilemma by applying ethical principles. But who should teach the machine morality if traditionally understood ethics does not apply to machines?
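To make the abstract idea of "calculating the best course of action using ethical principles" a little more concrete, consider a minimal, purely illustrative sketch in Python. The duties, their weights, the scoring scale and the example actions below are invented assumptions for the sake of illustration; they are not taken from any real machine-ethics system.

```python
# Toy sketch of a rule-based "ethical agent": each candidate action is scored
# against a set of weighted prima facie duties, and the agent picks the action
# with the highest total score. All duties, weights and scores are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # How well the action satisfies each duty, from -1 (violates) to +1 (upholds).
    duty_scores: dict[str, float]

# Hypothetical duties with hypothetical relative weights.
DUTY_WEIGHTS = {
    "non_maleficence": 3.0,   # avoid causing harm
    "beneficence": 2.0,       # actively do good
    "autonomy": 1.5,          # respect the user's choices
}

def evaluate(action: Action) -> float:
    """Weighted sum of how well the action satisfies each duty."""
    return sum(DUTY_WEIGHTS[duty] * score for duty, score in action.duty_scores.items())

def choose(actions: list[Action]) -> Action:
    """Pick the action with the highest ethical score."""
    return max(actions, key=evaluate)

if __name__ == "__main__":
    options = [
        Action("override_patient_refusal",
               {"non_maleficence": 0.8, "beneficence": 0.9, "autonomy": -1.0}),
        Action("accept_patient_refusal",
               {"non_maleficence": -0.2, "beneficence": 0.0, "autonomy": 1.0}),
    ]
    best = choose(options)
    print(f"Chosen action: {best.name} (score {evaluate(best):.2f})")
```

Even this trivial example exposes the question raised above: the duties and their weights are chosen by people, so whatever "morality" such an agent calculates is the morality of its designers rather than its own.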
Human cognition does not necessarily coincide with human moral reasoning; knowledge about moral reasoning does not necessarily capture what is specifically moral about it, nor is every piece of practical reasoning a moral reasoning in itself. Thinking by analogy, which underpins the idea of replicating human morality in machines, runs into the lack of adequate analogies between humans and machines, above all where moral feelings and the complexity of the motivation for moral action are concerned. There is therefore no guarantee that an ethically perfect intelligent agent can be constructed, one whose moral perfection would be unbounded thanks to a process of continuous and flawless moral self-updating.
Is it just a matter of fear of AI?
Recall Melvin Kranzberg's first law: technology is neither good nor bad; nor is it neutral. On the one hand, one of the most important challenges in AI safety research is keeping control over a potentially superintelligent AI capable of transforming the environment into one uninhabitable by biological life. On the other hand, a well-constructed AI could secure a future in which everyone's life is dramatically improved. The problem, then, is not potential autonomy as such, but the insufficient intelligence of poorly designed, "dumb" machines that pose a risk to the human and non-human users who interact with them. The responsibility for that risk lies with the person who constructs or operates the AI system, and it is therefore up to that person to build trustworthy AI.