Robots With Artificial Intelligence: An Ethical Discussion

Robots with Artificial Intelligence are no longer science fiction characters. They are deeply integrated into the daily routines of millions of people and thousands of companies across the most diverse industries and sectors.

Who hasn’t talked to a chatbot or been answered by a robot on the phone? As communication and interaction are drastically altered and given new meaning, and as AI-powered robots take on increasingly human faces, personas, and expressions, distinctly human questions come to the fore.

Is there an ethical framework governing the actions and “thoughts” of these intelligent systems? To what extent do human standards apply to them? And who should be punished for misconduct?

These questions arise from documented cases of biased decision-making and other controversies caused by robots and AI systems, such as the reproduction of prejudiced speech.

The criteria used by these intelligent systems were not clear enough to the people affected by such situations. Some of the companies responsible for the systems tried to sidestep the issue, but until recently there was no centralized, international treatment of the topic.

International Model About Robots With Artificial Intelligence

In response, the European Union established seven criteria to determine whether an artificial intelligence system is trustworthy:

  1. Human oversight: All Artificial Intelligence must allow human intervention in its functioning. Rather than diminishing human autonomy, AI must guarantee the freedom and equality of human beings through its actions;
  2. Robustness and security: The systems need to be reliable and secure, with the resilience to handle adversity, errors, and inconsistencies during operation. One of the biggest challenges for developers today is creating algorithms capable of understanding complex issues and responding quickly to unforeseen, non-standard situations.
  3. Privacy and data protection: Artificial Intelligence needs to be transparent and provide full access to users’ data, as well as guarantee their privacy and security in situations where personal information can be used to harm or discriminate against them.
  4. Transparency: All of the system’s actions and decisions need to be clear, legible, and understandable to the public. Even if its programming is complex and technical, its activities need to be translated so that users can fully understand them.
  5. Welcoming diversity: The systems need to encompass and understand human beings, which includes understanding and accepting diversity in all its aspects. AI must therefore value inclusion, acceptance, and accessibility for the most diverse audiences.
  6. Social Welfare: Artificial Intelligence needs to be a resource and a tool to help human beings and humanity. It is necessary to use it to produce significant and beneficial social changes for the population without neglecting social, ecological, and sustainable responsibilities.
  7. Accountability: In case of errors or malfunctions, there must be a mechanism for accountability to users. In this way, the responsible developers can be held accountable for the results and consequences of AI malfunctions.

Application And Future Of The Model 

The model was created with the help of 52 experts on the subject, including academics, sectoral bodies, and companies such as Google, Bayer, Santander, and SAP. As the model is still being tested, the points raised do not have the force of law.

However, they are meant to guide developers of Artificial Intelligence platforms and systems, even though there is no punishment or restriction for failing to adhere to the model.

The pilot project will be evaluated in 2020 by Artificial Intelligence experts so that next steps can be discussed. Each of the points will be analyzed both separately and together for a more accurate assessment of the model’s feasibility and applicability, and to identify improvements and changes.

Why Is It Important? 

Intelligent systems have been incorporated into human routines so profoundly and naturally that we often barely notice whether we are dealing with a human or a machine. Procedures therefore need to be revised to align the activity of these systems with the standards accepted in human society.

Just as humans must follow rules of behavior and conduct, so must robots. Since every intelligent system is created and developed, from the first line of code, by human intelligence, there must be a standardized code of conduct for the programmers behind them. In this way, errors are reduced and the systems’ reach can safely expand.

Cases such as Microsoft’s artificial intelligence that reproduced prejudiced and sexist comments on Twitter should serve as a cautionary example, so that such errors are not repeated and are taken into account in the future.
