Ethical AI. Should we give AI the Ten Commandments?

We all bandy the word “ethical” about at the moment: an ethical company, ethical music festivals, and now ethical AI. But what exactly does that mean?

The word “ethical”, as defined by the Cambridge Dictionary, relates to beliefs about what is morally right and wrong.

The Collins Dictionary defines it as:

“Having to do with ethics or morality; of or conforming to moral standards”.

“Conforming to the standards of conduct of a given profession or group”.

Basically, ethics is a set of rules to abide by so that a matter is fair for all. Ethics has been around for thousands of years, as expounded by the likes of Aristotle, one of the most outstanding philosophers. His writings in ethics and political theory, as well as in metaphysics and the philosophy of science, continue to be studied, and his work remains a powerful current in contemporary philosophical debate.

Likewise, Plato, in his ethics and moral psychology, developed the view that the good life requires not just a certain kind of knowledge (as Socrates had suggested) but also habituation to healthy emotional responses, and therefore harmony between “the three parts of the soul”.

There is even a 3,000-year-old Egyptian schoolbook used to teach ethics to the sons of the nobility. We have many codes of ethics, and the situation at hand usually dictates which one we apply. So when we talk of ethical AI, what do we mean? Are we talking to the builders of AI or to the users? Whose ethics we are talking about still seems a little unclear.

On October 30, 2023, the White House issued an Executive Order aimed at positioning the United States as a leader in the realm of artificial intelligence (AI) while addressing ethical concerns.

This landmark order introduces a range of measures to ensure AI safety, security, and privacy, emphasizing equity and civil rights protection. It calls for new standards, such as sharing safety test results for AI systems, the development of guidelines for agencies to evaluate privacy-preserving techniques, and a commitment to mitigating discrimination and bias in AI applications. The order also promotes innovation, competition, and international collaboration in AI while encouraging responsible and effective government use of AI. This comprehensive strategy underlines the importance of ethical AI considerations and the need for global cooperation to harness AI’s potential.

Our summary of the key measures in the White House’s Executive Order on AI:

1. AI Safety and Security: Mandatory sharing of safety test results, rigorous safety standards, and protection against AI-enabled fraud.

2. Protecting Privacy: Prioritizing privacy-preserving techniques, strengthening privacy guidelines, and evaluating their effectiveness.

3. Advancing Equity and Civil Rights: Preventing algorithmic discrimination and ensuring fairness in the criminal justice system.

4. Supporting Consumers, Patients, and Students: Promoting responsible AI use in healthcare and education.

5. Supporting Workers: Addressing AI-related job displacement and workplace issues.

6. Promoting Innovation and Competition: Fostering AI research, competitive AI ecosystems, and modernized visa criteria.

7. Advancing Leadership Abroad: Collaborating on international AI frameworks.

8. Ensuring Responsible Government Use: Issuing AI use guidance, efficient AI acquisition, and a government-wide AI talent surge.

These measures aim to ensure ethical AI while fostering innovation and international cooperation.

We also talk about Emotional AI, since computers can read emotions by analyzing data, including facial expressions, gestures, tone of voice, force of keystrokes, and more. Any AI process used to determine a person’s emotional state and then react to it can be called Artificial Emotional Intelligence. This ability will allow humans and machines to interact in a much more natural way, very similar to how human-to-human interaction works.
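
To make this concrete, here is a minimal, self-contained Python sketch of that Artificial Emotional Intelligence loop: extract features, classify an emotional state, and react to it. The feature vectors and labels below are random stand-ins (a real system would derive them from faces, voices, or keystrokes), so treat this as an illustration of the pipeline’s shape, not a working emotion detector.

```python
# Toy sketch of an "emotion classifier": the numeric features a real
# system would extract from faces, voices, or keystrokes are faked
# here with random numbers, purely to illustrate the pipeline shape.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Hypothetical feature vectors (e.g., facial landmark distances,
# pitch statistics, keystroke force); random stand-ins here.
X = rng.normal(size=(400, 12))
y = rng.integers(0, len(EMOTIONS), size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "React" to the predicted emotional state, the second half of the
# Artificial Emotional Intelligence loop described above.
pred = clf.predict(X_test[:1])[0]
print(f"Predicted state: {EMOTIONS[pred]}")
```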

In 2023, emotional AI is also becoming common in schools. In Hong Kong, some secondary schools already use an artificial intelligence program, developed by Find Solutions AI, which measures micro-movements of muscles on the students’ faces and identifies a range of negative and positive emotions. Teachers are using this system to track emotional changes in students, as well as their motivation and focus, enabling them to make early interventions if a pupil is losing interest.

The problem is that the majority of emotional AI is based on flawed science. While emotional AI algorithms can, for instance, recognize and report that a person is crying, it is not always possible to accurately deduce the reason and meaning behind the tears. As such, AI technologies that make assumptions about emotional states may even exacerbate gender and racial inequalities in our society because of the bias already built into them. For example, a 2019 UNESCO report showed the harmful impact of the gendering of AI technologies, with “feminine” voice-assistant systems designed according to stereotypes of emotional passiveness and servitude.

Emotional AI technologies will become more pervasive by the end of 2023, but if left unchallenged and unexamined, they may act on systemic racial and gender biases, replicating and strengthening the inequalities of the world and further disadvantaging those who are already marginalized.

The latest research on emotional AI is very promising for disabled people, applying AI to brain mapping while the tech industry gets better at miniaturizing components. This leads us towards the emotion recognition of people with visual disabilities through brain-computer interfaces.
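
As a rough illustration of what such a brain-computer interface might compute, here is a toy Python sketch that classifies an emotional state from EEG-style signals. The recordings are synthetic, and the choices of band-power features and an SVM classifier are common in the literature but are assumptions here, not the method of any particular study.

```python
# Toy sketch of emotion recognition from EEG-style signals, as a
# brain-computer interface might do. The "recordings" are synthetic;
# band-power features and an SVM are common choices, but the details
# here are illustrative assumptions, not a published method.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

rng = np.random.default_rng(1)
FS = 256  # sampling rate in Hz

def band_power(signal, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def features(trial):
    # Classic EEG bands: alpha (8-12 Hz) and beta (13-30 Hz).
    return [band_power(trial, 8, 12), band_power(trial, 13, 30)]

# Synthetic stand-ins for labeled EEG trials (0 = calm, 1 = anxious).
trials = rng.normal(size=(100, FS * 2))  # 100 two-second trials
labels = rng.integers(0, 2, size=100)

X = np.array([features(t) for t in trials])
clf = SVC().fit(X, labels)
print("Predicted state:", "anxious" if clf.predict(X[:1])[0] else "calm")
```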

Examples like this application seem very ethical in enabling communication for those who have a disability, but will it enable AI to feel emotions? Our answer is no: AI will recognize that someone is feeling anxiety or anger, but it will not be able to predict how the person reacts to these feelings, as many elements of how people deal with emotions come into play. It all varies greatly with factors like the nurturing environment people grew up in, or the religion that influenced the way they react. At the end of the day, would we want AI to empathize with us over traumatic events? Would we want it to say it is sorry when we are practically sure it does not mean it?

Coming to the article’s question of whether or not it is good to give AI the Ten Commandments: for good or bad, we must never forget that it can be as simple as switching a digit for AI to go rogue. So, if we are to have AI in any guise, particularly a chatbot, we argue that the interaction should mostly ease our lives and keep us from harm. Under this view, the Ten Commandments are not such a bad idea, but they are not enough! As standardized ethics does not exist, ethical AI can only be as ethical as its programmer, who, as we have seen, can be any person, company, or government applying specific types of ethics. As long as the ethical principles the programmers employ, and the reasons behind them, are completely transparent, we should gain the right insight into their ethical utilization of AI. Shouldn’t there be a prerequisite license for embarking on the journey of AI coding and developing its ethics?