What ethical boundaries should we consider and what should we know about AI? -By Dino J

23 May 2024

Over the last few years, AI has grown rapidly in capability, and many believe it could one day become a “superintelligence”, which Nick Bostrom’s paper “Ethical Issues in Advanced Artificial Intelligence” describes as “an intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills”. But what is AI, and what ethical boundaries and implications should we keep in mind to make sure that we do not depend on it entirely?

What is Artificial Intelligence?

In basic terms, AI seeks to perform tasks that a human can do, at a human level or better. In more formal terms, Oxford Languages defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Though we may not realise it, simple AI algorithms are all around us: autocorrect on our phones and autofill in Word both use supervised learning to predict the words we type by analysing our typing style, which in this case serves as the dataset for predicting the next word. This raises the question: what are supervised and unsupervised AI? Simply put, supervised AI is trained with a guideline of what the “correct output values should be”, as Google Cloud explains, whereas unsupervised AI analyses the data independently, without such a guideline or instruction.
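The next-word prediction idea above can be sketched in a few lines. This is a deliberately simplified toy (a bigram frequency model of my own construction, not how a real phone keyboard is implemented): the typing history acts as the training dataset, and the most frequent follower of a word becomes the suggestion.

```python
from collections import Counter, defaultdict

def train_bigrams(history):
    """Count which word follows which in the user's typing history."""
    words = history.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# The "dataset" is simply what the user has typed before.
history = "see you soon see you later see you soon"
model = train_bigrams(history)
print(predict_next(model, "you"))  # prints "soon" ("soon" follows "you" twice, "later" once)
```

Real autocorrect systems are far more sophisticated, but the principle is the same: past behaviour is the labelled data from which future output is predicted.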

Ethical debates and implications of AI

Now that we know what AI’s goal is and the science behind it, let us discuss the ethical debates around it and the topic of machine ethics. Firstly, consider schools and pupil reports: many parents are unhappy about AI writing their children’s reports, which raises the question of what the AI could really know about their child. This is an example of humans becoming dependent on AI; if we are entirely reliant on an object or superintelligence, then instead of us controlling this amazing, brilliant, but slightly dangerous invention, the invention could start to control us. It also shows that in activities involving the documenting of human behaviour, we should remain in control, ensuring that AI aids human work rather than taking it over.

Furthermore, delving into the philosophical ideas behind AI, Virtue Theory, as introduced by Aristotle, is the idea that a person may, though rarely, be virtuous, meaning they do the right thing at the right place and time, and that the key to becoming virtuous is to find the Golden Mean. To explain this, let us use an example: you are walking down the street and notice an elderly woman beside you being mugged. If the offender is your size or smaller, the Golden Mean is to intervene. However, if the offender is twice your size and well built, the Golden Mean in this situation is to call on an authority that can deal with the problem, which in this case would be the police. Some people argue that it would be impossible to implement this in AI, since one of the things needed to find the Golden Mean is experience, but nevertheless we should aim to build it into all AI.

Using this logic, we should also favour supervised AI, to make sure it is not used in the wrong way, and programmers, developers and companies are starting to realise that this is pivotal for a creation so powerful. There are other dilemmas too, such as the self-driving car problem: if the car cannot brake in time, it can either go straight, causing the death of its passengers, or swerve into a motorbike or into another car, injuring or killing people in either case. Or alternatively, the trolley problem, where a runaway trolley is heading towards five people on the track: you may take the utilitarian approach (the greatest good for the greatest number) and pull the lever to divert the trolley, killing one person, or do nothing and let five die. Of course, if you pull the lever you are deemed to have killed someone willingly, while if you take no action, five will die. Many in the philosophical world also support the idea of machine ethics geared towards superintelligence, as in James H. Moor’s “The Nature, Importance, and Difficulty of Machine Ethics.”
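The utilitarian rule described above is easy to state as code, which is exactly what makes it tempting to program into a machine. The sketch below is my own toy illustration (not a real autonomous-driving system): among the available actions, pick the one with the fewest expected casualties. Notice that the code says nothing about the moral difference between killing by action and killing by inaction, which is precisely the part of the dilemma that resists being reduced to a number.

```python
def utilitarian_choice(outcomes):
    """Pick the action with the fewest expected casualties.

    `outcomes` maps each possible action to its expected casualty count.
    This encodes only the utilitarian calculus; it captures none of the
    moral weight of actively causing a death versus allowing one.
    """
    return min(outcomes, key=outcomes.get)

# The trolley problem, reduced (controversially) to casualty counts.
trolley = {"pull_lever": 1, "do_nothing": 5}
print(utilitarian_choice(trolley))  # prints "pull_lever"
```

That a few lines suffice is the point: the hard part of machine ethics is not computing the minimum, but deciding whether minimising casualties is the right objective at all.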

All these problems and dilemmas must be considered in the making of AI, and the wider population should be informed. Programmers and developers designing AI should be fully involved in, and aware of, the guidelines that need to be set, and we, as everyday people and consumers of this technology, should understand these problems and the future of AI, and how to use it for the right purposes, not the wrong ones.