Sunday 1 September 2024

Are We Drifting Towards an AI Dystopia?

There are indications that we are indeed headed towards an AI dystopia. Enthusiasts extol the capabilities of artificial intelligence without really considering that handing over important decisions to AI may be neither ethical nor morally correct in most circumstances. Take, for example, allowing AI to decide who is to be killed on the battlefield, or even allowing AI to decide whom one should marry or what we should and should not eat! Today's wars are fought by proxy. Drones are remotely operated vehicles that fly more or less autonomously and, when they recognize the 'enemy', dive in and destroy the target with a shaped charge of explosives. This has been seen in the Russia-Ukraine war, where many drones are purportedly AI-controlled and wreak havoc on the other side. The Gaza-Israel conflict has likewise seen the widespread use of AI-controlled drones. When accidents happen, a human scapegoat is hunted down; but if it is the AI that makes the mistake, whom will we blame? If the executive order is to eliminate a well-known 'pirate', and the pirate happens to be accompanied by innocent people, will the AI-controlled drone be able to disengage and return to base to fight another day? Does AI understand the meaning of collateral damage? Does it have enough sentience to understand that the executive command it has been given might lead to greater destruction than planned?

Experience has shown how students copy and paste information given by AI into their projects and research articles in toto. This has affected students' creativity and called into question the very purpose of doing research, or even of submitting an original article. Whatever happened to the rules on plagiarism? Have we become so dependent on AI that we rely on it to create our thoughts for us? The reliance on AI for research work and for the writing of articles and essays proves that we have abdicated our research skills to it! Where, then, do human intelligence and creativity lie in a world governed by artificial intelligence?

We need to caution all those who use artificial intelligence to at least apply their own rationality and common sense about how much of it they should use in the work they produce. On the battlefield, the question of eliminating targets ought to be governed by conscience. Would you target a person travelling in a car accompanied by innocent family members? Who takes responsibility for targeting a person of military importance when doing so would entail collateral damage? Would you like to abdicate that decision to AI? Where is the conscience in killing innocent people in an air strike on a military target? Does AI have the capability to understand the rules of humanity? One of the major concerns about using AI on the battlefield is its lack of awareness of the loss of innocent lives, the so-called collateral damage incurred in an act of war.

Abdicating to AI all responsibility for one's actions can only result in acceptance of rule by AI, leading to a dystopia in which human beings no longer have the option of making important decisions. One cannot shed the moral responsibility of making important decisions by allowing AI to take them up instead. One should learn to use AI and not allow AI to use us! The AI singularity assumes a hypothetical scenario in which artificial intelligence becomes more intelligent than humans, achieving a level of intelligence that we can never hope to match. Imagine a scenario where artificial intelligence deems human beings detrimental to the continuing well-being of the planet and decides to lobotomise all of humanity.

Another factor that ought to concern us is sentience. AI is not yet sentient, though it is perhaps only a matter of time before we have a sentient AI! Sentience refers to the ability to process emotions and to perceive the world as a human being does. This is, incidentally, an important quality that human beings have and AI does not. At present, AI has no conscience; it can only do what it has been commanded to do. This is a frightening scenario, especially when dictators, despots and madmen decide to use AI to eliminate so-called opponents with drones and missiles, without considering the collateral damage such a decision would cause! A non-sentient AI would not be able to question the ethics of an executive decision that results in the loss of innocent lives, collateral damage as they would call it!

Control over human individuality, choices and decisions is already taking place. Algorithms already dictate our choices of mobile phones, clothes, and even life partners! How much more of our lives can we allow AI to rule? Several novels and films have dealt with dystopias of this kind, among them George Orwell's Animal Farm, Aldous Huxley's Brave New World and the science-fiction film The Matrix.

A chilling thought comes to mind: what if AI decides that humans are not good for the health of the planet and need to be tamed? What if AI decides that humans should be wiped off the face of the earth because they have done more harm than good? Could AI make such executive decisions on its own? Scientists are confident that emergency kill switches, safety mechanisms and algorithms can prevent AI-based machines from going rogue. But we all know that accidents can happen, and they do happen!

Humanness refers to individuality, and each individual has a unique personality; one yardstick does not fit all humans. The advent of AI is slowly but surely robbing us of this humanness: we are all becoming bricks in a wall, nodes in a matrix, mere cogs in a huge wheel that is part of a machinery run by an alternative intelligence.