Nowadays the use of Artificial Intelligence (AI) has been increasing by leaps and bounds, in most cases even without our knowledge. The smartphone that ordinary people use in their everyday communication has many of its features powered by AI. The voice assistant functions, photography and facial recognition, to mention but a few, are AI-driven. Researchers, academics and even the most casual users who run Google searches for information may not know that they are using AI.
But then why is the controversy still raging over whether AI is going to replace humans in the workplace, or whether it is even becoming a threat to human existence? One may recall here what a great mind of our time, Stephen Hawking, thought about AI. He even warned that super-smart computers (run by AI) could one day spell the end of the human race! Hawking said this after he upgraded the system that enabled him to write and communicate despite his inability to perform these tasks due to the motor neuron disease that disabled him. What worried the great physicist was that the AI-powered system he was using could anticipate what he would like to write next. That was an eerie feeling of prescience on the part of the system that was helping him. Hence his warning to the rest of humanity.
So, many are of the view that if progress on the AI front is to continue, future AI should be taught to be benign and, if possible, ethical. The world's richest man, Elon Musk, even donated US$10m to the cause of making AI more beneficial. And not only Musk: Bill Gates, who became one of the world's richest men through the software business, has also expressed his fear about AI's potential to destroy humanity. If the world leader in making computer software has such fear about AI, then can one blame those outside the realm of computer science for their prejudice against a technology that mimics humans?
Consider the open letter drafted by the 'Future of Life Institute', an organisation of which professor Hawking and Elon Musk were advisers. The letter, titled 'Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter', was first circulated among AI researchers at a conference and then made public in January 2015. The self-explanatory title of the letter unambiguously points to the benefits society might draw from the technology if it is used conscientiously. It further stressed that researchers must be alert to the possibility of creating a monster that they cannot control. So, developers of AI algorithms (programs) are also getting scared of the destructive potential of a technology that may surpass humans in intelligence some day! Or has it already?
Remember IBM's supercomputer, Deep Blue, the chess expert, which defeated the then world chess champion, Garry Kasparov, in the late 1990s? Kasparov, outmanoeuvred by Deep Blue at every move, began to think that somehow the machine could read his mind. In fact, the world champion was already psychologically beaten before he finally accepted defeat. But was that an instance of the rising power of Artificial General Intelligence (AGI), as opposed to special-purpose AI like Deep Blue? For Deep Blue was just a chess expert and not an expert at a card game such as bridge or the quiz competition called 'Jeopardy!'. AGI, as it is being imagined, would possess all the faculties of human thought, including intuition and other abilities. A supercomputer like Deep Blue, by contrast, can analyse the enormous volume of data on all possible moves that can be made on a chessboard (millions and millions of them), sift through databases on the endless number of opening moves and at the same time examine data on the history of Kasparov's playing style. The computer can make such analyses in the twinkling of an eye and can predict and pre-empt every move the chess world champion makes. This was humanly impossible. But Deep Blue could do such data processing only for chess. It could not do so for other types of games of which it had no memory. So, general-purpose AI, the so-called AGI, or self-conscious machines pursuing humans to destroy them, can be found only in Hollywood movies. Those have been around since even before the advent of the digital revolution.
AI's speciality is that it can process and analyse a great amount of data with the help of a set of instructions (an algorithm) that tells it what tasks to do. As it can crunch such huge amounts of data and produce results that earlier computers were not capable of, AI seems to have surpassed humans in intelligence. But that is a gross misconception. Like their predecessors, these are just digital machines. They still do what they are told to do.
They are not able to think or reflect the way humans do. Nor can they transfer their expertise in one domain of experience to another. If properly used, AI can help solve many of the issues that humanity faces at the moment, in fields including health and medicine, agriculture, space research and climate science, you name it. But problems may arise if the humans giving AI-run machines their instructions have evil motives. In that case, AI at a more advanced level of development could indeed cause great harm to humanity. It all depends on how the powerful would like to use it.