When most of us think about artificial intelligence (AI), our minds go straight to robots and sci-fi thrillers in which machines take over the world. But the fact is that AI already exists among us: on our smartphones, fitness trackers and refrigerators. It can drive cars, trade stocks and shares, and even learn to cook simply by watching YouTube videos. And this is just the beginning.
Machines are now programmed to think like human beings and mimic the way humans act. The ideal characteristic of AI, which sets it apart from conventional software programmes, is its ability to learn and rationalise on its own and, when required, take actions that have the best chance of achieving a specific goal. By contrast, in a conventional software programme, the developer must define in advance every path the programme can follow to solve a given problem.
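A minimal sketch may make this contrast concrete. The spam-filter scenario below is a hypothetical illustration, not an example from the article: the first function is conventional software, where the developer hard-codes every rule; the second derives its own rule from labelled examples, in the spirit of a learning system.

```python
# Conventional software: the developer enumerates every rule in advance.
def rule_based_spam_filter(message):
    banned = {"winner", "free", "prize"}  # rules fixed by the developer
    return any(word in banned for word in message.lower().split())

# Learning approach: the program derives its own rule from labelled data.
def train_spam_filter(examples):
    """examples: list of (message, is_spam) pairs."""
    spam_counts, ham_counts = {}, {}
    for message, is_spam in examples:
        for word in message.lower().split():
            counts = spam_counts if is_spam else ham_counts
            counts[word] = counts.get(word, 0) + 1

    def classify(message):
        # Score each word by how much more often it appeared in spam
        # than in legitimate mail during training.
        score = sum(
            spam_counts.get(word, 0) - ham_counts.get(word, 0)
            for word in message.lower().split()
        )
        return score > 0

    return classify

examples = [
    ("claim your free prize now", True),
    ("you are a lucky winner", True),
    ("meeting moved to friday", False),
    ("lunch at noon", False),
]
learned_filter = train_spam_filter(examples)
print(learned_filter("free prize inside"))   # True: inferred from the data
print(learned_filter("see you at lunch"))    # False
```

The point of the sketch is that the learned filter's behaviour comes from the data it was trained on, not from rules the developer wrote, which is why such systems depend so heavily on the quantity and quality of data discussed next.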
For AI systems to work, huge quantities of data are required. This is why, although AI as a field of study has been around for close to 60 years, the shortage of data for much of that period, combined with limits in computational power, constrained AI's growth until recently. Today, with the explosion of the world wide web and social media, and the relative ease and affordability of Internet connection, the amount of data being generated and information being digitised has taken a quantum leap, setting the stage for AI to become a disruptive force across the global economy.
According to a PwC report, AI could contribute up to US$ 15.7 trillion to global GDP in 2030, with US$ 9.1 trillion coming from consumption-side effects and US$ 6.6 trillion from increased productivity. For context, that would add about 14 per cent to global GDP, more than the current combined output of China and India.
Unfortunately, everything has its pros and cons. For all the promising opportunities it could open up, AI also brings with it a number of ethical dilemmas and threats. For example, critics have pointed out that while our smart devices are designed to make our lives easier and healthier, they are also capable of working toward micro and macro goals that benefit their makers and designers rather than us, the users, even though it is we who own the devices in question.
An important issue being heavily discussed around the world at present is how these technologies are putting our jobs in jeopardy. Thanks to advances in both AI and its sibling field of robotics, automation is now sweeping across more industries than ever, putting many manual labourers out of work. The most unexpected shift, though, is what AI means for white-collar jobs that do not require manual labour. AI has already made inroads into many of the information-based tasks that are traditionally the domain of high-cognition professionals such as doctors, lawyers and even high-level executives.
Other ethical problems worth noting here include potential bias or discrimination in decision-making processes involving AI (can you realistically appeal an AI-made rejection of your mortgage application?), and liability in technology-induced accidents (who is responsible when an autonomous vehicle causes a crash?). Or, far-fetched as it may sound to many of us, could AI have civil rights?
There are also pertinent points from the perspective of consumer law and policy. The synergy between AI and big data enhances the power of businesses and their dominance over consumers. AI systems can use big data to anticipate consumer behaviour and to trigger desired reactions. As a result, consumers can be outwitted, manipulated and induced into suboptimal purchases or other unwitting choices.
Our privacy is also affected in multiple ways: consumer data are continuously collected by tracking online and offline consumer behaviour, then stored, merged with other data sources, and processed to elicit further information about consumers through profiling. The resulting indications about consumer attitudes, weaknesses and propensities are deployed in decisions affecting individual consumers, or in attempts to influence their behaviour.
Many of us are perhaps aware of the question of whether AI poses an existential risk to humanity. As the renowned physicist Stephen Hawking wrote in May 2014, "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all." And that brings us to the final question: are we prepared?
Udai Singh Mehta is deputy executive director at CUTS International. firstname.lastname@example.org
© 2017 - All Rights with The Financial Express