
Where AI needs to be regulated

Should Artificial Intelligence (AI) be regulated? The issue has been the subject of heated debate. In many cases, the call for regulating this latest addition to digital technology stems from the fear that AI may not only equal humans in intelligence but even surpass them. But that is only one side of the debate. The other side is political, centring on protecting new advancements in research and development of the technology so that rivals cannot get their hands on them. Meanwhile, AI's use is expanding at breakneck speed in every field of human endeavour.

Given its implications for national security, military use, business competitiveness and individual privacy, every nation, especially the advanced ones, feels the urgency of regulating AI's use. The European Union, for instance, reached a provisional agreement on Friday, December 8 last, the world's first ever, on comprehensive laws to regulate AI. The agreement is about passing the EU AI Act, which was proposed by the European Commission (EC), the EU's executive arm, in 2021. The EU's approach classifies AI systems into four categories of risk, each of which will come under its own specific regulations.

However, defining the different AI systems and the risks they pose is yet another challenge the regulations will face when they are implemented.

For instance, AI that falls under the EU's highest risk category, termed 'unacceptable risk', is banned with some exceptions. These include social scoring that discriminates against people based on their behaviour or socio-economic status, and real-time biometric identification such as facial recognition. The second category, 'high risk' AI, may be considered for commercial release subject to pre- and post-market evaluations. These evaluations include thorough examination, documentation of data quality, and an accountability framework with human oversight. High-risk uses include autonomous vehicles, medical devices, critical infrastructure, education and government services. Before releasing their products in the market, high-risk AI providers will be required to register them in an EU database managed by the European Commission. 'Limited risk' AI systems, on the other hand, carry lighter transparency obligations so that users can make informed choices about them; into this category fall systems that create or manipulate images, audio and video. The fourth category, low or minimal risk AI systems, carries no obligations.

Friday's approval of the EU AI Act came after hours of negotiation between members of the European Parliament and the EU member states. The landmark deal thus clears a major hurdle before it is formally made into law. However, as always, the devil is in the detail. Before the legislation is finalised, the details of the deal will have to be sorted out, and that may change the final shape of the accord. The main issues of contention the agreement covers concern the transparency obligations that foundation models, such as the one behind ChatGPT, and general-purpose AI (GPAI) systems must meet before their commercial release. Notably, a foundation model is a deep learning algorithm pre-trained on enormous data sets gathered from the public internet.

But the approval of the EU AI Act has not been well received by businesses. Their disagreement is over restricting the technology, especially foundation models, even though they supported the risk-based approach to regulating AI's use. Privacy rights groups such as European Digital Rights have also expressed dissatisfaction with the EU accord, especially its move to 'legalise live facial recognition across the (EU) bloc'. Previously, too, the EU's business community, in an open letter, expressed reservations about the move towards AI legislation, arguing that it would jeopardise Europe's competitiveness and technological sovereignty.

Once given formal shape as a law, the EU accord on AI will allow enforcement to be monitored through the EU AI Office and, where necessary, violators to be fined anywhere from 7.5 million euros or 1.5 per cent of turnover to 35 million euros or 7.0 per cent of global turnover. Citizens will also have the right to file complaints against AI providers.

The EU's approach to AI legislation is a prescriptive, top-down one focused on restricting those uses of AI perceived to be risky. Another big player in the AI world, China, favours state review of algorithms in advance to ensure that they are in line with socialist principles. In the land of the high-tech giants, the US, the approach to regulation, if any, will not take the form of a single national law but a decentralised, bottom-up one. As the country has a host of executive-branch agencies empowered to frame regulations, the US regulatory framework on AI would look like a patchwork of domain-specific agency actions. These are likely to be less controversial rules targeting areas such as AI research funding and healthcare; the domains might also include financial services, housing, the workforce and child safety. Obviously, the US government will allocate most of its funding to AI research, innovation, defence and intelligence. And its spending on AI research in the private sector is going to shape, in large measure, the global AI market.

Overall, any regulatory policy on AI, once adopted by the USA, will have a huge impact on the EU's AI policy. And if businesses pursue their own agenda on 'responsible AI', we are going to see a messy global regulatory landscape in the future.

Let's keep our fingers crossed.

 

