
Christopher Nolan's Oppenheimer and the paradoxical state of Artificial Intelligence

Figure 1 - An AI playing a game of chess

The author has worked on more than 20 academic and corporate AI research projects and has watched the film Oppenheimer. He has also written a four-part science fiction series, 'The Sentience,' which required extensive research into artificial intelligence. Now, he wants to share his worries about the explosive growth of AI and his recommendations for the future.

Some industry experts have dubbed Oppenheimer the 'Most Important Film of the Century.' Other critics have published articles and YouTube videos explaining why the film should be seen in 70mm IMAX theatres.

But according to the film's director, entertainment wasn't the only purpose behind this film. And no, this writer is neither being paid by the film's producers nor about to spoil the film in this article.

Last week, Christopher Nolan joined a panel of scientists and journalists to discuss the film. At the end of the panel, Nolan was asked whether his film serves as a warning to the Silicon Valley investors pouring exorbitant amounts of money into AI technologies.

Nolan stated that accountability should be considered when innovating in technology and that this film acts as a warning to inventors. As AI becomes integrated into defence infrastructure, it may gain some control over nuclear weapons, hence the need for caution.

Now take a step back and think for a moment. The growth of applied artificial intelligence is being linked to the potential extinction of the human race. Cambridge University Press published a report, authored by twenty-six experts, describing how AI could transform the risk landscape.

The Center for AI Safety published a letter on the urgent risks of AI, signed by tech moguls and industry experts including Sam Altman (CEO, OpenAI), Bill Gates, Demis Hassabis (CEO, Google DeepMind) and other AI scientists.

A report from the University of Oxford estimates a 20% chance of worldwide devastation by 2100 because of AI. It suggests that a path to nuclear-biological 'conflagration amplified by automated weapon systems' might result in ten times more deaths than World War II.

The growth of AI is accelerating on the back of VC investment, despite the economic downturn affecting every sector in the UK. The impact of the post-COVID economic slump is evident in rising interest rates and job cuts. The US is also experiencing a recession, yet this has not hindered the remarkable growth in AI investment.

Among the notable investments in recent months, Inflection AI raised USD 1.3 billion, New York-based AI startup Runway raised USD 141 million, and Typeface secured USD 100 million.

Most of these companies are hiring in the UK. Meanwhile, Nvidia is expected to close a record-breaking USD 300 million deal with Lambda Labs. Another startup, founded by ex-Meta researchers, raised USD 113 million. Salesforce has pledged USD 500 million for generative AI startups. It's not hard to see where I am going with this: geopolitical rivalry and commercial opportunity will eventually outweigh the argument about AI's risk to humanity.

The connection to Oppenheimer's story

In the film Oppenheimer, there is a scene where the protagonist, Robert Oppenheimer (played by Cillian Murphy), talks to Leslie Groves (played by Matt Damon) about his invention. At this point, viewers realise that he has lost control of his project.

When someone creates a groundbreaking innovation and that idea is turned into a real-world application, they may eventually find that the idea now belongs to someone else. Despite the inventor's innocent intentions, their creation may end up in the hands of investors or other entities and may not have the positive impact they had hoped for. In Oppenheimer's case, he was blinded by the allure of his pioneering vision until something went wrong.

Figure 2 - An abstract painting by DALL-E

Evolving computing architecture

In the last decade, computers have significantly enhanced their learning capabilities, and AI technology is already present in our daily lives. Within the next few years, AI is expected to become an essential component of our lives, whether we use it directly or not. Although ChatGPT is simply a language model, it has proven incredibly useful in my own life. Recently, AI applications such as Midjourney and DALL-E have emerged that can generate art within seconds, while others can compose music from minimal input. But the underlying architecture is about to change.
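
For readers curious what 'simply a language model' looks like from a developer's side, here is a minimal, illustrative sketch of calling a hosted model; it assumes OpenAI's public Python SDK and an API key, and the model name is an assumption chosen purely for illustration.

```python
# Minimal sketch of querying a hosted language model. Assumes the
# OpenAI Python SDK (pip install openai) and an API key stored in the
# OPENAI_API_KEY environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[{"role": "user", "content": "Summarise Oppenheimer in one line."}],
)

print(response.choices[0].message.content)  # the model's generated text
```

The same text-in, text-out pattern is how such models end up embedded in everyday products, which is why we may use AI without ever opening a chatbot.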

Having recently delivered a project on a long-range quantum communications protocol using photon entanglement, I would say that the world is yet to see the true power of a quantum computer.
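
As a toy illustration of the entanglement such protocols rely on (this is a generic sketch, not the author's protocol), a short NumPy simulation shows that measuring the two halves of a Bell pair always produces matching bits; this perfect correlation is the resource that entanglement-based communication schemes exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), written as amplitudes
# over the four joint outcomes |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2  # Born rule: outcome probabilities [0.5, 0, 0, 0.5]

# Simulate 1,000 joint measurements and split each outcome into
# Alice's bit (first qubit) and Bob's bit (second qubit).
samples = rng.choice(4, size=1000, p=probs)
alice, bob = samples // 2, samples % 2

print(np.mean(alice == bob))  # prints 1.0: the two bits always agree
```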

For instance, a recent study by Google claims that its quantum computer completed, in a matter of seconds, a calculation that would take the most powerful traditional supercomputer 47 years.
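
To put that figure in perspective, a back-of-the-envelope calculation, assuming (our assumption, for illustration) a quantum runtime on the order of one second, implies roughly a billionfold speed-up:

```python
# Rough scale of the claimed quantum advantage. The one-second quantum
# runtime below is an illustrative assumption, not the paper's figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 seconds
classical_seconds = 47 * SECONDS_PER_YEAR  # ~1.48e9 seconds
quantum_seconds = 1.0                      # assumed

print(f"Implied speed-up: ~{classical_seconds / quantum_seconds:.0e}x")
# -> Implied speed-up: ~1e+09x
```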

Suggestions that demand attention

The world is currently in a state of paradox. While AI brings dangers, thwarting its progress could hinder humanity's advancement. In light of this, there needs to be a set of recommendations that can reduce the risks. But there are several challenges.

Policy: There is a worldwide lack of policy on regulating AI development. Prioritising policy requires thorough research and consultation with both technical experts and users, who can voice their interests and concerns. As policies are drafted, policymakers must remain agile and adaptable, because AI constantly evolves and its very definition will change. Policymakers need to find ways to keep up with this dynamic growth to ensure effective regulation.

Sandboxing: Experimental AI must be tested and controlled within a supervised environment. It should not be released into the open-source world or made available to the public unless it has been rigorously tested.

Forecast: To prepare for a potential collapse, it's important to have actionable groups that can translate statistics into effective defence strategies. After a product is released, independent researchers and governing bodies should continuously monitor its use. If an open-source or commercial AI is found to be a threat, the data collected can serve as evidence to regulate its growth.

Monopoly: Every country has rules against commercial domination, but an AI monopoly could have catastrophic consequences. No single corporation or entity can be given control over the most powerful AI in the world.

Law: While certain parts of the world can be dangerous to visit, in places like the UAE people feel comfortable leaving their cars and gadgets in public areas. Implementing strict laws ensures the safety of the general public in their day-to-day lives. The same applies to the development of potentially rogue AI.

Coping with Hazards: Unlike nuclear research, AI cannot be restricted in the same way. Many models are open source, allowing developers to build powerful commercial applications. Countries should unite to pool talent and find the balance between cynical predictions and cautious optimism.

Training: People will lose jobs because of AI. Hence, there should be mechanisms in place to train those people to understand different aspects of AI, from its risks to its applications.

Hard Boundaries: It is crucial to set clear limits on which tasks AI should be allowed to perform, particularly where weapon systems are controlled by AI. Until AI has been sufficiently trained to understand the ethical considerations involved in decision-making, strict boundaries must be in place to prevent the development of unruly AI systems. Developers must be restricted in what they can create with AI, and any product that cannot be held accountable should not be developed at all.

Anything that happens will require an incomprehensible amount of cooperation among startups, corporations, academia and government. As someone who has worked on both sides of the spectrum, I can say that making this happen will be complicated, and that's an understatement. We can't wait until a disaster happens and then search for a cure; the geopolitical events of the past signal as much.

And if you want to see what history teaches us, go and watch Oppenheimer.

Farabi Shayor is a two-time author, consultant and scientist recognised by the Science Council in the UK. A British resident of Bangladeshi heritage, he provides technology consulting services to both the public (government) and private sectors.
