Columns

When seeing is not believing


Besides amending existing cybersecurity laws, Bangladesh could take lessons from advanced countries that are trying to regulate AI usage, writes Syed Tashfin Chowdhury

 

As Artificial Intelligence-based software and tools continue to take over many aspects of professional and personal life, AI experts, as well as authorities in various advanced countries, have raised concerns about the ethical use of AI. A niche area within this broad subject is media manipulation: specifically, the manipulation of images, videos, and audio clips. Unscrupulous individuals with knowledge of AI tools develop and spread doctored or fabricated material through social media platforms and other forums.

As AI continues to progress at a rapid pace, these techniques are also evolving. It is imperative for relevant government departments to understand the tools and mechanisms at play so that they can be brought under the purview of cybersecurity laws and effective steps can be taken against perpetrators.

AI-GENERATED IMAGES: Around the end of last year, photos of a popular Bangladeshi film actor went viral on social media. Some users shared the content without a second thought. Soon, she had to declare that the photos were AI-generated: someone had taken her photos and used AI to create new images, which showed her scantily clad on a rooftop.

This can be done quite easily using at least four or five different tools, including Midjourney and DALL-E. These can be used to add, remove, or change elements in an actual photo. Photoshop's Generative Fill also uses AI to edit existing photos.

As a result, it becomes difficult for anyone seeing such images online to tell whether they are real or doctored. This leads to the spread of misinformation, causing panic and resulting in financial and emotional disruptions.

Earlier this week, AI-generated images of wildfires in British Columbia caused panic among local communities in Canada. The B.C. Wildfire Service quickly urged concerned families and individuals to download their app, sign up for alerts, and trust only credible media sources.

DEEPFAKES: One of the most baffling stories of deepfake deception is that of internet sensation Babydoll Archi from India. At one point this year, Archi's Instagram account had over 1.4 million followers.

The reason behind her popularity was quite obvious. The account posted videos and images of Archi. In one video, she was dancing to a popular song; in another, she appeared to be standing with an American porn star. This led to rumours that Archi might be joining the American porn industry.

In reality, Archi never existed. Her likeness was created using AI, based on images of a homemaker from Assam. This woman's ex-boyfriend, Pratim Bora, used her images after they had broken up in 2020.

Bora, an artificial intelligence enthusiast, created the Babydoll Archi profile. The first uploads to the account were made in 2021, with more content posted in 2023 and onwards.

Initially, the victim's family was unaware. Later, after they found out, her brother filed a complaint with the police, who arrested Bora. Senior police officer Sizal Agarwal told the BBC last month that Bora used tools such as ChatGPT and Dzine to create an AI version of Babydoll Archi. He then populated the handle with deepfake photos and videos. Though the account started picking up likes last year, it began gaining traction from April this year, Agarwal added.

For deepfake videos, AI models, especially deep learning techniques such as Generative Adversarial Networks (GANs), are trained on numerous images or videos of a person. The AI learns how the person's face moves and how their voice sounds, and this likeness can then be overlaid onto another video.

Besides being used to exact revenge on ex-girlfriends and boyfriends, deepfakes have recently been used in many countries ahead of elections to confuse voters and sway public opinion.

FACE SWAPPING: You've probably come across videos of influencers enacting popular or funny lines from politicians, actors, and other public figures on their social media platforms. In those videos, their faces look almost exactly like the person they're imitating. The feature is available on TikTok and Snapchat as filters: AI maps facial features and blends them seamlessly. Though influencers use this to entertain followers, the same technology can easily be used to vilify a person and falsely blame them for something they haven't said or done.

Just this year, a scammer used live face-swapping technology during video calls with an older woman. The scammer impersonated a younger, attractive man and tricked the woman into a romantic relationship.

In China, a businessman was tricked into transferring over 4.3 million yuan (approx. $622,000) during a video call with someone he thought was his associate. The associate's face had been swapped by scammers using facial landmark detection and lip-syncing to make the fake face look natural during the Zoom call. The victim only realised the fraud after the real associate informed him that no such conversation had taken place.

VOICE CLONING / AI VOICE SYNTHESIS: Recently, there have been cases of businessmen receiving phone calls from supposed politicians, senior police officials, and others demanding money. The voices sound exactly like the real person's; even the mannerisms are the same.

AI models are trained on audio recordings of a person's voice. Once trained, the model can generate speech in that voice from any text. It's that simple for tech-savvy scammers and criminals to make money these days!

TEXT-TO-IMAGE AND SYNTHETIC MEDIA: You may have come across videos on Facebook where a news reporter is reporting from an unknown location about potholes on Dhaka's streets, while in the background a motorcycle rider speeds toward a pothole and plunges into the water.

Such videos or images can be generated from written descriptions.

Synthetic media is slightly more advanced. AI enthusiasts can combine their knowledge of voice cloning, deepfakes, and AI-generated images to make a whole situation look believable.

There have been numerous such incidents in recent times. One that comes to mind is the audio clip of Keir Starmer, then the UK's opposition leader and now its prime minister, verbally abusing his staff. The clip was viewed over 1.5 million times in 2023 and is suspected to have been synthetic media created for political manipulation.

SOCKPUPPETS: 'Sockpuppets' are fake online identities, increasingly created and operated with AI, and used for various illicit gains. For example, it is widely believed that thousands of sockpuppet accounts were created ahead of the 2016 US election. These accounts, posing as ordinary Americans, spread divisive content to influence voters.

Our government departments need to catch up with such AI tools and how fast they are evolving. Besides amending existing cybersecurity laws, Bangladesh could take lessons from advanced countries that are trying to regulate AI usage. For instance, Denmark has proposed new deepfake legislation as part of its digital copyright law. It aims to protect individuals' rights from the impact of AI-generated deepfakes, including both deepfakes of artists' performances and the characteristics of natural persons. Though the proposed amendment to the Copyright Act is likely to affect enterprises' use of the technology, it will also offer protection against misuse.

 

Syed Tashfin Chowdhury is a communications professional. tashfinster@gmail.com
