What you need to know to combat the deepfake menace

Rising use of deepfake technology in revenge porn creates serious concerns about how to tackle the beast. Awareness could be the key.

In May this year, the ‘deepfake’ controversy took a grim turn and hit closer to home, when AI-generated morphed photos of a Class 9 student from a prominent public school in Bengaluru were circulated on an Instagram account. The parents lodged a complaint with the cyber crime cell. The incident raised concerns about the growing threat and damaging effects of deepfakes, particularly revenge porn, on young adults.

“Even as there were fears about deepfakes being used to subvert elections, it didn’t pan out that way. Of greater concern is that 95-96% of deepfakes are used for pornography,” says Jaspreet Bindra, founder of AI&Beyond and author of The Tech Whisperer.

In a survey conducted by global software company McAfee this year, 52% of respondents said they were concerned about deepfake-generated pornographic content. Additionally, 75% of Indians surveyed said that they had seen some deepfake content over the past year, and about 38% had even come across deepfake scams. Clearly, the issue worries parents of young adults who want to protect their children from such incidents.

What are deepfakes?

Karan Saini, a cyber security expert, explains that “deepfake” is the colloquial term for face swapping in videos, powered by deep learning (a branch of machine learning) and the Generative Adversarial Network (GAN), a particular type of unsupervised neural network. A GAN works on the basis of an adversarial relationship between its two core components: the generator, which produces text, images and other content, and the discriminator, which checks the generator’s work to see if it looks real.

This can also involve voice cloning technology based on deep learning, where the subject’s mouth will appear to be in sync with the words spoken by the cloned voice.
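The adversarial push-and-pull described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not a real GAN: the discriminator here is a fixed scoring function rather than a second learning network, and the “generator” is just a number being nudged. It only illustrates the core idea that the generator keeps adjusting its output until the judge can no longer tell it apart from real data.

```python
import random

random.seed(0)

# "Real" data: samples clustered around 5.0
real_data = [random.gauss(5.0, 0.5) for _ in range(200)]
real_mean = sum(real_data) / len(real_data)

def discriminator(sample):
    """Scores how 'real' a sample looks: the closer it sits to the
    real data's mean, the higher the score. (In an actual GAN this
    judge is itself a neural network that learns alongside the
    generator; here it is a fixed stand-in for illustration.)"""
    return -abs(sample - real_mean)

# The generator starts far from the real distribution...
gen_value = 0.0

# ...and repeatedly proposes small tweaks to its output, keeping
# any tweak that the discriminator scores as more "real".
for _ in range(1000):
    proposal = gen_value + random.gauss(0, 0.1)
    if discriminator(proposal) > discriminator(gen_value):
        gen_value = proposal

# After enough rounds, the generator's output is nearly
# indistinguishable (to this discriminator) from the real data.
print(round(gen_value, 2))
```

In a real deepfake pipeline, both components are deep neural networks trained together on large image datasets, which is what lets the generator eventually produce faces the discriminator cannot flag as fake.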

In layman’s terms, the result is fake videos, audio or images of real or non-existent people saying and doing things they never said or did. So if you have come across a recent video of Virat Kohli claiming that, for all his talent, junior cricketer Shubman Gill is yet to match his legacy, you have already seen a deepfake.

There are also legitimate use cases for deepfakes, such as in gaming or film production. Yet deepfake software is often marketed and sold as a tool for creating explicit non-consensual imagery, that is, photos and videos of a sexual nature, created and circulated without the consent of the individual depicted.

The evolution of deepfakes

It is not just celebrities who fall prey to deepfake manipulation. Its proliferation over the years has been fast and furious and as the Delhi High Court rightly observed at a recent hearing, it is going to be a “serious menace” to society at large.

Jaspreet says that deepfakes emerged around 2011, but with the democratisation of generative AI, they have become far more widespread.

Earlier, deepfakes used to be niche, and tools were not that easily accessible, says Karan. “One needed a lot of video footage to create a deepfake. But now, it requires less computing power; you only need a single photo from one angle. With the progress of technology, there is open source software from which you can download tools. Access has also increased.”

Is prevention possible?

Unfortunately, as Karan explains, deepfakes are not something you can really protect yourself from.

“People aren’t falling into this on a mass scale yet. It is being used against individuals for perverse harassment and revenge porn, but in future, it might become profit-based, so one needs to be careful. It is more of a societal issue. The harasser uses the stigma around it, but if people remove the stigma then it will be easier to handle such cases.”

Jaspreet too agrees that this phenomenon is not something that can be easily controlled. “It is the same as a virus-anti-virus cycle.”

People can take some preventive measures, though these may not be foolproof. Keeping social media accounts private and sharing images or videos only with trusted contacts reduces the risk.

Both Karan and Jaspreet, however, say that the onus should not be on the individual; platforms need to be more vigilant. “Distribution of deepfakes is the problem. Parents can only monitor their children’s internet usage,” says Jaspreet.

Spotting a deepfake

Karan says that there are a few tell-tale signs of a deepfake: the edges around the face could appear jagged, or there could be a mismatch in skin complexion. But with increasingly sophisticated technology, these flaws can be edited out, making deepfakes difficult to detect.

“There are no detection tools as such available online, but platforms like Facebook and YouTube that host video content have detection methods for deepfakes.”

Deepfake videos targeting women in South Korea have become an issue of major concern. President Yoon Suk Yeol asked authorities to take action to eradicate such digital sex crimes, as reported by the BBC.

There is no single law to tackle non-consensual deepfake creation, and Jaspreet adds that existing regulations are not strong enough. The only concrete way, as of now, to counter the negative effects of deepfakes is to create awareness. Jaspreet says that awareness should not be dismissed. “Its power cannot be underestimated. The eradication of polio and the decline in population growth were due to awareness.”

There are, however, provisions within the law that may be applied in cases of malicious use of deepfake videos or images. For example, Section 66E of the Information Technology Act, 2000 (IT Act) prescribes imprisonment of up to three years or a fine of up to Rs 2 lakh for violating an individual’s privacy by publishing or transmitting the image of a private area without their consent.

Similarly, Sections 67, 67A and 67B of the IT Act prohibit and prescribe punishments for publishing or transmitting, in electronic form, obscene material, sexually explicit material, and material depicting children in sexually explicit acts.

Aggrieved individuals can file an FIR at the nearest police station and seek remedies under the IT Act as applicable.

The Ministry of Electronics and Information Technology, in its advisory dated November 7, 2023, directed social media intermediaries to remove any reported deepfake content within 36 hours.

Taking cognizance of deepfakes, last year, the Bengaluru city police issued a special helpline number 1930 for reporting deepfake cases.
