In May this year, the ‘deepfake’ controversy took a grim turn and hit closer to home, when AI-generated morphed photos of a Class 9 student from a prominent public school in Bengaluru were circulated on an Instagram account. The parents lodged a complaint with the cyber crime cell. This incident raised concerns about the growing threat and damaging effects of deepfakes, particularly revenge porn, on young adults.
“Even as there were fears about deepfakes being used to subvert elections, it didn’t pan out that way. Of greater concern is that 95-96% of deepfakes are used for pornography,” says Jaspreet Bindra, founder of AI&Beyond and author of The Tech Whisperer.
In a survey conducted by global software company McAfee this year, 52% of respondents said they were concerned about deepfake-generated pornographic content. Additionally, 75% of Indians surveyed said that they had seen some deepfake content over the past year, and about 38% had even come across deepfake scams. Clearly, the issue is of concern among parents of young adults who want to protect their children from such incidents.
What are deepfakes?
Karan Saini, a cyber security expert, explains that “deepfake” is the colloquial term for face swapping in videos, powered by deep learning and the Generative Adversarial Network (GAN), a particular type of unsupervised neural network. The latter works on the basis of an adversarial relationship between its two core components: the generator, which generates text, images, etc., and the discriminator, which checks the generator’s work to see if it is coherent.
This can also involve voice cloning technology based on deep learning, where the subject’s mouth will appear to be in sync with the words spoken by the cloned voice.
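The adversarial tug-of-war between generator and discriminator described above can be sketched as a toy numerical experiment. The following is a minimal, hypothetical illustration in NumPy, not actual deepfake software: a one-dimensional “generator” learns to mimic a simple data distribution while a logistic “discriminator” tries to tell real samples from fakes. All parameters and the 1-D setup are illustrative assumptions.

```python
import numpy as np

# Toy GAN: the generator learns to turn noise from N(0, 1) into samples
# resembling "real" data drawn from N(4, 1). Purely illustrative.
rng = np.random.default_rng(0)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b (starts far from the real data)
w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w*x + c)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 1.0, n)   # genuine samples
    z = rng.normal(0.0, 1.0, n)      # noise fed to the generator
    fake = a * z + b                 # the generator's forgeries

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step: adjust a, b so d(fake) moves toward 1 (fool the critic)
    df = sigmoid(w * fake + c)
    gf = (df - 1) * w                # gradient of generator loss w.r.t. fake
    a -= lr * np.mean(gf * z)
    b -= lr * np.mean(gf)

print(f"generator mean shifted from 0.0 to {b:.2f}, real mean is 4.0")
```

After training, the generator’s output distribution has drifted toward the real one, which is the same mechanism, at vastly greater scale, that lets deepfake models produce convincing faces and voices.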
In layman’s terms, the result is fake videos, audio or images of real or non-existent people saying and doing things they never said or did. So if you have come across a recent video of Virat Kohli claiming that despite all the talent that junior fellow cricketer Shubman Gill may have, there’s none yet to match his own legacy, you have already seen a deepfake.
There may also be legitimate use cases of deepfakes such as for commercial games or film purposes. Yet, deepfake software is often marketed and sold as a tool for explicit non-consensual imagery, which refers to photos and videos that are sexual in nature, created and circulated without the consent of the individual.
The evolution of deepfakes
It is not just celebrities who fall prey to deepfake manipulation. Its proliferation over the years has been fast and furious and as the Delhi High Court rightly observed at a recent hearing, it is going to be a “serious menace” to society at large.
Jaspreet says that deepfakes began roughly in 2011, but with the democratisation of generative AI, they have become much more widely used.
Earlier, deepfakes used to be niche, and tools were not that easily accessible, says Karan. “One needed a lot of video footage to create a deepfake. But now, it requires less computing power; you only need a single photo from one angle. With the progress of technology, there is open source software where you can download tools. Access has also increased.”
Is prevention possible?
Unfortunately, as Karan explains, deepfakes are not something you can really protect yourself from.
“People aren’t falling into this on a mass scale yet. It is being used against individuals for perverse harassment and revenge porn, but in future, it might become profit-based, so one needs to be careful. It is more of a societal issue. The harasser uses the stigma around it, but if people remove the stigma then it will be easier to handle such cases.”
Jaspreet too agrees that this phenomenon is not something that can be easily controlled. “It is the same as the virus and anti-virus cycle.”
People can take some preventive measures, though these may not be foolproof. Keeping social media accounts private and sharing images or videos only with trusted contacts reduces the risk.
Both Karan and Jaspreet, however, say that the onus should not be on the individual; platforms need to be more vigilant. “Distribution of deepfakes is the problem. Parents can only monitor their children’s internet usage,” says Jaspreet.
Spotting a deepfake
Karan says that there are a few tell-tale signs of deepfakes, for example, the edges around the face could appear jagged, or there could be a mismatch in skin complexion. But with increasingly sophisticated technology, these can be edited; so it becomes difficult to detect them.
“There are no detection tools as such available online, but platforms such as Facebook and YouTube that host video content have detection methods for deepfakes.”
Deepfake videos targeting women in South Korea have become an issue of major concern. President Yoon Suk Yeol asked authorities to take action to eradicate the digital sex crime, as reported by the BBC.
Legal options
There is no single law to tackle non-consensual deepfake creation, and Jaspreet adds that the regulations are not strong enough. The only concrete way, as of now, to counter the negative effects of deepfakes is to create awareness. Jaspreet says that awareness should not be dismissed. “Its power cannot be underestimated. The eradication of polio and the decline in population growth were due to awareness.”
There are, however, provisions within the law that may be applied in cases of malicious use of deepfake videos or images. For example, Section 66E of the Information Technology Act, 2000 (IT Act) stipulates punishment of imprisonment of up to three years and a fine of up to Rs 2 lakh for violating an individual’s privacy by publishing or transmitting the image of a private area without their consent.
Similarly, Sections 67, 67A and 67B of the IT Act prohibit and prescribe punishments for publishing or transmitting, in electronic form, obscene material comprising sexually explicit acts, including material depicting children in such acts.
Aggrieved individuals can file an FIR at the nearest police station and seek remedies under the IT Act and its applicable rules.
The Ministry of Electronics and Information Technology in its advisory, dated November 7, 2023, directed the relevant social media intermediaries to remove any deepfake content within 36 hours of such reporting.
Taking cognizance of deepfakes, last year, the Bengaluru city police issued a special helpline number 1930 for reporting deepfake cases.