Category: COVER STORY

In our cover story, Maliha explores how deepfakes work
and the risks and consequences they pose,
particularly for women.

Credit: DW

Deepfakes have been around since 2017. Deepfake AI is a type of artificial intelligence used to create convincing image, audio and video hoaxes. The term, a portmanteau of "deep learning" and "fake", describes both the technology and the resulting bogus content.
Deepfakes often transform existing source content, swapping one person for another. They can also create entirely original content in which someone is shown doing or saying something they never did or said. Fast forward to 2023, and AI tools allow semi-skilled and even unskilled individuals to create fake content with morphed audio-visual clips and images. AI software, which can easily be purchased online, can create videos in a matter of minutes, and subscriptions start at just a few dollars a month.
Researchers have observed a 230% increase in deepfake usage by cybercriminals and scammers, and predict the technology could replace phishing within a couple of years. The amount of deepfake content online is growing rapidly. At the beginning of 2019, there were 7,964 deepfake videos online, according to a report from the startup Deep Trace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then.

Deepfakes also have the power to make people believe they have seen something that never actually happened, thanks to the highly realistic nature of the technology. This can ultimately cause people to misremember an untrue scenario as fact. Deepfakes can contribute to this Mandela Effect in several dangerous ways: creating fake news stories, swaying political opinion, driving propaganda campaigns, altering historical footage, manipulating social media content, fabricating scientific evidence, and creating false alibis. Beyond photo and video morphing, deepfakes have also been used to incite political violence, sabotage elections, unsettle diplomatic relations, and spread misinformation. The technology can also be used to humiliate and blackmail people, or to attack organisations by presenting false evidence against leaders and public figures.

Deepfakes have now become a gender issue, as women face greater dangers from them. The technology poses a serious threat to women's privacy on social media.
Just recently, a deepfake video featuring Indian actress Rashmika Mandanna surfaced, adding to the list of celebrities who have fallen victim to such manipulated content. On Monday, the video went viral on X, formerly Twitter, and multiple other social media platforms. In it, her face had been superimposed onto the body of Zara Patel, a British-Indian woman. Other women, influential and otherwise, have also felt threatened by the emerging technology.
Mandanna tweeted, “I feel hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly extremely scary not only for me but also for each one of us who today is vulnerable to so much harm because of how technology is being misused. Today, as a woman and as an actor, I am thankful for my family, friends and well wishers who are my protection and support system. But if this happened to me when I was in school or college, I genuinely can’t imagine how I could ever tackle this. We need to address this as a community and with urgency before more of us are affected by such identity theft.”
Morphed photos and videos of women, especially famous women, being circulated online aren’t a new phenomenon. They have existed since the advent of the internet. With AI-based tools, what has changed is the ease, speed and finesse with which a layperson can make realistic deepfakes which both look and sound genuine.
Deepfakes are fast becoming a problem and are used by threat actors to spread misinformation online. However, there are laws which can be invoked to deter threat actors from creating deepfake videos. India’s IT Rules, 2021 require that all content reported to be fake or produced using deepfakes be taken down by intermediary platforms within 36 hours. Since the deepfake videos of Rashmika Mandanna went viral, the Indian IT ministry has also issued notices to social media platforms stating that impersonating someone online is illegal under Section 66D of the Information Technology Act, 2000. The IT Rules, 2021, also prohibit hosting any content that impersonates another person and require social media firms to take down artificially morphed images when alerted.
It’s worth noting that deepfake technology is becoming increasingly sophisticated, making it more challenging to detect fake content. This underscores the importance of developing better detection methods and using AI ethically. As technology continues to advance, it is essential to stay vigilant and cautious when consuming media.

How to spot deepfakes:

(Credit: Spiceworks)

To identify deepfake videos, pay attention to visual and audio inconsistencies, along with other telltale signs:

Facial expressions and anomalies:

Look for unnatural facial expressions, mismatched lip-sync, or irregular blinking.

Audio discrepancies:

Listen carefully for shifts in tone, pitch, or unnatural speech patterns when in doubt about a video’s authenticity.

Visual inconsistencies:

Examine the video for visual distortions, blurring, or inconsistent lighting. Consider whether the person in the video could realistically be in that setting.

Context and content:

Analyse whether the behaviour or statements in the video align with the individual’s known characteristics.

Verify the source:

Confirm the credibility of the media by checking its source. Is it coming from a reliable and reputable source, or is it being posted by a random YouTuber or a social media account? Such accounts may raise suspicions.
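Some of these cues can even be checked programmatically. Research on early deepfakes found that synthesised faces often blinked far less than real people do. As a purely illustrative sketch (not a production detector), the toy function below flags a clip whose blink rate falls outside a plausible human range; the blink count is assumed to come from an upstream eye-landmark detector, and the 8–30 blinks-per-minute range is an assumption chosen for this example:

```python
def blink_rate_suspicious(blink_count, total_frames, fps,
                          normal_range=(8.0, 30.0)):
    """Flag a clip whose blink rate falls outside a plausible human range.

    blink_count:  number of blinks detected in the clip (assume an
                  upstream eye-landmark detector supplies this count).
    total_frames: length of the clip in frames.
    fps:          frames per second of the clip.
    Returns True if the blinks-per-minute rate looks unnatural.
    """
    minutes = total_frames / fps / 60.0
    blinks_per_minute = blink_count / minutes
    low, high = normal_range
    return not (low <= blinks_per_minute <= high)

# Example: a 60-second clip at 30 fps with only 2 detected blinks
# (2 blinks per minute) is flagged as suspicious.
print(blink_rate_suspicious(blink_count=2, total_frames=1800, fps=30))  # True
```

A single heuristic like this is easy to fool, which is why real detection tools combine many such signals, along with the visual, audio and contextual checks listed above.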
According to cybersecurity experts, the government should amend existing laws to specifically address the unique challenges posed by deepfakes, and should support the development of more sophisticated detection tools that can be used by both the authorities and the public.
With few laws to manage the spread of the technology, disinformation experts have long warned that deepfake videos could further sever people’s ability to discern reality from forgeries online.
As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.”
In terms of psychological impact, such acts cause fear amongst women, and this kind of online violence drives women off online platforms. Targeting women in this manner not only affects mental health through emotional and psychological stress; it can also have an economic impact, causing women to lose their jobs because of perceived reputational harm.
Deepfakes can be weaponised for character assassination and to tarnish reputations. By convincingly superimposing an individual’s face onto fabricated situations, malicious actors can craft misleading narratives that may damage personal and professional relationships. Women in positions of influence or public life are especially susceptible to such attacks, as the repercussions extend beyond personal trauma to societal consequences.
As we navigate the ever-evolving landscape of technology, the protection of women’s privacy must remain a top priority. Deepfake technology, if left unchecked, has the potential to inflict irreparable damage on the lives of countless women. It is our collective responsibility to advocate for change, demand legislative action, and work towards creating a digital environment where privacy is respected, and the dignity of individuals, especially women, is upheld. Only through concerted efforts can we hope to preserve the sanctity of privacy in the face of this formidable technological challenge.

Addressing the impact of deepfake technology on women’s privacy requires a multi-faceted approach:

Legislation and Regulation:

Governments and legal bodies must work collaboratively to enact and enforce stringent laws specifically targeting deepfake creation and distribution. These laws should carry severe penalties for offenders, serving as a deterrent against the malicious use of this technology.

Technology Countermeasures:

Investing in technology to detect and combat deepfakes is crucial. Advancements in AI-driven tools for deepfake detection can help social media platforms, websites, and law enforcement agencies identify and remove such content promptly.

Media Literacy Programs:

Educating the public, especially women, about the existence and potential risks of deepfake technology is paramount. Media literacy programs can empower individuals to recognise and report fake content, reducing the spread of harmful narratives.

Responsibility of Digital Platforms:

Social media and content-sharing platforms must take proactive measures to detect and remove deepfake content swiftly.

