Index 

•    Synopsis
•    Introduction

  • Understanding Deepfakes and how they work
  • The growing threat of deepfake tech

•    Legal Grey Area

  • Why aren’t they explicitly illegal yet?
  • The challenges in defining and regulating AI-generated deepfakes

•    Forgery Laws and Deepfakes

  • Can existing forgery laws be applied to deepfake content?
  • Limitations of traditional legal frameworks

•    Liability of AI Developers and Platforms

  • Should tech companies be held accountable?
  • The role of AI developers and posting platforms in curbing misinformation

•    Deepfakes under the IT Act, 2000

  • Examining Sections 66D, 66E and 67
  • Are these provisions sufficient to address the harm?

•    Deepfakes and Electoral Laws

  • The role of misinformation in influencing voters
  • Challenges in detecting and preventing election-related deepfakes

•    Ownership of Face and Voice
•    Future Amendments and Legal Developments
•    Deepfakes in International Criminal Law
•    Psychological Risks and Fake Memories
•    Solutions and Preventive Measures
•    AI as a Solution
•    Global Comparison and Best Practices
•    Towards a Deepfake-Specific Legislation for India
•    Landmark Cases
•    FAQs
•    Conclusion

Synopsis 

Deepfake technology, powered by artificial intelligence, has revolutionised digital content creation on one hand, but on the other it poses serious ethical and legal challenges. From political misinformation and identity theft to defamation and fraud, deepfakes threaten individuals and institutions alike. Despite the rise in their misuse, India lacks a specific legal framework to combat them and instead relies on outdated cyber and forgery laws. This article explores the grey areas surrounding deepfake technology, its possible impact on elections, the data protection concerns it raises, the liability of tech companies, and the amendments that may be needed in Indian law. It also compares global regulatory approaches to highlight best practices around the world and how India could adopt them to curb deepfake-related crimes.

Introduction 

To give a brief introduction to the technology: deepfakes are synthetic media, typically videos, images or audio, created with the help of artificial intelligence to manipulate or replace a person’s appearance, voice or actions. The term “deepfake” comes from “deep learning”, a subset of AI that uses neural networks to analyse and re-create realistic-looking content.

This technology has many uses, including entertainment, satire and education. However, every upside has a downside, and this useful technology also raises serious ethical concerns, as it can be exploited to spread misinformation, steal identities, manipulate politics and facilitate other cybercrimes.

Deepfakes have emerged as a powerful yet dangerous tool in our digital age. While the technology offers many creative and innovative applications, its misuse raises concerns about misinformation, fraud, identity theft and various other cybercrimes.

In India, there is no legislation that directly regulates deepfake technology, which leaves victims with limited legal recourse. Existing laws such as the IT Act, the IPC and data protection laws provide partial remedies but fail to comprehensively address deepfake-related crimes.

As the technology advances, the potential for deepfakes to distort reality and influence public perception grows, making it imperative to explore further legal solutions and preventive measures to combat this digital threat.

Legal Grey Area

Despite the dangers posed by this technology, India lacks a dedicated legislative framework to address them. Current laws, including those on defamation, forgery and cybercrime, provide only limited protection against deepfake-related harm. The absence of a clear set of rules and regulations makes it difficult to prosecute offenders effectively.

Law enforcement agencies often struggle to categorise deepfake crimes, as they do not always fit neatly into existing legal definitions. In addition, concerns over free speech and censorship complicate any potential legal response.

Without explicit legal provisions, victims of deepfake misuse face challenges in obtaining justice. The urgent need for deepfake-specific laws is now evident, as AI-generated content continues to evolve.

Why aren’t they explicitly illegal yet?
Deepfakes are not explicitly illegal in most countries for several reasons. One is that legal frameworks lag behind technology, and new laws take time to adapt to technological change. Deepfake technology has evolved so rapidly that lawmakers find it difficult to create specific regulation in real time.

Another reason for the delay is that legitimate uses of the technology do exist. Deepfakes have applications in entertainment, education and accessibility, for example in the film industry or in AI-generated voices for disabled individuals. A broad ban could hinder innovation and the beneficial uses this technology provides.

A further reason is that existing laws already cover some misuse. Many jurisdictions rely on laws against defamation, fraud, harassment and copyright infringement to address deepfake-related crimes instead of creating deepfake-specific legislation.

Another possible reason is the challenge of enforcement: identifying and proving malicious deepfakes is difficult, as they can be created and distributed covertly, so legislators may hesitate to criminalise deepfakes without clear enforcement mechanisms in place.

Finally, there is the factor of balancing free speech and regulation. In democratic countries, overly broad deepfake laws risk infringing on free speech and creative expression, which makes governments cautious about strict regulation.

Nevertheless, some countries are starting to introduce deepfake-specific laws. For example, the United States has laws targeting deepfake pornography and election-related manipulation, and China has regulations requiring clear labelling of AI-generated content. More nations are expected to follow suit as deepfakes become a larger societal concern.

The challenges in defining and regulating AI-generated deepfakes
Defining and regulating this technology is difficult for several key reasons, one of them being definitional ambiguity. What qualifies as a deepfake? Distinguishing between harmless AI-generated content (like movie special effects) and malicious deepfakes (for example, fake political speeches) is difficult. Intent matters: a deepfake created for satire is different from one used for fraud or defamation, and crafting a legal definition that accounts for intent and harm is complex.

Another key factor is rapid technological advancement. Deepfake technology is evolving fast while legal frameworks move slowly, which makes laws outdated quickly. AI-generated content is becoming more sophisticated and harder to detect, complicating enforcement.

Enforcement challenges include anonymity and global reach. Deepfakes can be generated and spread across borders anonymously, making it hard to track offenders, and identifying the source of a deepfake requires advanced forensic tools that are not always accessible to law enforcement.

Deepfakes also cut across multiple legal domains, such as privacy, defamation, cybercrime, intellectual property infringement and national security, which makes regulation a multifaceted challenge. Some cases can be addressed under existing laws, while others may need entirely new legislation.

Overly strict laws could stifle artistic and journalistic applications of AI-generated content. Some deepfakes, like parodies or historical recreations, are protected as free speech in many jurisdictions.

AI-generated content is often indistinguishable from real content, making it hard for viewers to tell which is which. Watermarking and detection tools exist, but they are not foolproof, and determined bad actors can remove or bypass them.
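As a toy illustration of why such watermarks are fragile, the hedged sketch below (pure Python, with hypothetical pixel values standing in for a real image) hides a short provenance tag in the least-significant bits of pixel bytes. Note how simply clearing those bits erases the mark, which is exactly the bypass problem described above; real provenance schemes rely on far more robust signed metadata.

```python
# Toy least-significant-bit (LSB) watermark: hides a provenance tag in the
# lowest bit of each "pixel" byte. This only illustrates the concept; a
# single re-encode or bit-clearing pass destroys the mark.

def embed_watermark(pixels: list[int], tag: bytes) -> list[int]:
    # Flatten the tag into bits, LSB-first per byte.
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract_watermark(pixels: list[int], tag_len: int) -> bytes:
    bits = [p & 1 for p in pixels[: tag_len * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(tag_len)
    )

pixels = list(range(200, 256)) * 4          # stand-in for image data
marked = embed_watermark(pixels, b"AI-GEN")
assert extract_watermark(marked, 6) == b"AI-GEN"

# A "bad actor" clearing the low bits wipes the watermark entirely.
stripped = [p & 0xFE for p in marked]
assert extract_watermark(stripped, 6) != b"AI-GEN"
```

The fragility shown in the last two lines is why watermarking alone is not considered a sufficient regulatory answer.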

There are also gaps in international regulation. With no global consensus on deepfake laws, rules are inconsistent across countries, and some states even use deepfake technology for propaganda, making international cooperation all the more difficult.

Forgery Laws and Deepfakes

Forgery laws in India, such as Sections 464 and 465 of the Indian Penal Code, criminalise the creation of false documents with intent to deceive. However, applying these laws to deepfake crimes remains complex. Unlike physical documents, deepfake technology manipulates digital media and blurs the line between reality and fabrication.

Courts may struggle to decide whether a deepfake constitutes a false document under current definitions. Moreover, deepfakes do not always result in tangible harm, which also makes prosecution difficult. While some cases could be covered under cyber fraud or identity theft laws, there is still a pressing need to amend existing forgery laws to explicitly include AI-generated deceptive content.

Can existing forgery laws be applied to deepfake content?
Existing forgery laws can sometimes apply to deepfake content, but their effectiveness depends on the type of deepfake, its intent and the legal framework of the given jurisdiction. There are situations where forgery laws may apply to a deepfake-related crime and situations where they will not. Let’s start with applicability.

First, fake documents and signatures: if a deepfake alters official documents, contracts or IDs (such as AI-generated fake signatures or manipulated identity proofs), existing forgery laws prohibiting the falsification of documents may apply.

Secondly, in cases of financial and corporate fraud, deepfake videos or audio used to impersonate executives (for example, CEO fraud scams) or to obtain financial approvals can be prosecuted under fraud and forgery laws.

Thirdly, misrepresentation in legal matters: if a deepfake is used to create false evidence in court proceedings, it could be treated as forgery or perjury under laws criminalising the fabrication of evidence.

Fourthly, identity theft and deception: AI-generated deepfake content used to impersonate a person for deceptive gain (for example, fake customer service calls or AI-generated ransom demands) will fall under identity theft or fraud laws, which often contain forgery-related provisions.

Limitations of traditional legal frameworks
The limitations of traditional legal frameworks appear where forgery laws may not be enough. First, defamation and reputational damage: many deepfakes target individuals, but defamation laws, not traditional forgery laws, are better suited to such cases. Secondly, political misinformation: a deepfake designed to spread false political information, for example a fake speech by a political leader, may not fit under forgery laws, especially if it does not involve falsified documents. Thirdly, satire and parody: some deepfakes are created primarily for entertainment, parody or satire; forgery laws generally require an intent to deceive or cause harm, so they may not apply to harmless artistic expression. Fourthly, lack of tangible forgery: traditional forgery laws focus on falsified physical or digital documents, while deepfake videos and audio files do not always fit neatly into those legal brackets. Therefore, while forgery laws can address some deepfake-related crimes, they are not a comprehensive solution, and many jurisdictions are introducing new deepfake-specific laws to cover the gaps.

Liability of AI Developers and Platforms

The role of AI developers and social media platforms in deepfake proliferation is controversial. Should tech companies be held responsible for the misuse of their technology? AI developers argue that their innovations have legitimate uses, while critics demand stricter oversight. Platforms hosting deepfake content, such as social media sites, face scrutiny for not implementing stronger detection measures. While Section 79 of the IT Act gives intermediaries a “safe harbour” defence, there are growing calls for tighter, more stringent regulation. Several European countries have introduced AI liability rules, prompting India to consider similar measures to tackle deepfake-related harm.

Should tech companies be held accountable?
There are arguments both for and against holding tech companies accountable. Those in favour argue that these companies provide the tools: many deepfakes are generated and deployed using AI companies’ products, and if those tools are misused for harmful purposes like misinformation or fraud, the companies may have to bear some responsibility.

Secondly, there is content moderation responsibility. Social media and technology platforms serve as the primary distribution channels for deepfakes; if they fail to detect and prevent harmful deepfake content, they could be seen as facilitating its spread. Just as tech companies are held accountable for hate speech, child exploitation content and data privacy violations, they can be expected to regulate deepfake misuse or be held accountable for failing to do so.

The role of AI developers and posting platforms in curbing misinformation
Companies that develop deepfake technologies can implement safeguards such as content watermarking, detection algorithms and ethical AI guidelines to prevent misuse. Their failure to do so can be seen as negligence: if deepfakes cause serious harm, for example by interfering with elections or enabling financial fraud, tech companies can face lawsuits or regulatory action for failing to mitigate the risks.
These companies must also collaborate with governments and researchers to ensure transparency, accountability and user awareness. A balanced approach, innovation with responsibility, is the key to mitigating the spread of deceptive AI-generated content while preserving technological progress.

Deepfakes Under the IT Act, 2000

India’s IT Act, 2000 contains provisions that can apply to deepfakes, though their effectiveness is debatable. Section 66D penalises impersonation through electronic communication, Section 66E protects privacy by criminalising the unauthorised capture of images, and Section 67 regulates obscene content. However, these laws fail to address the broader implications of deepfakes, such as political misinformation or reputational damage. The IT Act lacks a direct provision criminalising AI-generated deceptive content, which leaves a legal loophole. Strengthening the IT Act or introducing new laws explicitly targeting deepfakes could help India tackle this growing digital threat far more effectively.

Examining Sections 66D, 66E and 67

While Sections 66D, 66E and 67 of the IT Act, 2000 provide some legal framework for handling deepfake-related crimes, they are not sufficient to address the whole spectrum of harm that deepfakes can cause.
Section 66D can help in cases of impersonation using IT resources: it can be invoked against financial frauds and scams involving deepfakes, such as AI-generated impersonation for extortion or fraud. Section 66E can help in cases of privacy violation: it covers deepfake pornography and the unauthorised use of private images, offering victims legal recourse. Section 67 can help with obscene content: it criminalises the creation and distribution of explicit AI-generated material, which is useful in cases of non-consensual deepfake pornography.

Are these provisions sufficient to address the harm?

While the above-mentioned sections may suffice for those particular cases, they fall short in other scenarios. First, they do not directly recognise deepfake technology: the sections do not explicitly define deepfake-related offences, which makes prosecution more difficult, and the existing laws were not drafted with AI-generated content in mind, leading to gaps in enforcement. Secondly, misinformation and political manipulation are not fully covered: deepfakes used for political propaganda, misinformation or defamation (for example, a faked politician’s speech) may not fit neatly under Sections 66D, 66E or 67, and there is no clear provision for AI-driven misinformation except under broader laws like IPC Section 505, which deals with statements creating public mischief. Thirdly, the provisions lack consent and harm-mitigation mechanisms: no clear legal mechanism exists for victims to demand immediate takedowns or compensation for damage caused by deepfakes, and no specific penalties apply to AI developers or platforms that fail to prevent misuse.

Deepfakes and Electoral Laws

Deepfakes have the potential to manipulate public opinion and disrupt democratic processes. In elections, doctored videos can misrepresent political figures and spread false narratives that influence voter behaviour. While Indian electoral laws, such as the Representation of the People Act, 1951, prohibit misinformation, they do not specifically address AI-generated fakes. The Election Commission of India has issued guidelines against misinformation, but enforcement remains a challenge. Countries like the United States and China have imposed bans on deepfake election interference, and India must consider similar legislative safeguards to protect its democratic integrity from deepfake-driven political propaganda.

The role of misinformation in influencing voters 
Misinformation plays a key role in shaping voter perceptions, decisions and election outcomes, often undermining democratic processes. It spreads through social media, deepfake technology and biased news sources, influencing public opinion in multiple ways.

Public perception is manipulated by false narratives about candidates or their policies, which can distort reality by swaying voter preferences based on misinformation rather than actual facts.

Character assassination is another facet of the misinformation spectrum: fake news can falsely place politicians in controversial situations, damaging their reputation and affecting their electoral chances.

Misinformation about voting procedures, for example false election dates or incorrect eligibility rules, can create confusion or even discourage voters, impacting participation and suppressing turnout.

Fake news also deepens social and political divides, reinforcing the ideological bubbles voters live in, where they consume only biased and misleading content. This creates polarisation and division while amplifying social and political differences.

Governments and external entities often use misinformation campaigns to manipulate elections, as seen in the 2016 US election and the Brexit referendum.

AI-generated videos can falsely portray a leader making statements they never made, misleading voters and altering the political discourse around that individual or party.

Misinformation continues to threaten democracy by distorting informed decision-making, making regulatory and public awareness efforts more essential than ever.

Challenges in detecting and preventing election-related deepfakes

The rise of AI-generated deepfakes poses a prominent threat to elections by manipulating public opinion and spreading disinformation, undermining public trust in democratic institutions. Detecting and preventing such content is challenging for several reasons.

Detection is the first challenge, as advanced AI makes it very difficult. Modern deepfake technology can produce highly realistic videos, making it hard to differentiate real from fake content. Robust detection tools are lacking; deepfake detection lags behind AI generation, and the tools at our disposal often struggle with real-time identification. Disguised and low-quality deepfakes also complicate detection: some use subtle manipulations, such as audio-only edits or minor facial tweaks, to avoid being detected.

Prevention is also a key challenge in this era of rapid spread via social media: deepfakes can go viral before they are even detected, misleading millions before the content is taken down. The anonymity of malicious actors makes prevention harder still; many deepfake creators use VPNs, encrypted channels or fake identities, making them difficult to trace and penalise. Many countries still lack specific laws against political deepfakes, and this limited legal framework makes enforcement inconsistent and reactive rather than preventive. Another important factor is the lack of public awareness: voters often lack the media literacy to identify deepfakes, making them more susceptible to deception.

Ownership of Face and Voice

Who owns your face and voice in the digital age? Deepfakes raise critical questions about data protection and consent. The Digital Personal Data Protection Act, 2023 aims to safeguard personal data, but it does not explicitly cover deepfake-related identity misuse. Unauthorised use of someone’s likeness can lead to reputational and financial harm, yet victims often lack legal recourse.

Some jurisdictions recognise personality rights and grant individuals control over their image and voice. India may need to introduce similar provisions to ensure that individuals have the right to consent before their likeness is used in AI-generated content.

Future Amendments and Legal Developments

As deepfakes become more advanced, legal frameworks must evolve alongside them. Policymakers worldwide are considering amendments to address AI-generated misinformation. India could introduce laws that mandate watermarking of AI-generated content, criminalise malicious deepfakes and impose liability on platforms that fail to regulate such content. Proposed amendments to the IT Act or the IPC could define deepfakes as a distinct offence. Moreover, a dedicated regulatory body overseeing AI ethics and digital rights may be necessary. As global legal responses emerge, India must proactively develop legislation to combat deepfake-related crimes while safeguarding both technological progress and free expression.

Deepfakes in International Criminal Law

Deepfakes are increasingly used in cybercrime, fraud and political interference, and international bodies such as Interpol and the UN now recognise deepfake-related threats. However, enforcement remains inconsistent. Countries like China and the United States have introduced deepfake-specific laws, while the EU is working on stricter AI regulations. Cross-border cooperation is essential to combating deepfake crimes, as offenders often operate beyond national jurisdiction. Developing an international legal framework, similar to those governing cybercrime, can help mitigate the risks posed by AI-generated deception. India should collaborate with global partners to address deepfake-related challenges effectively.

Psychological Risks and Fake Memories

Deepfakes do not just deceive the public; they can also alter a person’s perception of memory. Studies suggest that people who see AI-generated content may develop false memories of events that never happened. In legal cases, manipulated video or audio evidence can mislead jurors and wrongly influence verdicts.

The psychological impact of deepfakes extends beyond misinformation, raising ethical concerns about their use in the media and entertainment industries. As deepfakes become more realistic, differentiating truth from fiction will become increasingly difficult. Public awareness and media literacy programmes are more important than ever to equip individuals with the skills to critically analyse AI-generated content and tell real from fake.

Solutions and Preventive Measures

Combating deepfakes requires a multifaceted approach. Governments must enact stringent regulations while tech companies invest in AI-driven detection tools. Digital literacy programmes can help citizens recognise and question deepfake content. Fact-checking organisations play a crucial role in verifying information, but their efforts need support from government and the corporate sector.
Blockchain technology may provide a solution by verifying the authenticity of digital content. Encouraging ethical AI development and promoting responsible AI use are also vital preventive measures. A combination of these legal, technological and educational measures is necessary to curb deepfake-related harm.
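To illustrate the blockchain idea in miniature, the hedged sketch below (Python, with a hypothetical ledger structure) chains SHA-256 hashes of registered content so that a later verification step can confirm whether a given clip matches an original. A real system would add digital signatures and distributed consensus; this only shows the hashing principle.

```python
import hashlib

# Minimal hash-chain "ledger": each record commits to the content hash and
# the previous record, so tampering with earlier entries is detectable and
# any clip can be checked against the registered originals.

def register(ledger: list[dict], content: bytes, source: str) -> dict:
    prev = ledger[-1]["record_hash"] if ledger else "0" * 64
    content_hash = hashlib.sha256(content).hexdigest()
    record_hash = hashlib.sha256(
        (prev + content_hash + source).encode()
    ).hexdigest()
    entry = {"source": source, "content_hash": content_hash,
             "prev": prev, "record_hash": record_hash}
    ledger.append(entry)
    return entry

def is_authentic(ledger: list[dict], content: bytes) -> bool:
    # A clip is "authentic" here only if its hash matches a registered entry.
    h = hashlib.sha256(content).hexdigest()
    return any(e["content_hash"] == h for e in ledger)

ledger: list[dict] = []
register(ledger, b"original interview footage", "NewsDesk")
assert is_authentic(ledger, b"original interview footage")
assert not is_authentic(ledger, b"doctored interview footage")
```

Because even a one-byte edit changes the SHA-256 digest completely, any deepfake derived from registered footage would fail this check.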

AI as a Solution

Ironically, in this AI-driven crisis, AI itself may be our best defence against deepfakes. Machine learning models can be trained to detect generated content by analysing inconsistencies in facial expressions, speech patterns and pixel anomalies. However, deepfake creators continuously improve their techniques, making detection a constant challenge. Researchers are also exploring watermarking and digital fingerprinting methods to authenticate real content.

AI-driven fact-checkers and detection tools must be continuously updated to keep up with evolving deepfake technologies; the deceivers are not resting, so the preventers should not rest either. Collaboration between government, tech companies and academia is essential to developing more effective AI-driven defences against deepfakes.
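As a toy example of the machine-learning approach, the sketch below trains a logistic-regression "detector" on two entirely hypothetical per-clip features (a blink-rate score and a lip-sync consistency score) over synthetic data. Real deepfake detectors are deep neural networks trained on raw pixels and audio, so this only illustrates the shape of the training loop, not a production method.

```python
import math
import random

# Toy detector: logistic regression over two hypothetical features per clip.
# Synthetic data: "real" clips cluster high on both scores, "fake" clips low.
random.seed(0)
real = [(random.gauss(0.8, 0.1), random.gauss(0.9, 0.05), 0) for _ in range(200)]
fake = [(random.gauss(0.4, 0.1), random.gauss(0.5, 0.1), 1) for _ in range(200)]
data = real + fake

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(2000):
    for x1, x2, y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        g = p - y                    # gradient of the log-loss
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

def fake_probability(x1: float, x2: float) -> float:
    """Model's estimated probability that a clip is fake."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# A clip deep in the "real" cluster scores low; one in the "fake" cluster, high.
assert fake_probability(0.35, 0.45) > fake_probability(0.85, 0.92)
```

The constant arms race mentioned above shows up even here: if deepfake generators learn to mimic the very features a detector relies on, the model must be retrained on new ones.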

Global Comparison and Best Practices

Many countries have enacted deepfake laws that offer valuable lessons for India. China mandates disclosure of AI-generated content, while the US has laws against election interference. The EU’s AI Act includes provisions regulating harmful AI applications, and Singapore uses AI detection tools to counter deepfake misinformation.
India can learn from these models by developing a balanced legal structure that addresses deepfake-related crimes while supporting AI innovation, because that balance must be maintained if the country is to grow while curbing the problem. A comparative analysis of global legal responses can help India craft a robust policy that effectively mitigates deepfake-related threats.

Towards a Deepfake-Specific Legislation for India

To effectively tackle deepfakes, India must develop a dedicated, well-structured legal framework with provisions that criminalise malicious deepfake creation and make transparency mandatory for AI-generated content. In addition, platforms should be held accountable.

A regulatory body could be set up to oversee deepfake-related cases and establish guidelines for the ethical use of AI. Public-private collaboration is essential to developing robust policies. As the technology continues to evolve, India’s legal system must adapt and keep pace to ensure both digital security and innovation.

Landmark Cases 

1. Anil Kapoor vs. Simply Life India and Ors.(2023)

Bollywood actor Anil Kapoor filed a lawsuit against various platforms for the unauthorised use of his persona through AI-generated deepfakes.

The Delhi High Court granted an ex-parte ad-interim injunction in this case, restraining the defendants from using Anil Kapoor’s name, image, voice or any other aspect of his persona without his consent.
This case highlighted the judiciary’s proactive stance in protecting individuals against the misuse of deepfakes.

2. Rajat Sharma vs. Tamara and Ors. (2024)

A petition was filed by Rajat Sharma, editor-in-chief of India TV, and advocate Chaitanya Rohila regarding growing concerns about deepfakes.
The court directed the central government to consult deepfake technology providers, telecom service providers, victims and intermediaries before finalising recommendations for detecting and removing deepfakes.
This move emphasised the need for a collaborative approach to tackling the deepfake problem.

3. ANI vs. OpenAI (2024)

Indian news agency ANI sued OpenAI, accusing the ChatGPT creator of using its published content without permission to train the artificial intelligence chatbot.

ANI was the latest news organisation to take OpenAI to court, following global lawsuits in the US by newspapers including the New York Times and the Chicago Tribune.

The first hearing in the case took place in the Delhi High Court, where the judge issued notice to OpenAI to file a detailed response to ANI’s accusations.

The court filing contained emails sent by OpenAI’s lawyers in India to ANI, stating that the Indian news agency’s website had been placed on an internal block list since September, preventing its content from being used in future training of AI models.

ANI, however, has argued that its published works are permanently stored in ChatGPT’s memory and that there is no programmed deletion.
The Delhi High Court is set to hear the case again in 2025.

4. People vs. Tracy (California 2020)

This case dealt with the production and distribution of non-consensual deepfake pornography.
The court upheld California’s AB 602, a law establishing stringent legal boundaries against such infringements of privacy.

FAQs

1. What are Deepfakes ?

Deepfakes are AI-generated media, such as videos, images or audio, that convincingly manipulate or replace a person’s likeness, and are often used for misinformation, fraud or entertainment.

2. Are Deepfakes illegal in India?

As of now, India does not have a specific law that criminalises deepfakes. However, existing provisions under the IT Act, the IPC and defamation law may be applicable to deepfake-related crimes.

3. Can Deepfakes be prosecuted under forgery laws?

Forgery provisions under Sections 463 and 464 of the Indian Penal Code can be invoked if a deepfake is intended to harm an individual’s reputation or to deceive.

4. What Indian laws address Deepfakes?

Section 66D of the IT Act (impersonation), Section 66E (violation of privacy) and Section 67 (obscene content), along with several IPC provisions on defamation and cheating, can be invoked to address deepfakes.

5. What global laws regulate Deepfakes?

The United States has the Deepfake Task Force Act, the EU regulates AI-generated content under the Digital Services Act, and China imposes strict disclosure requirements.

Conclusion 

Deepfakes stand at the crossroads of innovation and deception, offering both groundbreaking possibilities and dangerous consequences. As AI-generated content becomes more sophisticated, the line between reality and fabrication blurs, raising pressing concerns about truth, consent and the accountability of the platforms that host such content. While India and many other countries grapple with the loopholes in their existing laws, the urgent need for deepfake-specific legislation is undeniable; the misuse of this technology in politics, financial fraud and personal defamation makes it the need of the hour. Deepfake-related crimes are not a hypothetical threat but an already unfolding reality.

However, the legal battle against deepfakes is not lost. Advances in AI detection tools and regulatory frameworks are helping, and public awareness campaigns are forming a robust defence against malicious misuse. The question is not about prohibiting deepfakes, but about balancing the innovation that gave birth to them with responsibility. Can AI be trained to detect and counter its own lies? Can legislation evolve fast enough to keep up with technological advancement? These are questions that lawmakers, technologists and society must answer collectively.

In the end, deepfakes are a testament to both the power and the peril of artificial intelligence. Whether they become a tool for creativity or a weapon for deception depends on how rapidly and effectively we act. In this era of AI-generated deception, the future of truth itself is at stake.


"Loved reading this piece by Vanya Garima Kachhap?
Join LAWyersClubIndia's network for daily News Updates, Judgment Summaries, Articles, Forum Threads, Online Law Courses, and MUCH MORE!!"

