Key takeaways

  • The history of torts and their growth into the digital age
  • Intentional cyber torts on social media platforms
  • Torts on e-commerce platforms, with the example of the European Union
  • Unintentional torts arising from the use of artificial intelligence

Introduction:

Torts are an age-old concept that has founded many principles of civil law. The ever-dynamic state of civilisation has demanded that tort law adapt to the requirements of modern society, and one such adaptation is the absorption of digital transformation into tort law. With technology intertwining with our daily lives and permeating all social spaces, it is imperative that tort law provide redressal and justice mechanisms for wrongs committed even through a screen.
As a result, torts have expanded into the digital realm, becoming known as cyber torts. Such torts appear as intentional torts committed across social media platforms (such as identity theft, defamation, harassment, and stalking), on e-commerce platforms (when substandard or illegal goods are sold via an online marketplace), and in the use of AI in the daily provision of services (including healthcare and transportation). This article aims to examine these categories of torts in technology and explain the complications that technological advancement has created for tort law worldwide.

Law of Torts and Tortious Liability:

The law of torts is one of the most important components of civil law. Torts, from the French word for ‘wrong’, are civil wrongs that provide redressal options for someone who has suffered a loss or an injury at the hands of a wrongdoer, the tortfeasor. While the origins of tort law can be traced as far back as the 12th century through the concept of writs, the organised and distinct system of tort law took shape in 1932, following the famous case of a snail in a bottle, Donoghue v. Stevenson.
For there to be a tort, there must be:

  1. A duty of care from the tortfeasor to the plaintiff;
  2. A breach in that duty of care;
  3. A legal injury to the plaintiff and
  4. A causal link between the actions of the tortfeasor and the injury.

Torts can be separated into three categories: 

  1. Intentional torts, which are deliberate actions by the tortfeasor to injure another. Examples include assault, battery, false imprisonment, and defamation.
  2. Unintentional or negligence-based torts, which arise when a tortfeasor fails to show reasonable care in a way that results in harm to another, such as in claims of medical negligence.
  3. Strict liability, also known as no-fault liability, wherein the intention and fault of the tortfeasor are irrelevant. Such torts may arise when the tortfeasor commits an act that is deemed dangerous from the outset; this doctrinal approach to establishing torts is apparent in cases involving product liability and hazardous activities, though it is subject to certain defences.

When a tortfeasor commits a civil wrong in one of the categories above, he owes a tortious liability to compensate the injured party and return him to the position he would have been in had the tort not been committed. This is key to differentiating tort law from criminal law, which is invoked in cases where compensation does not suffice and punitive measures must therefore be taken against the wrongdoer.

Cyber torts:

In premise, cyber torts are the same as any tort committed offline, in that they require the breach of an existing duty of care that causes harm to another. What makes a tort a cyber tort is that the platform used to commit the wrongful act, or the harm suffered, is online. These cases may present as defamation, where malicious material about another is posted on digital networking platforms; as online retail wrongs, where sellers supply harmful products to consumers or defraud them; or as cyberstalking and harassment.
In the Indian backdrop, the recognition of cyber torts is still novel and upcoming. The Indian Parliament enacted the Information Technology Act as recently as 2000, which addresses a variety of civil wrongs committed digitally. State of Tamil Nadu v. Suhas Katti (C.C. No. 4680 of 2004) was one of the first major cases tried under the 2000 Act. It was a case of cyberstalking, defamation, and harassment directed by the accused, Mr. Suhas Katti, towards a woman. The accused used the online platform Yahoo to send explicit and unsolicited messages to the victim and to create a fake account in her name. As a result, he was found guilty of breaching Section 67 of the IT Act, 2000. The case was treated like any other case of stalking, harassment, and defamation, despite occurring on a digital platform, marking a step forward in the recognition of wrongs occurring online.
Although courts now adjudicate digital torts like any others and recognise their equal maliciousness, the online nature of these wrongs poses considerable complications for legal systems everywhere.
The very possibility of committing cyber torts has increased the frequency of such wrongs. With the internet open to everyone, especially in a densely populated country such as India, it is now possible to commit a tort with a simple click and target multiple people at once. As easy as it is to commit the tort, it is equally easy to erase the traces of civil wrongs in digitised form. This not only complicates matters for legislators but also slows the judicial process and raises the cost of procuring evidence.
It introduces courts to types of evidence that are novel to them and demands a separate determination of whether each novel format is acceptable evidence. Furthermore, it is impossible to create an exhaustive list of online material that may be considered a tortious instrument, as technology is advancing at a rapid rate. With a causal link between the tort and the injury being instrumental to a tort claim, these complications in obtaining and submitting evidence prove a roadblock for justice systems.
Another complication arises in cases involving data privacy and information. Many users of online networking platforms and e-commerce websites are unaware of the extent to which their digital footprint is used to personalise the content they consume, for example through cache and cookies. While consumers may not actively consent to the use, processing, or storage of their information, they often consent unwittingly through passiveness and inertia.
While any such case might be treated as stalking or fraud off the computer, the same actions, done digitally, escape the notice of consumers, who may not even realise they have suffered a potential harm. Such instances escape the radar of the judicial process and, even if tried, are treated properly only when identified and pursued by someone with extensive knowledge of digital data collection operations, consumer protection law, and IT law.

Torts in E-Commerce platforms

Tortious claims arise extensively on online shopping platforms, especially concerning the quality of goods and who must be held liable for the sale of a faulty, illegal, or harmful product. Product liability stems from the principles of negligence and strict liability, establishing that parties in the supply chain owe consumers a duty of care to ensure that the products they sell meet minimum standards, do not cause injury, and are not unlawful from the outset.

Digitalisation has shifted marketplaces onto websites, with platforms like Amazon and Flipkart enjoying great popularity among consumers. With this, the concept of product liability has also shifted to accommodate trade done on these platforms, in both tort law and consumer protection law. These markets not only allow physical products to be sold online but also offer products that are themselves digital. Various jurisdictions, including India and the EU, have started to recognise the need to bring digital transactions within their laws on product liability.
Likewise, courts have begun to consider the nuanced differences of online platforms and online sales when deciding whom to hold liable for sub-standard or defective products. There is now recognition of the various kinds of injury that a consumer can suffer online, which would not arise in face-to-face transactions, and which suffice to ground tort claims, e.g., unconsented data collection and exposure to malware that put consumers at risk of privacy invasion.
In India, consumer rights have been strengthened by the Information Technology Act, 2000 and the Consumer Protection Act, 2019, which recognise anyone involved in an online transaction as a consumer who can bring an action against the manufacturer or trader of a faulty good, and which strengthen the informational rights of consumers to safeguard them against fraud.
The laws of the European Union on the digital market have efficiently tackled the complexities that digital marketplaces introduce into trading, through a consolidated legal framework on consumer protection. With the European Union harmonised in its consumer protection laws, it forms a single economic trading area known as the internal market. This not only facilitates physical cross-border trading within the European Economic Area but also ensures that digital transactions, which are far more likely and easier to carry out across jurisdictions, can be adjudicated under a common legal framework.
This circumvents the need to assess whose laws should apply, the trader's or the consumer's. It also makes it easier to bring claims in strict liability or negligence through recognition of a common standard of care, even when the trade is digital. Important instruments that the EU has implemented to set these standards are the Product Liability Directive (1985) and the Digital Services Act (2022).
A key consideration in determining product liability for online sales is the role of the platforms facilitating such sales; this is called platform liability.
Consumer protection legislation worldwide has generally held intermediary platforms exempt from liability, as the Product Liability Directive did. However, the Digital Services Act, which came into effect in 2022, raises the threshold of responsibility for such digital marketplaces, stating that intermediaries can be exempt only on the condition that they do not present themselves as the first source of the product or appear to have authorised it.
Furthermore, they must show “due diligence” to ensure that no illegal goods or services are sold through their platform. The Act also assigns marketplaces “notice and takedown” obligations, whereby the platform must act on notification of the sale of unlawful, defective, or harmful goods on its platform.
In India, the Consumer Protection (E-Commerce) Rules came into effect in 2020, requiring e-commerce platforms to exercise more caution in the presentation and evaluation of the goods listed on their platforms in order to avoid liability.

Torts and Artificial Intelligence

Before exploring the ways in which the use of AI could lead to tortious liability, it is important to recall that the principles of negligence, product liability, and strict liability, as already introduced, are key.
In addition to these principles, adjudication of tort claims arising from AI usage also employs the principle of ‘vicarious liability’. This arises when an employer is held liable for the injury-causing actions of his or her employee, as long as the action is done within the course of employment. It cannot arise when the action falls outside the expected duties of employment.

This principle was designed with very human interactions in mind, cognisant of the dynamics between a powerful employer and his contracted employee, and this is why vicarious liability is challenging to apply to AI.
AI can act independently, with minimal input from the beneficiary of its services, and is under no contract. This makes it very different from a regular employment situation, meaning that it may not always be appropriate to impose the rule of vicarious liability or to treat AI the same way as an employee.
While a failure to maintain an AI system would amount to a breach of the duty of care and negligence, any system failure beyond that is an outcome of product defect and would be more appropriately handled by assigning at least partial liability to the creator of the AI program, recognising their negligence and failure to meet reasonable standards under product liability.
Liability in self-driving motor vehicles
Because India has no separate statute for self-driving motor vehicles, it imposes strict liability on the car owner under the existing Indian Penal Code. This mirrors how liability is imposed on car owners when an injury arises from human negligence. Effectively, this absolves the car manufacturer of any liability even where the injury results from a miscalculation by the artificial intelligence system supplied by that manufacturer, i.e., a defective good or service.
In the UK, the Automated and Electric Vehicles Act 2018 takes a similar approach, channelling initial liability to the vehicle's insurer (or, where the vehicle is uninsured, its owner) rather than the manufacturer.
An important dimension that the law in both jurisdictions fails to recognise is that it treats the injury only as a tortious claim of negligence, rather than one of product liability.
In a well-known case in Arizona in 2018, a woman was killed in an accident involving a self-driving Uber test vehicle. Despite recognised system failures, Uber was not charged under product liability for developing a faulty system; liability was instead returned to human negligence on the part of the vehicle's safety operator.
Liability in technological medical negligence
Medical malpractice cases, brought as tortious claims in negligence, arise when doctors fail to meet their duty of care towards a patient. This does not mean that the doctor must effect failproof and successful treatments, only that the doctor must treat the patient to a standard deemed reasonable across practitioners.
The use of technology, via AI-assisted diagnostic and surgical systems, reduces the chances of negligence, because it increases the speed and accuracy of detection, surgical precision, and the pre-emptive identification of future complications. This offsets many complications arising out of negligence from the outset.
At the same time, given that negligence is determined by a uniform reasonable standard of care across the practice, should a claim for negligence arise, it would be harder for defendants to satisfy the standard of care expected. This is because the assistance of AI has raised the threshold of reasonable practice, to which even a practitioner still without this facility would perhaps be subject.
Additionally, doctors are likely to make errors while learning how to operate or collaborate with such systems, making them more likely to fail in their duty of reasonable care and be liable for the tort of medical negligence.
Furthermore, there is always the possibility of system errors that could give rise to malpractice claims, for instance if a system error leads to misdiagnosis.
Studies have also shown that AI diagnostic systems tend to exhibit bias in terms of gender and race. This is because AI-generated output is based on algorithmic processing of the data input. Medical studies and tests have predominantly been conducted on Caucasians; the input therefore underrepresents diverse populations, and the AI-generated output ultimately fails to cater to them as well.
As with automated vehicles, this raises the question of whether the system developer is partly to blame, under a claim for product liability.
In Mracek v. Bryn Mawr Hospital [Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009)], the District Court for the Eastern District of Pennsylvania rejected the plaintiff's claims in negligence and product liability. The plaintiff had suffered injuries due to a system failure. However, the court found no causal link between the injuries and the malfunction, and granted judgment against him.

Conclusions:

To conclude, while the advancement of technology has undoubtedly streamlined the provision of goods and services across the globe, regulation of the same remains fragmented.
Present laws in India display a sense of tech neutrality, attempting to accommodate complicated human and technological interactions within existing legal mindsets, frameworks, and principles, as seen in the laws on self-driving vehicles and in evidence procurement for privacy-related cyber torts. The same tendency manifests across the globe, especially in matters related to AI, where authorities fail to recognise the differences between a human employee and an AI system.
Hence, courts rule on the basis of orders derived simply from the principles of vicarious liability and negligence, as opposed to also holding creators of such AI systems partly responsible through a product liability approach.
This would best be complemented by a strict liability imposed on those developers responsible for creating systems used in practices requiring extreme care, e.g., healthcare.

FAQs:

1.    What is the difference between strict and vicarious liability?
Strict liability holds a person responsible for keeping or carrying out something so inherently dangerous that he is liable regardless of fault (such as hazardous activities or unlawful goods). Vicarious liability arises when the thing itself is not inherently dangerous, but an error causes injury to someone else; the error must be made by an employee in the course of employment, yet it is the employer who is held liable.

2.    Is it ethical to hold humans responsible for the actions of AI systems, which are autonomous?
One can argue that the maker of the AI system has a duty to provide reliable systems to his customers, who can then provide responsible services to the end consumer; there is a shared responsibility. However, highly trained and regarded professionals, such as medical practitioners using AI, must at least show adequate care in maintaining such AI systems so that the systems can assist them responsibly.


By Anusha Dharnidharka