Progress in artificial intelligence, machine learning, and the analysis of vast amounts of data is set to revolutionize legal and judicial processes. In the past five years, advances in machine learning-based artificial intelligence have enabled the creation of autonomous systems that closely resemble human decision-making. This is particularly evident in autonomous vehicles, which learn to navigate traffic at high speed from records of decisions made by human drivers in the past. These technologies are increasingly being applied in any field with extensive collections of past decisions, and law is certainly one of them.
Research in artificial intelligence and law is well established, with notable contributions dating back to Thorne McCarty's work on automating decisions in US taxation law in 1972. However, the earliest research in artificial intelligence and law, and the prevailing approach until recently, focused on symbolic systems. These systems represent law as a set of rules, cases, or arguments within a computer, and the decisions they generate are interpretable by humans. More recent research on deep learning systems, which use multi-layered (and often convolutional) neural networks, has drawn on extensive datasets to model intelligent behavior. This has led to a significant improvement in the accuracy and autonomy of artificial intelligence software, and to systems that exhibit intelligent behavior, albeit not in a human-like manner.
These systems are expected to transform every aspect of law, but the area where data-driven artificial intelligence will have the most immediate and profound impact on the law is within the criminal justice system. This is due to a variety of economic, technical, and social factors, but the fundamental insight is that the criminal justice system is primarily based on predicting human behavior.

CURRENT APPLICATIONS OF AI IN THE JUSTICE SYSTEM

Since 2021, the Supreme Court of India has been employing an AI-driven tool that processes case information and places it before judges for their consideration; the tool does not independently determine any outcome. Another IT tool adopted by the Supreme Court is SUVAS (Supreme Court Vidhik Anuvaad Software), which translates legal documents between English and vernacular languages.
In Jaswinder Singh & Ors Vs State of Punjab, the Punjab & Haryana High Court dismissed a bail application on the basis of the prosecution's averments that the petitioner had committed a brutal killing. The presiding judge then asked ChatGPT for a broader view of bail jurisprudence in cases involving cruelty. Crucially, however, the court clarified that the use of ChatGPT in this instance was neither an evaluation of the merits of the case nor any statement on them, and that the trial court was not to take note of it. The reference was made only to survey bail jurisprudence where cruelty is a factor.

USAGE OF AI IN THE JUDICIARY: COMPARATIVE ANALYSIS 

1.    USA 
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), for instance, helps judges predict risk from facets such as criminal history, social and economic status, and mental health, to name a few. The US Sentencing Commission also applies artificial intelligence in formulating and implementing guidelines on appropriate and reasonable sentencing.
US courts have also integrated chatbots to answer public inquiries about court processes, calendars, and related information. This eases the load on court staff and makes information easily retrievable for the public.
2.    China
China's 'Smart Court' system assists judges with artificial intelligence in deciding cases, recommending applicable laws and past judgments in similar cases. It can also suggest appropriate sentences based on comparable cases, helping judges decide and deliver justice in the shortest possible time.
One example is the 'China Judgements Online' platform, which uses artificial intelligence to enable a judge to reach a relevant document quickly.
3.    UK
The UK Ministry of Justice launched the 'Digital Case System' for the Crown Courts in 2020. It provides timely access to case details and hearings, enables people to attend court proceedings virtually, and allows electronic filing of documents to reduce paperwork. The Bar Council's Ethics Committee has issued guidelines to criminal law barristers on accessing the online portal.

LEGAL FRAMEWORK TO REGULATE AI: VIEWS OF THE GLOBAL ECONOMY AND INDIA 

AI offers society numerous opportunities in healthcare, education, transport, entertainment, and other aspects of life. However, it also raises concerns, including ethical issues, privacy infringement, bias, discrimination, and insecurity, among others.
In light of these challenges and risks, a group of global AI experts and data scientists has released a new voluntary safety framework for building artificial intelligence products. The World Ethical Data Foundation, whose 25,000 members include staff of tech companies such as Meta, Google, and Samsung, has framed it as 84 questions that developers should answer at the outset of an AI project.
With the current increase in the use of AI, however, there is a need for a dedicated law governing artificial intelligence, both to eradicate inherent or learned prejudice and to deal with moral issues as and when AI is used.
White papers, guidelines, and policies in jurisdictions such as the UK, the USA, and the EU focus on algorithmic assessment and on eradicating biases within algorithms. Recently, the European Parliament amended the draft Artificial Intelligence Act. The amendments enlarge the list of prohibited applications of AI technology, permit biometric surveillance only for law enforcement purposes after a judicial warrant is obtained, and require generative AI systems such as ChatGPT to indicate that their content was generated using AI.

INDIAN PERSPECTIVE

At the time of writing, there is no legislation in India specifically governing the use of artificial intelligence. The Ministry of Electronics and Information Technology (MeitY) is the executive agency for all things AI and has formed committees to put AI policy in place.
NITI Aayog has set out a strategic framework of seven 'Responsible AI' principles: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values. The Supreme Court and the High Courts, as courts of record, also have the constitutional duty of protecting and enforcing fundamental rights, including the right to privacy. India's data protection laws are relatively new; the main statute governing this area is the Information Technology Act together with its rules. To further protect citizens' rights, MeitY has proposed the Digital Personal Data Protection Bill, which has not yet been formally enacted. If the bill passes into law, individuals will have the right to ask organizations and government agencies how their data has been collected, processed, and stored.

RISKS AND CHALLENGES IN THE LEGAL SECTOR

  1. Confidentiality and data privacy 
    AI systems typically rely on large amounts of data to learn from and to make predictions. Such data may include personal data, a company's financial data, or other recorded information. Machine learning algorithms that depend on such data for training can make it difficult for organizations to comply with data protection regulations.
  2. Bias in AI systems 
    Bias absorbed by an AI system during training can be seen plainly in the model's results. The outcomes obtained using AI can carry biases of race, caste, gender, ideology, and history, and hence reflect social injustice.
  3. Licensing And Issues of Responsibility Or Risk Taking 
    Another problem is that, unlike trained attorneys, an AI system is not required to obtain a legal licence to practice and is therefore not bound by ethical standards and codes of professional conduct. Who will be liable when the legal advice an AI system gives out is inaccurate or misleading? Who decides: the developer or the user?
    These questions persist even where judges retain decision-making discretion, as with the use of AI in the judiciary. Such use can at times produce automation bias, where decision-makers become overly dependent on technology-based recommendations.
    According to a news report, a New York lawyer employed ChatGPT to conduct legal research and included six case citations in a brief to the court. Opposing counsel could not locate any of them, and the lawyer had to admit that he had never personally verified whether the cases were genuine. The judge sanctioned the lawyers concerned, and they and their law firm were ordered to pay $5,000 in total. Lawyers should therefore approach generative AI with great care when using it for legal research.
  4. Concerns regarding competition 
    Another factor is that, because of its self-learning character, AI can run without remaining tied to its coders or original programmers. This may lead to a new kind of divide, technological or economic, whose ramifications have not been explored enough. Such disparities could lead to abuse of data and might, therefore, interfere with the structure created by the Competition Act, 2002.

PREDICTIVE POLICING

The application of artificial intelligence (AI) in making predictions, particularly predictions grounded in prior decisions, is an area where modern AI systems excel. Consequently, the field of criminal law presents a particularly suitable domain for the integration of AI systems. The judiciary has intentionally adopted a flexible approach to the standards governing such predictions, acknowledging the vast array of considerations and variables that influence criminal cases, as well as the historical deficiency in the tools necessary for accurately evaluating the precision of predictions made by law enforcement and the judiciary. As a result, "state actors have been compelled to rely on their own subjective judgments and anecdotal evidence in the formulation of their predictions."
This reliance on subjective judgments and anecdotal evidence is no longer the case. The advent of data-driven machine learning systems is set to revolutionize every facet of the criminal justice system. In the pre-trial phase, these systems are being utilized to forecast the likelihood and location of criminal activities, as well as to guide decisions regarding the monitoring, arrest, search, and charging of suspects, and the determination of whether to indict them. Early approaches to what is often referred to as "predictive policing" systems often relied on uncurated historical data, which inadvertently perpetuated discriminatory practices based on race and socioeconomic status. This has sparked widespread criticism and debate regarding the limitations and potential dangers of predictive policing, serving as a catalyst for the development of ethical AI and the push for fairness, accountability, and transparency in machine learning.
However, the scope of machine learning extends beyond predictive policing. During the parole and sentencing phases of criminal proceedings, data-driven systems are currently in use to assess the likelihood of reoffending, and this assessment is set to play an increasingly significant role in providing guidance to judges during the sentencing phase. The assessment of the risk of reoffending is also a critical factor at the bail stage of the criminal justice process. Although bail, sentencing, and parole decisions occur at distinct stages with varying objectives, the overarching criterion for determining whether a defendant should be imprisoned is community safety. In essence, this involves evaluating the risk that the defendant will commit a serious offense in the foreseeable future. Should there be a substantial risk of such an occurrence, the defendant is likely to be denied bail or parole, and in the context of sentencing, they may be subjected to a lengthy prison sentence. Risk assessment tools, which utilize the reoffending patterns of other offenders and specific traits of the defendant, are extensively employed in many jurisdictions to inform parole decisions, and their use is expected to expand in sentencing contexts. It is within these decision-making processes that artificial intelligence is poised to make the most significant impact in the near future. In the context of bail, AI will not only evaluate the risk of reoffending but also predict the likelihood of the defendant absconding.
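To make the mechanics concrete, a risk assessment tool of this kind can be pictured as a simple probabilistic classifier trained on historical outcomes. The Python sketch below uses scikit-learn on entirely synthetic data; the features, the outcome rule, and the dataset are invented for illustration, and no deployed tool (COMPAS included) works exactly this way.

```python
# A minimal sketch of a reoffending risk score, on synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical historical records: [age, prior_offences, months_since_release]
X = np.column_stack([
    rng.integers(18, 70, 500),
    rng.integers(0, 10, 500),
    rng.integers(0, 120, 500),
])
# Synthetic outcome: 1 = reoffended within two years (an invented rule).
y = (0.2 * X[:, 1] - 0.02 * X[:, 0] + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new defendant: the model emits only a probability, which a court
# would then have to translate into a bail, parole, or sentencing input.
defendant = np.array([[25, 3, 12]])
print("Estimated reoffending risk:", model.predict_proba(defendant)[0, 1])
```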
Therefore, the primary determinants of bail and parole outcomes are on the verge of being exclusively determined by computational systems. Furthermore, there exists a growing demand for the automation of sentencing processes through the use of artificial intelligence (AI) systems. The extensive potential for AI integration within the criminal justice system is poised to deliver significant societal advantages, yet it also raises a multitude of ethical concerns. As previously mentioned, critics of predictive policing have highlighted issues such as the automatic incorporation of systemic biases, the lack of transparency in algorithmic processes, and the appropriate attribution of responsibility for biased decisions. More broadly, there is apprehension regarding the ethical implications of entrusting machines with the task of making autonomous decisions that affect individuals' lives.
Sentencing, parole, and bail decisions are particularly contentious issues. The anticipated advancements in data availability and access to AI technologies will empower political advocacy groups, politicians, executives, and legislative bodies to leverage recidivism and sentencing prediction systems to further their political objectives, often at the expense of judicial officers perceived as too stringent or, more commonly, too lenient on criminal matters. This development poses significant challenges for the judiciary and is likely to intensify the pressure faced by judges.
These developments present complex and concerning challenges. However, there are a variety of strategies that can be implemented to ensure a justice system that operates within the framework of a world increasingly dominated by AI and data. The judicious and deliberate application of this technology could potentially uphold fairness and equality in decision-making processes, thereby fulfilling the constitutional mandate of equal protection under the Equal Protection Clause. Achieving this, however, will necessitate a profound understanding of both the data and the algorithms involved. Indeed, AI has the capacity to mitigate some of the more problematic aspects of decision-making by law enforcement, prosecutors, and judges. There exists a substantial body of literature from fields such as cognitive science, social psychology, sociology, and criminology that demonstrates how limitations in human decision-making can lead to various forms of injustice. AI techniques offer the potential to counteract these issues if utilized correctly.

BIAS AND FAIRNESS CONCERNS

  1. The Issue at Hand and Its Implications
    The concepts of unfairness, bias, and discrimination consistently emerge as critical issues, identified as significant challenges (Hacker 2018) in the context of algorithm usage and automated decision-making systems. These systems are pivotal in various domains, including healthcare (Danks & London 2017), employment, credit, criminal justice (Berk 2019), and insurance. In August 2020, demonstrations were held and legal action was anticipated concerning the controversial algorithm employed to grade GCSE students in England (Ferguson & Savage 2020).
    A focus paper from the European Agency for Fundamental Rights (FRA 2018) delineates the potential for discrimination against individuals through algorithms. It emphasizes the necessity of adhering to the principle of non-discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, when applying algorithms in daily life. The paper highlights instances where discrimination is likely to occur, such as the automated selection of candidates for job interviews and the use of risk scores in credit assessments or trials.
    A 2017 European Parliament report on the fundamental rights implications of big data underscores the potential for big data to infringe upon fundamental rights, including privacy, data protection, non-discrimination, security, and law enforcement. It notes that the data sets and algorithmic systems used in assessments and predictions could lead to violations of individual rights and result in differential treatment and indirect discrimination against groups with similar characteristics, particularly in areas such as fairness and equality of opportunity in education and employment. The report calls upon the European Commission, Member States, and data protection authorities to identify and implement measures to mitigate algorithmic discrimination and bias. Furthermore, it advocates for the development of a robust and unified ethical framework for the transparent processing of personal data and automated decision-making, which could guide data usage and the enforcement of Union law.
  2. Proposed Solutions and Addressing the Issue
    Various strategies have been proposed to address these critical issues. For instance, the European Parliament (2017) suggests conducting regular evaluations of data sets to ensure their representativeness and absence of biased elements. Additionally, technological or algorithmic modifications to counteract problematic bias, the integration of human oversight (Berendt & Preibusch 2017), and the adoption of open algorithms are recommended. Efforts are also underway to develop certification schemes for algorithmic decision systems that do not exhibit unjustified bias. The IEEE P7003 Standard for Algorithmic Bias Considerations is one such initiative, aimed at providing a framework for developers to avoid unintended, unjustified, and inappropriately differential outcomes for users. The development of open-source toolkits is another crucial step: the AI Fairness 360 Open-Source Toolkit, for instance, enables users to scrutinize, report, and mitigate discrimination and bias in machine learning models throughout the entire lifecycle of an AI application, employing a comprehensive set of 70 fairness metrics and 10 advanced bias mitigation algorithms developed by the research community. (A minimal sketch of how such a toolkit is used appears at the end of this section.)
  3. Gaps & Challenges 
    However, there are notable gaps and challenges in this approach. While the law clearly regulates and protects against discriminatory behavior, it does not fully address every aspect. A 2018 study by the Council of Europe highlighted these shortcomings, including the law's failure to cover forms of differentiation not explicitly protected by anti-discrimination legislation, and the emergence of new forms of differentiation that can lead to biased and discriminatory outcomes. Human-in-the-loop approaches also raise questions about when and how they are appropriately applied. In some instances, human involvement may be infeasible or undesirable, such as where the risk of human error or incompetence could have serious or irreversible consequences. There are also concerns about whether the use of a human in the loop is adequately disclosed in the technologies that employ one. Mere transparency of algorithms does not necessarily make them more comprehensible to the general public, and the exposure and discoverability of private data introduce their own set of concerns.
    The House of Commons in 2018 emphasized the need for a holistic, interdisciplinary, scientifically-grounded, and ethically-informed approach to algorithmic auditing for it to be effective. While the technical solutions proposed represent significant progress, there have been numerous calls for increased regulatory, policy, and ethical attention to fairness, particularly in safeguarding vulnerable and marginalized populations as highlighted by Raji and Buolamwini in 2019.
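As a minimal sketch of the toolkit approach referenced above, the following Python example applies the open-source AI Fairness 360 package to a toy dataset; the DataFrame, its column names, and the group encodings are hypothetical, and a real audit would use far richer data.

```python
# A minimal sketch, assuming the aif360 package; all data is invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy historical decisions: 1 = favourable outcome (e.g. application granted).
df = pd.DataFrame({
    "race":     [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged group (illustrative)
    "priors":   [0, 1, 2, 0, 1, 3, 1, 2],
    "decision": [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["decision"], protected_attribute_names=["race"],
    favorable_label=1.0, unfavorable_label=0.0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)
# Disparate impact: ratio of favourable-outcome rates; 1.0 means parity.
print("Disparate impact:", metric.disparate_impact())

# One of the toolkit's mitigation algorithms: reweigh training examples
# so that both groups carry equal effective weight before model training.
transformed = Reweighing(
    unprivileged_groups=[{"race": 0}],
    privileged_groups=[{"race": 1}],
).fit_transform(dataset)
```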

CASE STUDIES AND ILLUSTRATIONS OF ARTIFICIAL INTELLIGENCE IN CRIMINAL JUSTICE SYSTEMS

In recent years, there has been a proliferation of instances where artificial intelligence (AI) has been integrated into criminal justice systems globally, including the United States. A notable example is Northpointe's COMPAS algorithm, employed by numerous criminal justice agencies to forecast the likelihood of a defendant's reoffending. However, concerns have been raised regarding the algorithm's accuracy and fairness, with some studies indicating a potential bias against African American defendants. Another pertinent example is the PredPol predictive policing tool, utilized by many police departments to pinpoint areas likely to experience crime. Critics argue, however, that there is scant evidence to support the claim that PredPol actually diminishes crime rates, suggesting instead that it may result in the disproportionate policing of minority communities. These case studies underscore the dual nature of AI's potential to enhance criminal justice systems while simultaneously highlighting the critical ethical and legal considerations that must be meticulously addressed.

The Application of Facial Recognition Technology in Law Enforcement
The integration of facial recognition technology within law enforcement holds the promise of significant advancements. This technology empowers law enforcement agencies to swiftly and accurately identify suspects, a capability that is invaluable in the resolution of criminal cases. Facial recognition technology allows for the rapid scanning of thousands of images, accurately matching faces with a high degree of certainty. This capability is particularly advantageous in the pursuit of repeat offenders or individuals involved in large-scale events such as protests. Nonetheless, the deployment of facial recognition technology within law enforcement is not without ethical considerations. A primary concern is the potential for misidentification, which could lead to grave consequences, including wrongful arrests. Furthermore, the technology's capacity to collect and store extensive data on individuals without their explicit consent raises significant privacy concerns. Consequently, it is imperative that law enforcement agencies implement robust safeguards and regulations to ensure the judicious use of facial recognition technology.

ALGORITHMIC BIAS REFLECTING HISTORICAL INEQUALITIES

Bias within artificial intelligence, also known as machine learning bias or algorithm bias, pertains to the phenomenon where AI systems generate outcomes that reflect and perpetuate societal biases, including those rooted in historical and contemporary social disparities. This bias can manifest in the initial training data, the algorithms employed, or the predictions these algorithms produce.
When bias remains unaddressed, it impedes individuals' participation in economic and societal spheres, while also diminishing the potential of artificial intelligence. Businesses are unable to reap the benefits of systems that yield distorted results, thereby fostering mistrust among various marginalized groups, including people of color, women, individuals with disabilities, the LGBTQ community, and others.

The origins of bias in artificial intelligence
The eradication of bias in artificial intelligence necessitates a thorough examination of datasets, machine learning algorithms, and other components of AI systems to pinpoint the sources of potential bias.

  1. Training data bias
    The foundation of AI systems' decision-making processes lies in the training data they are exposed to, making it crucial to scrutinize datasets for any signs of bias. One approach involves reviewing the representation of various groups within the training data. For instance, a facial recognition algorithm that predominantly represents white individuals may lead to inaccuracies when applied to people of color. Similarly, security data collected in areas with a high African American population could introduce racial bias into AI tools utilized by law enforcement.
    Bias can also arise from the labeling of training data. For example, AI recruiting tools that employ inconsistent labeling or disproportionately focus on certain characteristics may inadvertently exclude qualified candidates. (A short sketch of such representation and label checks follows this list.)
  2. Algorithmic bias
    The use of flawed training data can result in algorithms that consistently produce errors, unfair outcomes, or even amplify the inherent biases present in the flawed data. Algorithmic bias can also emerge from programming errors, such as when developers unintentionally skew the weighting of factors in algorithm decision-making based on their own conscious or unconscious biases. For example, the use of indicators like income or vocabulary by an algorithm may unintentionally discriminate against individuals of a specific race or gender.
  3. Cognitive bias
    The processing of information and the making of judgments are inevitably influenced by our experiences and preferences, leading to the incorporation of these biases into AI systems through the selection of data or the way data is weighted. For example, cognitive bias might lead to a preference for datasets collected from Americans, thereby overlooking the importance of diversity in data sources.
    According to the National Institute of Standards and Technology (NIST), this form of bias is more prevalent than one might assume. In its report Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), NIST highlighted that "human and systemic institutional and societal factors are significant sources of AI bias as well, and are currently overlooked." Addressing this challenge will require a comprehensive approach that considers all forms of bias. This entails expanding our perspective beyond the machine learning pipeline to understand and investigate how this technology is both created within and impacts our society.
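As a small illustration of the representation and label checks described under training data bias, the following Python sketch inspects a toy training set with pandas; the column names and values are hypothetical.

```python
# A minimal sketch of two dataset audits; all data here is invented.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "A", "A", "B", "B"],
    "label": [1, 0, 1, 1, 0, 1, 0, 0],
})

# 1. Representation: is any group badly under-sampled?
print(train["group"].value_counts(normalize=True))

# 2. Label balance per group: skewed base rates in historical data
#    are learned, and then reproduced, by the model.
print(train.groupby("group")["label"].mean())
```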

LACK OF TRANSPARENCY IN AI DECISION-MAKING PROCESSES

The 'black box' attribute of AI has implications for the foundations of traditional legal and regulatory systems, which are anchored in human decision-making. The issue is relevant to all branches of AI application: the absence of clear reasoning that can easily be followed intensifies the problem of determining who is to blame, or to be held responsible, for the negative consequences and systemic risks associated with AI decisions and actions.
Global legislation and guidelines relating to AI are gradually shifting attention towards the fundamental aspects of transparency, bias, accountability, and systemic risk, aiming to build a robust system that harnesses the opportunities offered by AI while handling its challenges. Leading these efforts is the European Union's AI Act, which seeks to regulate the 'black box' aspect of AI systems, particularly in the financial industry.
Primarily addressing the potential risks of high-risk AI applications, the Act received the European Parliament's green light in April 2024. It establishes a multidimensional legal framework for the safe use of artificial intelligence across a number of sectors, encourages fair AI practices, and raises the level of transparency and accountability for artificial intelligence systems.
In its approach, the Act adopts a pyramid model for AI governance, based on risk-based categorization, strict rules of disclosure and control, and human supervision of AI algorithms' decision-making. This legislative structure aims to bring transparency to AI decision-making procedures and to ensure that enhancements within AI remain aligned with societal norms and values.
The US is moving towards responsible artificial intelligence through self-regulation and voluntary compliance, under the framework laid down in the extensive and visionary Executive Order promulgated by the Biden administration in October 2023, which promotes disclosure without imposing restrictive rules.
At the same time, the UK, in its July 2023 white paper, supports a nuanced balance between innovation and risk. It underlines the importance of explainability and of reducing bias in AI, and calls for regulation that fosters innovation while stressing the necessity of global cooperation.
Collectively, these strides highlight the need to harmonize stringent regulation with innovation, ethical policy, and responsibility for AI development across regions.

DUE PROCESS AND CONSTITUTIONAL RIGHTS

Lon L. Fuller posits that the decision of a judge "declares rights, and for rights to be meaningful, they must, to some extent, remain steadfast in the face of changing circumstances." A thorough examination of historical precedents is crucial in the adjudication of rights for three primary reasons. Firstly, there exists an expectation that judges should draw upon legal history when addressing novel issues. Ronald Dworkin argues that "A judge's duty is to interpret the legal history he finds, not to invent a better history." Dworkin contends that in the adjudication of new cases, each judge must view themselves as a "partner in a complex chain enterprise of which these innumerable decisions, structures, conventions, and practices are the history." According to Dworkin, in such novel cases, a judge must "advance the enterprise in hand rather than strike out in some new direction of his own." Similarly, H. L. A. Hart asserts that "when particular statutes or precedents prove indeterminate, or when the explicit law is silent, judges do not just push away their law books and start to legislate without further guidance from the law." Therefore, it is imperative to scrutinize historical precedents.
Secondly, the concept of justice is often deeply ingrained in societal traditions and conscience, earning it the respect of being considered fundamental. This suggests that historical context significantly influences the determination of what constitutes fundamental values and should be safeguarded with fundamental rights.
Thirdly, there is a prevalent belief that constitutional rights are intrinsically linked to historical developments. In 1872, the Supreme Court of the United States noted that the "true meaning" of the Amendments to the Constitution is "connected to the history of the times." Dworkin elaborates that when a legal document, such as a constitution, is part of the doctrinal history, the interpreter of this document must deliberate and choose, "as a question of political theory," which legislative intention is "the most appropriate one." Thus, there is a doctrinal history associated with constitutional rights. This raises the question of why there appears to be a hesitancy towards extending constitutional rights to artificial intelligence. It is likely due to the fact that conferring constitutional rights to technology is not a part of the doctrinal history of constitutional rights. A respect for doctrinal history may lead to a reluctance in extending constitutional protections to artificial intelligence.

THE "BLACK BOX" PROBLEM IN AI DECISION-MAKING

In essence, Black Box AI operates under the foundational principle of 'machine learning,' wherein it is meticulously trained on extensive datasets to facilitate decision-making or prediction. Within the realm of 'machine learning,' an algorithm is subjected to an extensive array of data, subsequently being trained to discern patterns and features within this data. However, complexity emerges when these algorithms begin to yield accurate predictions.
The intricate layers of computations obscure the methodology or calculation that culminates in the precise result, thereby encapsulating a 'black box' environment where the trajectory to the final decisions remains concealed. Consequently, this opacity raises concerns regarding transparency, as it is impossible to discern how an AI system arrives at its decisions. Numerous applications of Black Box AI exist, ranging from facial recognition for suspect identification to predictive policing, medical diagnosis for disease detection, autonomous vehicles, and fraud detection within the financial sectors. In each of these applications, the process through which decisions are made is not comprehensible to humans.
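The following Python sketch illustrates this opacity on a toy scale, assuming scikit-learn and synthetic data: a small neural network produces a confident answer, but the only "explanation" available is a pile of learned numeric weights.

```python
# A minimal illustration of black-box opacity, on toy synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)       # a non-linear toy rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                      random_state=0).fit(X, y)

print(model.predict(X[:1]))                   # a decision is produced...
print(sum(w.size for w in model.coefs_))      # ...backed only by ~1,200 weights
```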
Challenges Faced by Blackbox AI: Lack of Transparency and Unintended Consequences
The inherent opacity of these systems presents significant challenges when the decisions rendered by AI lead to unforeseen or undesirable outcomes: an autonomous vehicle collides with a pedestrian, facial recognition results in a wrongful arrest, or a system fails to identify a disease. Moreover, AI systems can develop biases stemming from preconceptions present in the training data or from biased assumptions made during algorithm development. These biases can be racial, as exemplified by COMPAS, where white offenders were inaccurately classified as 'low risk' compared to their black counterparts, and by healthcare systems in which black patients were erroneously deemed healthier than equally ill white patients. They can also be sexist, as demonstrated by Amazon's recruitment algorithm and Google's advertising system, where resumes were not sorted with gender neutrality and male candidates were preferred.
The opacity of these systems exacerbates these challenges by obstructing explainability. This lack of transparency complicates efforts to understand the rationale behind a particular decision, thereby hindering the correction of errors and the implementation of effective remedies. Such opacity undermines the widespread adoption of AI technologies by eroding user confidence and fostering skepticism.
The Imperative for AI Transparency
AI transparency encompasses the openness and clarity regarding the decision-making processes, operations, and behaviors of AI systems. It is a critical element in fostering trust in AI systems and providing users and other stakeholders with a heightened sense of assurance that the system is being utilized appropriately. Furthermore, transparency facilitates the identification and rectification of biases and discriminatory patterns that may emerge within AI algorithms.
The necessity for transparency has been acknowledged by the National Institution for Transforming India (NITI Aayog) in its comprehensive three-part document titled "Towards Responsible AI for All," wherein the 'Principle of Transparency' is delineated for the responsible governance of Artificial Intelligence (AI). This principle is defined as follows: "The design and operation of AI systems should be documented and made accessible for external examination and audit to the extent feasible, ensuring that the deployment is conducted in a manner that is fair, honest, impartial, and upholds accountability." Furthermore, the decisions of the Supreme Court have been referenced, where it has been stated that "transparency in the decision-making process is paramount, even for private entities." This is in alignment with the Constitution's guarantee of accountability for all actions of the state towards individuals and groups.
The document also highlights the importance of impartiality, stipulating under the 'Principle of Equality' that "AI systems must treat individuals under similar circumstances relevant to the decision equally"; this principle corresponds to Article 14 of our Constitution.
Moreover, transparency is recognized in legislative measures, as exemplified by the forthcoming EU AI Act, where it appears as a foundational principle applicable to all AI systems under Article 4a. The Act states, "AI systems shall be developed and utilized in a manner that ensures appropriate traceability and explainability, while also making individuals aware that they are interacting with an AI system and adequately informing users about the capabilities and limitations of that AI system, as well as informing affected individuals about their rights." Article 13 of the Act takes a similar stance, stating that "high-risk AI systems shall be developed and designed in a manner that ensures their operation is sufficiently transparent, enabling providers and users to reasonably understand the system's functioning."
The EU's General Data Protection Regulation (GDPR), with its focus on data protection, further underscores the necessity of transparent AI systems. This is evident in Article 15, which mandates that individuals be informed about the use of automated decision-making (i.e., AI systems) and "meaningful information about the logic involved." Such information about the logic is only possible if the system is transparent. Various other national frameworks and conventions also emphasize the requirement for transparent and explainable AI. This is evident in Singapore's Model AI Governance Approach, where the guiding principles include transparency, explainability, and fairness. Similarly, the USA's Blueprint for an AI Bill of Rights is in harmony with the need for a transparent and accountable AI in the private sector.
Transparent artificial intelligence (AI) systems are essential for comprehending the decision-making processes, particularly in the event of undesirable outcomes or biases. Legal and regulatory frameworks underscore the importance of transparency. Nonetheless, achieving transparency within BlackBox AI systems presents significant challenges. Moreover, the regulation of AI transparency could potentially hinder innovation, creating regulatory obstacles for emerging entities. Enhancing transparency may also hinder AI efficiency and progress, without a definitive assurance of success.
An alternative strategy involves placing the responsibility on users to oversee AI decisions and mitigate unpredictability. Instead of mandating transparency, users are encouraged to monitor AI decisions to ensure their rationality. The concept of vicarious liability, such as respondeat superior (holding the party responsible for its agents), is particularly relevant when AI is considered an agent of its user, as it makes decisions to assist its users, especially in autonomous critical contexts. In scenarios where the risk of external failure is high, such as in interconnected markets or medical procedures, users or creators should assume broader liability for the AI's opaque decision-making.
A potential solution lies in the adoption of explainable AI (XAI) techniques, including Local Interpretable Model-Agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and Descriptive Machine Learning Explanations (DALEX). These methods render the decision-making processes of AI systems understandable to humans, for example by fitting a simpler, interpretable model that approximates the opaque one's behavior, thereby bolstering trust in the outputs generated by such systems.
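By way of illustration, here is a minimal Python sketch of the SHAP technique named above, assuming the open-source shap and scikit-learn packages and toy synthetic data; it shows the idea of per-decision feature attribution, not a production pipeline.

```python
# A minimal SHAP sketch on an invented three-feature toy problem.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # three hypothetical input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, giving a
# per-decision explanation for an otherwise opaque model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)   # contribution of each feature to this one prediction
```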

IMPACT ON THE LEGAL PROFESSION AND ACCESS TO JUSTICE

Given the extensive volume of data generated by legal professionals, the integration of Artificial Intelligence (AI) into legal processes holds the potential to enhance efficiency and accuracy in predicting case outcomes. Furthermore, AI can identify patterns and biases within the judicial system, thereby contributing to a more equitable and just legal framework.
Therefore, it is imperative for India to persist in incorporating AI into its legal procedures to augment access to justice and maintain the contemporary relevance of its legal system in the 21st century.
Predictive Case Analysis: AI provides legal practitioners with access to a wealth of data, analyzes historical cases to identify pivotal legal arguments, and offers more precise predictions regarding the resolution of specific cases. This advancement has facilitated lawyers in performing their duties more efficiently, thereby achieving superior outcomes for their clients.
Enhancement of Justice: Through the use of AI tools and automation, the incidence of errors within the legal system can be reduced. This not only minimizes the likelihood of errors but also elevates the overall quality of justice delivered.
Acceleration of Legal Processes: AI has the potential to expedite and simplify legal procedures in India. Its capability to swiftly sift through extensive datasets and identify trends could significantly benefit a nation grappling with a backlog of cases and a complex judicial system.
Unbiased Judicial System: By identifying biases and errors within the legal system, AI contributes to the realization of more equitable and just outcomes.
The advent of AI-powered legal tools has simplified the communication between individuals without legal expertise and legal professionals. These tools streamline processes, reduce the time required for legal research and compliance analysis, and facilitate more effective dialogue between parties.
Moreover, the application of AI in legal data analysis could revolutionize the Indian legal business landscape. The integration of AI into legal systems could enhance the efficiency of lawyers and judges, automate legal studies, and expedite the review of extensive legal data.

CONCLUSION

Holding people to account for technology mistakes in the legal profession can be a delicate process. The consequences of mistakes made by AI systems can be enormous, reaching as far as the life and liberty of people. Nonetheless, several preventive measures can be adopted, at the legislative level or by experts in law and other fields, to guard against the risks of AI by drawing a clear line as to who is responsible for what when AI is employed in practice.
All of this affirms that AI should augment the work of lawyers, not replace them. While AI can make repetitive and time-consuming processes easier, it cannot make strategic decisions, interpret the law, or give legal advice.
Ultimately, it is lawyers who are liable for the work they produce, and they have to safeguard their clients' interests. Therefore, although AI helps law firms become more efficient, it cannot replace the experience and knowledge of the lawyer.

FREQUENTLY ASKED QUESTIONS

1.    What is artificial intelligence? 
Artificial intelligence, or AI, is a technology that enables machines to perform intelligent activities comparable to human intelligence, for example understanding a language, identifying an object in a picture, or making a choice. There are several types of AI, including:
 
Weak (or narrow) AI: the type of AI that currently exists. It is highly specialized and capable of accomplishing certain kinds of tasks, such as machine translation and speech recognition, but it cannot perform tasks outside its specialization.
Strong (or general) AI: AI that would be as intelligent as a human being, able to comprehend any intellectual task a human can. We are not there yet, and researchers are actively working on creating this kind of AI.
Super-intelligent AI: a hypothetical type of AI that would be smarter than humans and might even be able to develop or upgrade itself. It remains a subject of discussion and introspection among researchers and developers in the field of AI.
2.    How does AI work? 
AI works by employing digital techniques to analyze information and come up with intelligent solutions to problems, in a manner somewhat like that of a human brain. Here's how it works, in a simplified way:
Data collection: AI starts at the data-gathering stage, collecting data in the form of images, text, video, or any other kind of data. The more data the better, since richer input gives the AI more information to analyze and understand.
Data preprocessing: the source data is purged of errors prior to use, guaranteeing that the collected data is of the best possible quality for analysis.
Machine learning: AI uses machine learning algorithms, with artificial neural networks among the most common, to analyze the data and find patterns. Machine learning enables the AI to learn from the data given to it without having to be explicitly programmed, while deep learning, a subfield of machine learning, uses deep artificial neural networks to perform complicated tasks such as image recognition and natural language processing.
Decision making: the AI can then take decisions or initiate actions according to what it has gleaned from the data during learning. For instance, a chatbot can respond to questions using the resources available to it, drawing on what it has been trained on.
Continuous improvement: an AI system can be trained continually, being updated repeatedly. The more data and information it is fed, the more complex and precise its performance of the tasks at hand becomes. (A minimal end-to-end sketch of these steps follows.)
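Here is a minimal end-to-end sketch of these steps in Python, assuming scikit-learn and using one of its built-in toy datasets in place of real collected data.

```python
# A minimal pipeline: collect, preprocess, learn, decide.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# Data collection: a built-in example dataset stands in for gathered data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data preprocessing + machine learning in one pipeline:
# scaling cleans the inputs, the classifier learns patterns from them.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Decision making: the trained system labels data it has never seen.
print(model.score(X_test, y_test))

# Continuous improvement: retraining on more data refines the model.
```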
Artificial intelligence also comes in various forms, classified by how it learns (a short sketch contrasting the first two categories follows the list):

  • supervised AI: learns from data that humans have labeled in advance
  • unsupervised AI: actively looks for patterns in unlabeled data
  • reinforcement AI: learns by trial and error, receiving feedback as it tries different responses to its environment.
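The Python sketch below contrasts the first two categories using scikit-learn on toy data; reinforcement learning is omitted because it additionally requires an interactive environment.

```python
# Supervised vs. unsupervised learning on the same toy data (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))

# Supervised: human-provided labels guide the learning.
y = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:3]))

# Unsupervised: the algorithm looks for structure with no labels at all.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:3])
```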

3.    How can we be sure that artificial intelligence is used ethically while protecting our personal data?
Ethical concerns are central to the application of artificial intelligence. For AI to be used in the right manner and our personal data to be kept secure, the following measures must be put in place.
First, it is imperative to design algorithms that adhere to ethical principles. This means they must not discriminate, exhibit bias, or violate individuals' privacy. These practices can be overseen by establishing legal authorities, the so-called regulatory bodies. For personal data protection, legal frameworks are needed to guarantee that users' information is properly managed.
The ethical design of AI must therefore go hand in hand with proper regulation in order to safeguard users' privacy. This makes it possible to benefit from AI while containing the foreseeable risks.
4.    Will AI replace man? 
Whether AI can totally replace humans is a question that arouses controversy among specialists. Many AI techniques are created to assist people rather than to replace them. For instance, chatbots and virtual assistants can reduce the amount of drudgery people face, freeing more of their time for creative work.
AI is very good at specific, well-described tasks, but it lacks general adaptability and an understanding of human context. It remains unreliable and ill-suited to many situations, which is precisely why people cannot be replaced in many industries: people are capable of adapting and learning quickly.
Moreover, AI brings new job opportunities in developing, managing, and regulating such technologies. And whereas decisions made by artificial intelligence are calculated from algorithms and data, people weigh legal, moral, and social factors when they act; AI does not have the moral judgment and empathy that a human being has.


"Loved reading this piece by prangya paramita jena?
Join LAWyersClubIndia's network for daily News Updates, Judgment Summaries, Articles, Forum Threads, Online Law Courses, and MUCH MORE!!"






Tags :


Category Others, Other Articles by - prangya paramita jena 



Comments


update