A New Era for AI-Assisted Legal Work: Ethical Considerations & Challenges

Friday, June 21, 2024 by KLD Team

Recent advancements have highlighted the enormous potential for AI in law to automate common processes, increase efficiency, and help draft documents. However, there are ethical issues to consider and challenges to overcome. Can AI tools make decisions free from bias? How will regulatory agencies respond? What ethical responsibilities do legal professionals have? Take a closer look at AI’s impact across the legal landscape.

A Brief History of AI in Legal Work

The field of AI originated in the 1950s, when the first machine learning concepts were introduced.

The legal field was quick to take advantage of machine learning's ability to analyze data through algorithmic calculations, implementing it in areas such as predictive coding for electronic discovery, compliance, regulatory investigations, and data analysis. Today, predictive coding is one of many AI-based tools that have become standard in legal work, alongside email threading, near-duplicate analysis, concept searching/clustering, and natural language processing. AI-based tools also form the basis for specialties such as compliance surveillance/supervision and contract lifecycle management.

Newer types of AI have gained popularity and acceptance in recent years. A prime example is natural language processing (NLP), which incorporates linguistic analysis going beyond mathematical algorithms. NLP leverages lexicons and machine learning to understand and analyze the contextual meaning in human language.

The latest iteration of AI is generative AI (GenAI). While GenAI itself is not "new," its application in legal work is. Its arrival quickly sparked discussion, controversy, and change across the legal landscape.


Ethical Considerations for GenAI in Legal Work

GenAI is a form of artificial intelligence that generates entirely new material based on human prompts. In other words, GenAI "thinks" and generates "original" content based on the query provided and the data available to it. Consequently, there are ethical implications for legal professionals in relying on technology for decision-making and/or content-generation processes historically carried out by humans.

“One of the major concerns with generative AI tools is preventing the concept of bias,” Eric Robinson, KLDiscovery’s Vice President of Global Advisory Services & Strategic Client Solutions, said in a webinar on AI and its impact on eDiscovery. “If the data fed into the model inherently leans towards one direction, opinion, context, or concept, then any content generated from it will reflect that same bias.”
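Robinson's point can be illustrated with a deliberately simplistic sketch. The toy "model" below (a hypothetical example; the labels and clause snippets are invented) just learns the majority label in its training data, so a skewed training set guarantees skewed output regardless of the input:

```python
from collections import Counter

def train_majority_model(labeled_docs):
    """Toy 'model': learns only the most common label in its training data."""
    counts = Counter(label for _, label in labeled_docs)
    majority_label = counts.most_common(1)[0][0]
    # The returned predictor ignores its input entirely.
    return lambda doc: majority_label

# Hypothetical training set that leans heavily toward one side.
training_data = [
    ("indemnity clause favors lessor", "pro-lessor"),
    ("renewal clause favors lessor", "pro-lessor"),
    ("termination clause favors lessor", "pro-lessor"),
    ("deposit clause favors lessee", "pro-lessee"),
]

model = train_majority_model(training_data)
# Whatever the query, the skew in the data drives the answer.
print(model("neutral arbitration clause"))  # pro-lessor
```

Real GenAI models are vastly more sophisticated, but the underlying dynamic Robinson describes is the same: a model can only reflect the distribution of the data it was trained on.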

The issue of bias—along with other fundamental concerns over accuracy and privacy—has led to calls for AI regulation that have already begun to take effect.


A Regulatory Framework for AI

At local, national, and international levels, the call for more comprehensive AI regulation is gaining momentum. Italy emerged as an early mover, becoming the first EU member state to take regulatory action against a GenAI tool. Similarly, individual US states are drafting and enacting AI regulations.

Regulatory agencies in the US, such as the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC), are actively proposing rules to oversee AI implementation in the corporate realm. Additionally, multiple AI bills have been proposed in Congress, while the White House has issued Executive Order 13859, calling for the development of standards supporting the reliable use of AI.

Specific to the legal industry, Robinson believes there will be significant changes in how AI technology is integrated into legal practice over the next 3–5 years. Expect increased regulatory scrutiny at the state, national, and international levels, followed by the potential passage of significant national laws governing the development and implementation of AI. In fact, leading organizations like the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) are working to establish standardized testing and auditing procedures that could lead to national and international standards.

Over the longer term, there will likely be widespread adoption of AI-specific regulations and formalization of laws for liability and accountability. The development of automated compliance tools will help support how AI technologies function and adhere to regulatory standards, which will help legal professionals validate whether AI and GenAI are working as intended and/or within prescribed boundaries.


Ethical Requirements for Legal Professionals

Another area where ethical issues in AI-assisted legal work come into play is professional standards of conduct. Attorneys and even non-attorneys who leverage AI for legal work must uphold certain standards to ensure accurate, unbiased work.

Competence, and specifically technological competence, is addressed in Rule 1.1 of the American Bar Association (ABA) Model Rules of Professional Conduct. In 2012, the ABA amended Comment 8 to Rule 1.1, which states that "a lawyer should keep abreast of changes in the law and its practice," adding the phrase "including the benefits and risks associated with relevant technology." It is important to note that Rule 1.1 and Comment 8 do not require "expertise." Rather, the intent of the language is to ensure that legal counsel is aware of, and has a basic understanding of, the benefits and risks associated with the use of technology. In general, best practice dictates that legal professionals either develop personal expertise or have access to experts who can parse and explain the associated risks and benefits.

To date, 40 states have adopted this duty of technological competence into their local rules. The shift represents a major step in encouraging lawyers to better understand technology and AI. However, technical competence does not equate to ethical responsibility, and a legal professional's ethical duties extend beyond competence.

“Within this conversation, there is a requirement under the rules of ethics to communicate to our clients,” Robinson said. “When technologies are being leveraged, the attorney has a duty to communicate this to their client.”

Model Rule 1.4, which governs communication in the client-lawyer relationship, specifies that a lawyer shall:

  • “Consult with the client about the means by which the client’s objectives are to be accomplished” (Rule 1.4(a)(2))

  • “Explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation” (Rule 1.4(b))

According to Robinson, a lawyer should explain the objective(s) AI involvement would achieve, the anticipated costs, and any benefits, risks, or drawbacks to using AI.



Take a deeper dive into this topic by listening to a webinar recording about AI’s impact on eDiscovery hosted by Eric Robinson.

Gain insights into how far AI has come, how AI is transforming legal discovery, and what the near future may hold for the legal and legal technology professions.

Watch the Recording