
Implications of Generative AI in the Legal Industry and the EU AI Act

By Gina F. Rubel


The applications of generative artificial intelligence (AI) in the legal industry are evolving at a pace that has both inspired and alarmed the legal world. In conference rooms everywhere, attorneys are discussing the ways generative AI will affect law firms and in-house legal departments.


Many legal professionals view various forms of AI, including generative AI large language models (LLMs) like ChatGPT, as time-saving tools that offer a good starting point for their substantive work, much like pole position offers an advantage in a Formula 1 race.


Others focus on instances like the New York case in which a litigator submitted to the court a ChatGPT-generated brief littered with fake citations to authority. An analysis of the media coverage of this incident by the public relations and crisis communications firm Furia Rubel Communications—using AI tools—found that within a month, the story had garnered more than 250 headlines and 700 million views. People were exposed to this negative side of lawyers’ use of generative AI because the lawyers used ChatGPT in a way it was never intended to be used.


When generative AI produces information with no factual basis, the output is called a hallucination, a byproduct of the predictive design of this technology, which is based on numerical algorithms.


Hallucinations may “come, amongst others, from the principle of these LLMs, because the primary task remains the prediction of the next ‘token,’ or the next word in the string of words,” according to Éva Kerecsen, chief legal counsel at the global automotive software company NNG LLC, headquartered in Budapest.
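To make that mechanism concrete, here is a minimal, invented sketch of next-token prediction in Python. The two-word contexts and probabilities below are fabricated purely for illustration; real LLMs compute such distributions with neural networks over enormous vocabularies, but the core loop (pick a likely continuation, append it, repeat) is the same idea.

```python
# A toy next-token predictor. The probability table is invented for
# illustration; no real LLM works from a hand-written lookup like this.
toy_model = {
    ("The", "case"): [("of", 0.6), ("was", 0.4)],
    ("case", "of"): [("Smith", 0.5), ("Jones", 0.5)],
    ("of", "Smith"): [("v.", 0.9), ("and", 0.1)],
    ("Smith", "v."): [("Jones,", 0.7), ("Brown,", 0.3)],
    ("v.", "Jones,"): [("123 F.3d 456", 0.8), ("(2020)", 0.2)],
}

def generate(prompt: str, max_steps: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_steps):
        context = tuple(tokens[-2:])   # condition on the last two tokens
        candidates = toy_model.get(context)
        if not candidates:
            break
        # Greedy decoding: append the statistically likeliest next token.
        # Nothing here checks whether the output describes a real case;
        # the model only maximizes plausibility, hence "hallucinations."
        tokens.append(max(candidates, key=lambda pair: pair[1])[0])
    return " ".join(tokens)

print(generate("The case"))
# Prints: The case of Smith v. Jones, 123 F.3d 456
```

The result reads like a citation because a citation was the likeliest continuation, not because any such case exists.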



A Measured Assessment

Given the above, a measured assessment is crucial. Attorneys should not reject a tool out of hand if it can improve their productivity. Nor should they compromise their ethics because they are caught up in the hype.

“People can sometimes overlook important values in their race to get ahead,” Kerecsen said, “such as security, privacy, and basic human rights. Today, we’re seeing a sort of ‘digital gold rush’ that raises a lot of legal and ethical questions. As legal professionals, we have an important role in ensuring that these human values and rights are not forgotten amid rapid technological advances.”


In addition to serving as NNG’s chief legal counsel, Kerecsen has a private technology practice. These dual roles give her a keen awareness of the risks and benefits of AI.


Kerecsen divides the risks into three categories:

  1. Security – everyone wants to know AI tools are safe.

  2. Trade secret and confidentiality issues.

  3. Copyright and intellectual property issues.

On the positive side, Kerecsen sees three main benefits of AI tools developed specifically for the legal market:

  1. Efficient review of documents.

  2. An efficient filing system that facilitates due diligence, filters documents intelligently, and responds to specific questions.

  3. Quick preparation of summaries for meetings.

European Regulation

Of course, one way to balance the risks and benefits is sensible regulation. In June 2023, the European Parliament approved its negotiating position on the European Union Artificial Intelligence Act, an important step toward making the act law in the EU.


Kerecsen said the AI Act is a way to “protect human values and European citizens.” If the act becomes law, it will affect not only companies based in Europe but any company providing AI services to European citizens.


The AI Act uses a risk-based approach, placing AI tools into four different categories.


First, AI tools considered to pose unacceptable risk will be banned in Europe. For example, Kerecsen described the social credit scoring system operating in China, which may affect rights such as travel, education, and financial credit. Such practices are “going to be banned in Europe to protect human rights and human values. I absolutely agree with this approach. The regulatory body must protect the citizens.”

The second category of the AI Act covers tools considered high-risk. Kerecsen explained that, depending on the sector, users and developers in this category will have to comply with specific requirements.

“The management and operation of critical infrastructure such as transportation or healthcare fall into this category,” Kerecsen said. For example, if drivers opt to use systems based on AI algorithms—say, a self-driving car—the software developer and the car manufacturer will need to conduct an impact assessment as well as register the AI system.

Some NNG products may fall into this category, noted Kerecsen. “This is going to be a huge administrative and evaluative undertaking.”


The third category is limited risk, which includes LLMs like ChatGPT. For this category, “the main rule is transparency,” Kerecsen said. “Those who develop and use ChatGPT [or other similar generative AI tools like Jasper or Bard] have to disclose that the content was generated by an AI tool and have to publish summaries of the copyrighted data used for the training.”


This category is likely to give rise to a whole other conversation as governments, companies, and creative professionals grapple with the issue of fairness in the context of the copyrighted materials AI developers use to train their large language models.


Kerecsen explained that those who do business in Europe and want to use limited-risk AI will face a complex situation: “On top of the AI Act, the companies must comply with the GDPR (General Data Protection Regulation). For example, if we put client data into ChatGPT, we not only have to comply with the transparency requirement, we have to properly document the legal basis for handling the data and comply with the recommendations of the bar associations.”


The fourth category of the AI Act is minimal or low risk, which carries no additional obligations.
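For quick reference, the four tiers described above can be summarized as a simple lookup, sketched here in Python. The key names and wording are an illustrative paraphrase of this article, not the official text of the AI Act.

```python
# An illustrative paraphrase of the AI Act's four risk tiers as
# described in this article; not the official legal text.
AI_ACT_RISK_TIERS = {
    "unacceptable": {
        "example": "social credit scoring systems",
        "obligation": "banned outright in the EU",
    },
    "high": {
        "example": "critical infrastructure (transportation, healthcare)",
        "obligation": "impact assessment and registration of the AI system",
    },
    "limited": {
        "example": "generative AI tools such as ChatGPT",
        "obligation": "transparency: disclose AI-generated content and "
                      "publish summaries of copyrighted training data",
    },
    "minimal": {
        "example": "most other everyday AI applications",
        "obligation": "no additional obligations",
    },
}

# Example lookup: what does a ChatGPT-style tool owe under this reading?
print(AI_ACT_RISK_TIERS["limited"]["obligation"])
```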


Human Evaluation

Despite the complexity, Kerecsen believes generative AI can be used responsibly and as a useful tool. “It can research, summarize and help lawyers write an email or memo on a certain topic.”

These tools may also help clients conduct their own AI-facilitated research and come into a meeting well prepared, as long as they are aware of the technology’s limitations and risks.


On the other hand, clients may reach erroneous conclusions. “This may be due to a hallucination or the lack of legal structure and thinking,” Kerecsen said. “You may have the wood and other materials, but if you are missing structural elements of a house design, the house will collapse.”

This analogy also emphasizes the importance of human evaluation. While some worry that legal jobs will disappear with the onslaught of generative AI [and other AI tools], the fact-specific nature of the work and the need for human eyes likely mean this anxiety is ill-founded.


“Some people may prepare and research using AI, but these tools will never win a case for you,” Kerecsen said. “They will never negotiate a contract in a way that you would like. Large language models can be great tools to facilitate our everyday work, but they will never understand human reasoning and humanity, feelings, and emotional intelligence that play into a case and its outcomes.”


Any practicing attorney knows the human element is critical. AI can’t build client relationships. AI cannot opine. AI cannot reason. AI has no empathy.

Often, a client’s goals and motivations stem from human feelings. “Sometimes it’s anger,” Kerecsen said. “Sometimes they just want to win or be made whole. It’s important for the lawyers to understand what’s behind the dispute.”

Moreover, an attorney may need to counsel a client to settle a high-risk case, prudent advice that a machine is unlikely to provide.

The human touch makes for better lawyers with distinctive characters. While it may only be a matter of time until a company comes out with a large language model that’s built on emotional intelligence, Kerecsen questions whether AI can ever have true emotional insight or connection.


Leveraging Large Language Models

As an initial matter, attorneys must remember that ChatGPT and its counterparts were never made to be case research tools.


“These are general tools, not meant for legal research,” Kerecsen said. “Everyone needs to test their capabilities, their functionalities, and the potential legal implications.”

Attorneys need to acknowledge the limitations of large language models and always verify the information, but not everything an attorney does has to meet the strict standards of a court. A tool like ChatGPT can be an enormous time saver for tasks that are not substantive client matters.

If, for example, an attorney has only a short time to prepare for two speaking engagements, each tailored to a unique theme and audience, AI tools can help. While they may not generate much usable copy, the tools can give the speaker ideas – a starting point. And most find it is much easier to edit a draft than to start with a blank page.


Policies and Guidance Are Key

Given that large language models were trained using information scraped from the internet, flaws are bound to surface. Kerecsen advises companies and law firms to establish specific generative AI use policies. Firms should follow the recommendations of their local bar association regarding attorney-client privilege and rules on confidentiality and privacy.

Unfortunately, law firms—especially large ones—tend to be reluctant to adopt new policies because the process can be contentious. The partners need to remind themselves that it’s no different than a policy regarding the use of email or social media. These guidelines are necessary to protect the firm, its clients, and its employees.

Without such a policy, bad things can happen. Kerecsen offers Samsung as an example. When the company’s employees used ChatGPT without guidance, they inadvertently exposed trade secrets. Samsung reacted with a temporary ban. While Kerecsen doesn’t think a ban is the right approach, the incident demonstrates the need for internal regulation of AI.

Some law firms have banned the use of large language models and other tools rather than doing the work necessary to adopt them safely, a measure that may prove short-sighted.

To enjoy the benefits of AI while deftly managing the risks, the legal profession needs to stay abreast of the latest developments and best practices. Professional social media platforms like LinkedIn are often a good source of information on the latest tools and policy discussions. Attorneys can also attend conferences and webinars and listen to podcasts.

A firm’s service providers may be good sources of assistance. Furia Rubel, for example, recently launched a generative AI resource center that is updated daily to reflect the rapid pace of change.

“I’m excited about AI,” Kerecsen said. “I feel the enormous potential and I see how it facilitates my everyday life. I’m so happy to be able to use it, but I also see the dangers and risks.”

 

About the Author(s)


Gina F. Rubel, Esq. supports corporate and law firm leaders with high-stakes public relations, reputation-changing initiatives, crisis planning and incident response support, including high-profile litigation media relations. As the CEO and general counsel of Furia Rubel Communications, Inc., she leads the agency to support professional service firms. She is the host of the On Record PR podcast and the author of Everyday Public Relations for Lawyers, 2nd Edition. Email: gina@furiarubel.com; Website: www.furiarubel.com; LinkedIn: https://www.linkedin.com/in/ginafuriarubel/


Éva Kerecsen has been chief legal counsel at NNG LLC for almost 10 years. She is an experienced legal professional with a passion for ensuring transparent and compliant operations in the dynamic world of technology.


Éva oversees approximately 800 legal issues per year, spanning copyright, e-commerce, IT law, employment law, commercial law, and data protection.


Éva holds a Bachelor of Law from Pázmány Péter Catholic University, Faculty of Law, and a master’s degree in information technology law from the University of Pécs, and is currently studying data protection law at Eötvös Loránd University, Faculty of Law. With a keen focus on bridging the gap between law and technology, Éva is a sought-after expert on the intersection of legal issues and technological advancements. Email: eva.kerecsen@nng.com

