Despite all the hype over ChatGPT and generative AI, the fact remains that lawyers largely have not adopted these tools into their typical workflows.
Research released last month from Thomson Reuters showed that just 3% of respondents at law firms are currently using generative AI, while a third (34%) said their firm was still considering whether to use generative AI for legal operations.
The survey – which polled 440 lawyers at large and mid-sized firms in the U.S., UK and Canada – also found that attorneys’ trust in the technology will be key to broader adoption of AI.
As the report noted: “A large portion of respondents had concerns with use of ChatGPT and generative AI at work — 62%, which included 80% of partners or managing partners. Further, many of the concerns voiced in our survey seemed to revolve around the technology’s accuracy and security, most specifically about how law firms’ concerns of privacy and client confidentiality will be addressed.”
Accuracy concerns about generative AI are well-founded: the technology can “hallucinate,” inventing falsehoods in very convincing language. Among other wild tales, ChatGPT has fabricated sexual harassment allegations against a law professor, as the Washington Post reported, and invented a story about James Joyce and Vladimir Lenin first meeting at the Café Odeon in Zurich, as The New York Times reported.
Inaccuracies like these don’t just undermine lawyers’ ability to trust the work product of generative AI. They also potentially negate the efficiency of deploying AI tools, since users must re-check every piece of work an AI tool has performed.
That’s exactly why BlackBoiler launched its ContextAI feature, which gives attorneys unprecedented insight into why the technology suggests each contract redline. ContextAI enhances BlackBoiler’s markup functionality by providing the reasoning behind every redline. Powered by an enterprise’s contract playbook, our automated contract markup platform spots issues and inconsistencies, creates redlines and, with ContextAI, identifies the rule that prompted each change, explains why the change was made, and shows users multiple examples of how others in their organization have edited the same or similar language in past work.
For lawyers contemplating using AI, it’s also important to understand where generative AI fits into the greater context of artificial intelligence. Generative AI – technology that can produce various types of content, including text, imagery, audio and synthetic data – isn’t necessary for automated contract review and negotiation. What has vastly improved the quality and speed of automated contract review are large language models, or LLMs.
LLMs are deep learning models that can recognize, summarize, translate, predict and generate text and other content based on large datasets. BlackBoiler leverages LLMs to supercharge high-volume contract review. You can view our AI patents here.
As AI technology becomes increasingly sophisticated, the onus is on companies like ours to make AI more “explainable” and contextualized so that attorneys and enterprises can feel comfortable adopting it.