Future of AI in Law - BC Law Society Podcast brief
The concept of artificial intelligence (AI) has fascinated humanity since the term was coined at Dartmouth College in 1956.
Today, AI permeates various fields, including the practice of law. In a June 2024 episode of LawCast BC, hosted by Vinnie Yuen, esteemed legal professionals Jon Festinger, KC, and Robert Diab delved into the implications of AI for the legal industry.
Festinger and Diab are both faculty members at Thompson Rivers University, where they are expected to build a new course on the intersection of AI and the law.
Festinger has been at the intersection of media, law, and technology for decades. He recalls incorporating rudimentary AI concepts into his courses on video game law as early as 2005. Jon describes the evolution of AI in three waves: the initial phase involving basic AI in video games, a second wave driven by data and algorithms, and the current era marked by advanced generative AI tools like ChatGPT.
Reflecting on AI’s historical context, Festinger said, “It belongs with the invention of the clock, in a sense, the invention of time as we know it, mechanical time, the ability to measure time, you know, has completely transformed our world.”
Diab has similarly been watching AI's impact on the legal field. Although he has yet to teach a dedicated course on AI, his scholarly work and curiosity about its potential influence on legal practice and education keep him deeply engaged.
He told Festinger and LawCast host Vinnie Yuen that AI is already greatly affecting the way students are beginning to learn how the law works. He said the technology is promising but still in its early stages, pointing out that most tools today are not tailored specifically to legal practice in particular jurisdictions. And since the law varies widely from jurisdiction to jurisdiction, this is an important criterion that shows where AI is currently lacking.
In the podcast, Festinger made it clear that AI must be used sparingly and responsibly in legal practice. He said one great use of it would be leveraging AI to prompt information or ideas that otherwise may not have emerged, aiding in horizontal thinking.
There are many examples of AI usage that is unsavoury or downright criminal. Some misuse is clearly unethical and illegal, such as cloning someone's voice to scam money out of their grandparents; other, more subtle misuse can also become problematic.
Host Vinnie Yuen asked if there are any good examples of situations where AI has led to legal trouble for those using it. Festinger referred to a civil case in which an Air Canada customer asked a chatbot policy questions and received incorrect replies about the issuance of refunds. Air Canada was eventually forced to concede that, yes, it must honour a refund policy its own chatbot seemed to have invented on its own.
According to Festinger, today's AI chatbots should be regarded as "an irresponsible 14-year-old." Users should never treat AI like a search engine, since its near-instantaneous, authoritative-sounding answers can completely mislead someone who places too much faith in chatbots like ChatGPT.
If this can happen to a company as large as Air Canada, it is safe to assume lawyers will be held liable for the safe and responsible use of AI tools in their own practice areas.
Diab said the effectiveness of AI tools heavily depends on the user’s expertise in the relevant area of law, noting, “Your effectiveness with these tools is really going to depend on how well you already have internalized the area of law you’re working with.”