Industry experts say a pending BC Supreme Court case could provide clarity and perhaps even set precedent on the use of AI models like ChatGPT in Canada’s legal system.
The high-profile case involves bogus case law produced by ChatGPT and allegedly submitted to the court by a lawyer in a high-net-worth family dispute. It is believed to be the first of its kind in Canada, although similar cases have surfaced in the United States.
“It is serious in the sense that it is going to create a precedent and it’s going to provide some guidance, and we’re going to look at it in a couple of ways,” Jon Festinger, KC, an adjunct professor with UBC’s Allard School of Law, told Global News.
“There’s the court proceedings around costs … The other part of this is the possibility of discipline from the Law Society in terms of this lawyer’s actions, and questions around … law, what is the degree of technological competence that lawyers are expected to have, so some of that may become more clear around this case as well.”
Lawyer Chong Ke, who allegedly submitted the fake case law, is currently facing an investigation by the Law Society of BC.
The opposing lawyers in the case she was litigating are also suing her personally for special costs, arguing they should be compensated for the work required to uncover the fact that bogus cases were almost entered into the legal record.
Ke’s lawyer has told the court she made an “honest mistake” and that there is no prior case in Canada where special costs were awarded under similar circumstances.
Ke apologized to the court, saying she was not aware the artificial intelligence chatbot was unreliable and that she did not check whether the cases actually existed.
UBC assistant professor of computer science Vered Shwartz said the public does not appear to be sufficiently educated about the limitations of new AI tools.
“There is a major problem with ChatGPT and other similar AI models, language models: the hallucination problem,” she said.
“These models generate text that looks very human-like, looks very factually correct, competent, coherent, but it might actually contain errors because these models were not trained on any notion of the truth, they were just trained to generate text that looks human-like, looks like the text they read.”
ChatGPT’s own terms of use warn users that the content generated may not be accurate in some situations.
But Shwartz believes the companies behind tools like ChatGPT need to do a better job of communicating those tools’ shortcomings, and that the tools should not be used for sensitive applications.
She said the legal system also needs more rules about how such tools are used, and that until guardrails are in place, the best solution may simply be to ban them.
“Even if someone uses them just to help with the writing, they need to be responsible for the final output and they need to check it and make sure the system didn’t introduce some factual errors,” she said.
“Unless everyone involved would fact-check every step of the process, these things might go under the radar, it might have happened already.”
Festinger said that education and training for lawyers about what AI tools should and shouldn’t be used for is critical.
But he said he remains hopeful about the technology. He believes more specialized AI tools dealing specifically with law and tested for accuracy could be available within the next decade — something he said would be a net positive for the public when it comes to access to justice.
BC Supreme Court Justice David Masuhara is expected to deliver a decision on Ke’s liability for costs within the next two weeks.
— with files from Rumina Daya
© 2024 Global News, a division of Corus Entertainment Inc.