David Wakeling, head of London-based law firm Allen & Overy’s markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system’s developer, to run a small experiment. A handful of his firm’s lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients.
The trial started small, Wakeling says, but soon ballooned. Around 3,500 workers across the company’s 43 offices ended up using the tool, asking it around 40,000 queries in total. The law firm has now entered into a partnership to use the AI tool more widely across the company, though Wakeling declined to say how much the agreement was worth. According to Harvey, one in four of Allen & Overy’s lawyers now uses the AI platform every day, with 80 percent using it once a month or more. Other large law firms are starting to adopt the platform too, the company says.
AI’s potential to disrupt the legal industry has been forecast multiple times before. But the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.
Generative AI is having a commercial moment, being touted as the future of search, sparking legal disputes over copyright, and causing panic in schools and universities.
The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.
“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”
But the problems with current generations of generative AI have already started to show. Most significant is their tendency to confidently make things up, or “hallucinate.” That is problematic enough in search, but in the law, the difference between success and failure can be serious and costly.