It’s another milestone in the race to artificial superintelligence:
A study conducted by legal AI platform LawGeex, in consultation with law professors from Stanford University, Duke University School of Law, and the University of Southern California, pitted twenty experienced lawyers against an AI trained to evaluate legal contracts. Their 40-page report details how AI has overtaken top lawyers in accurately spotting risks in everyday business contracts.
The human participants were made up of law firm associates, sole practitioners, in-house lawyers and general counsel.
The LawGeex AI, trained using machine- and deep-learning technologies, was asked to review five non-disclosure agreements (NDAs), a very common kind of commercial document.
The NDAs contained 30 legal issues, and the human lawyers achieved an average accuracy of 85% in identifying them.
Only the top lawyer managed to match the accuracy level of the AI at 94%.
The human lawyers took on average 92 minutes to review the documents, compared to 26 seconds for the AI.
AI can replace expertise, but…
While these are impressive results, we should remember that humans still have a role to play, though the role and the path to it will change.
The study shows that AI can catch up with and overtake experts in some areas, including those previously thought a “safe” preserve of humans.
At the same time, reviewing NDAs is a highly logical task, something computers are good at, especially when trained on a huge set of similar, highly logical data.
Marvelling at these results, we must not forget that without such data sets, AIs cannot (yet) match human intellectual and creative abilities, because they lack the human ability to “infer” from one situation to another.
Ben Dickson, from TechTalks, explains this human ability by reference to video games.
As a human, you know when you start playing a game that falling into a pit or running into a hedge is likely to result in points lost.
You bring your real-world experience to bear.
Even smart AIs cannot do this (yet) – they have to learn all the rules. Granted, they can learn them very fast, but a video game is a limited environment, and AI still has to learn every new game from scratch.
That means they need examples and, in most cases, some human support to steer their learning in the right direction. This raises the question of how expertise is built.
The challenge of building expertise
If AI still needs humans in order to learn, but AIs perform many of the basic tasks that were the traditional route to gaining expertise (e.g. document and case-law review for lawyers, accounts preparation for accountants, surveys, models and simple design tasks for architects and engineers) – HOW will humans gain the necessary expertise?
Especially if they still operate in traditional, hierarchical organisations?
Helping their people to gain the learning experience and grow their expertise will be the main challenge for professional services firms.
Creating flat structures, platforms for learning and self-managing teams to achieve true agility may just be the answer.
How is your firm responding to the challenge?
Miriam has been helping organisations for over 20 years to design and implement agile ways of thinking and working that bring real-world benefits.
As a Chartered Accountant with international consulting experience gained with PricewaterhouseCoopers, she has spent 20+ years advising organisations in Financial Services, Transport, Construction, Charities, the Travel industry and the Public Sector, delivering quantifiable bottom-line results.
As a near-term futurist, she helps CEOs and business leaders plot their course through complex, uncertain and changing environments and devise practical strategies for business success in the digital world.