It scares me sometimes to think about the big decisions I’ve made on gut feel, and will probably continue to make by relying on my instincts.
How many research papers do we need to read, or edicts from top-class CEOs do we need to hear, before we get the message that in every organisation it all comes down to the people?
Spectacular recent developments in Artificial Intelligence (AI) are feeding many fantasies in the world of cybersecurity. Claims on the topic range from the looming obsolescence of even the best defence solutions to an open war between AIs developed by rival tech powers, states included. For executives, it can feel very difficult to prepare for what’s ahead.
It’s another milestone in the race to artificial superintelligence:
A study conducted by legal AI platform LawGeex, in consultation with law professors from Stanford University, Duke University School of Law, and the University of Southern California, pitted twenty experienced lawyers against an AI trained to evaluate legal contracts. The resulting 40-page report details how the AI outperformed top lawyers in accurately spotting risks in everyday business contracts.
Part 4 — What do we want?
Part 3 — Who to trust?
Part 2 — Who is accountable?
Part 1 — Who is in control?
When the world first began tinkering with artificial intelligence and machine learning, the technologies were hardly a threat. Then Deep Blue and AlphaGo arrived, and the world began to realize that, in certain well-defined situations, AI could be smarter than human beings. Then AlphaGo Zero came along, and nightmares of world dominance re-emerged.
We’ve all seen films in which Artificial Intelligence replaces humans en masse, and much debate swirls around just how much of this could ever become reality.
Meet Erica, perhaps the world’s most advanced human-like robot yet. She demonstrates that we may not be too far away from silver-screen-style AI workers.