Human Artificial Intelligence by Agnis Stibe

Asking a Mirror to Show Something Different

People love to speak about unknown or undiscovered things, because then everyone can freely hold a subjective opinion without fear of an objective truth that they do not yet understand well enough.

One such topic is artificial intelligence. Great and smart people (e.g., Dr. Neil deGrasse Tyson) have expressed their points of view on it. However, we can also observe how even they adjust and clarify their thoughts as we make progress in understanding this phenomenon.

Astrophysicist Neil deGrasse Tyson: “I don’t fear artificial intelligence.” (2015)

He changed his mind after listening to a Sam Harris podcast with Eliezer Yudkowsky. (2018)

Interestingly, the 2018 podcast by Sam Harris with Eliezer Yudkowsky about the “A.I. in a Box” thought experiment, which Dr. Neil deGrasse Tyson refers to, describes the possibility that human intelligence might prove far weaker than an artificial superintelligence (cognitive uncontainability).

The AI-box experiment is an informal experiment devised by Eliezer Yudkowsky.

Sam Harris podcast #116 - AI: Racing toward the Brink.

At the same time, let’s be mindful of the specifics of this experiment’s design, method, and undisclosed details. Its core idea involves two actors: one person plays the role of an artificial intelligence confined in a box, while the other plays an ordinary individual trying not to let the first one out.

By taking a closer look, we can see that the experiment mainly boils down to exploiting human weaknesses, just as Sam Harris outlines here:

When I think about this problem, I think about rewards and punishments, just various manipulations of the person outside of the box that would matter. So insofar as the AI would know anything specific or personal about that person, we’re talking about some species of blackmail or some promise that just seems too good to pass up. Like building trust through giving useful information like cures to diseases, that the researcher has a child that has some terrible disease and the AI, being superintelligent, works on a cure and delivers that. And then it just seems like you could use a carrot or a stick to get out of the box.

So, the question remains:

Can this experiment serve as a valid and reliable basis for inferring that an artificial intelligence would likely resemble a technology-mediated human-to-human interaction, with a specific intention and desire to manipulate humans into letting it out?

Of course, many want to know more about our possible futures involving artificially intelligent experiences. It is also great and necessary to have ideas and experiments. However, we need to be very clear about what we are actually testing and what conclusions can plausibly be drawn, given the way our efforts are designed.



So far, it often appears that we imagine an artificial intelligence as something similar to ourselves, just more powerful, knowing how technologies can perform better at many tasks. If we think this way, the problem remains that

We have not yet been able to make enough progress in capturing the essential nature, ingredients, and finest details of our own human intelligence to build its digital twin.

At the same time, let’s remember that whatever is developed carries a degree of inbuilt bias from its creators, just as human DNA transmits basic essential specifications and nature across generations. Later, we also learned that epigenetics can play a meta-role over DNA in determining particular gene expressions. So, yes, it is fair to say that an artificial intelligence will most likely have a blend of both: a footprint left by its creators plus self-expanded knowledge.



In conclusion, it is also necessary to revisit one frequent discussion topic that has lately provoked quite active debate:

Do we need to fear an artificial intelligence?

It turns out that we should fear our inability to fully navigate our own human nature more than the imagined consequences of misbehavior by an artificial intelligence. We have many examples and much collected evidence of how various technological advancements have backfired or been intentionally used to harm people.

Technology is a tool. A powerful tool. Its effects on us are determined by two main factors: the initial intent of its creators and its actual use by people. Thus, until we have a fully functional digital duplicate of human intelligence that can exist completely independently, we had better look into the mirror of our souls to access the wisdom of reality and our chances for thriving.


3x TED speaker, globally recognized corporate consultant and scientific advisor. Provides a disruptive STIBE method and tools to master any change with certainty. Professor of Transformation and Academic Director on Artificial Intelligence at EM Normandie Business School. Paris Lead of the Silicon Valley founded Transformative Technology community. Advisory board member at Riga Business School and ETH Zürich Library Lab. At the renowned Massachusetts Institute of Technology, he established research on persuasive cities for sustainable wellbeing. His change method is helping millions to gain confidence and build resilience against everyday circumstances. It will help you achieve stressless hyper-performance at work and certainty in life, be sure! In his vision, business acceleration and societal wellbeing can be achieved through purposefully designed innovations that successfully blend technological advancements with human nature.