Business systems improvement services and fractional/virtual, on demand Chief Information Officer - Wellington and New Zealand wide

Is humanity intelligent enough for artificial intelligence?

Sun May 03 2026
Thoughts from a Chief Information Officer: What is AI? What is intelligence? Is humanity, in the aggregate, intelligent enough to use it wisely?
I watched a reel on Mini Philosophy by Jonny Thompson that deeply resonated with me on this subject. To summarise, Jonny talked about AIs being fed the breadth and depth of all human knowledge yet, according to Polanyi's paradox, still not knowing everything a human knows.
The philosopher Michael Polanyi argued that there is a difference between explicit and tacit knowledge. Explicit knowledge is generated by us; it's what we feed our AI algorithms. Tacit knowledge is what we know but cannot explain. It might be a cognitive ability, intuition, gut feeling or vibes.
The Greeks called this phronesis: the practical wisdom of knowing what is right or wrong without knowing entirely why or how we know it.
LLMs can't be trained on vibes, only on the explicit knowledge we have produced thus far. An LLM can only know a fraction of what humanity knows; you cannot put phronesis and practical wisdom in a database.
My take is that AI, be it generative, machine learning, agentic, etc., is not intelligent in human terms; it's a very high-powered, very fast pattern-matching machine operating on massive amounts of data, whose results are the statistical average of what it matches, so by definition its output regresses to the mean.
AI is just a prediction engine. Okay, that prediction can be very accurate, but it's questionable whether you could define that as "intelligence".
This brings me to think about the definition of intelligence and whether that definition needs to be refined. If we consider the scale of intelligence across humanity, it runs from total geniuses to the intellectually challenged; it's a very big range.
I think the more important factor to consider is what intelligence is used for. High intelligence does not mean that it's used for good, and the reverse is true too.
I also recently watched a reel on democracy which highlighted that a key part of democracy is giving equal weight to everybody's opinion, regardless of where they sit on the intelligence scale. Historically this has meant the voices of the ignorant outweighing the voices of the wise. We reward the person who shouts the loudest, who has the most charisma, who can sway the most opinions, who promises the most benefit without the means to deliver. The result is democracy delivering bads, not goods, to the society it is meant to represent. These loud voices effectively drown out wisdom.
Is that what we're doing with AI: drowning out wisdom? Given that we are building models based on explicit knowledge, which is overrun with people shouting the loudest, how do we filter out the bads and determine what the goods are? And who determines what the goods are? Because that comes down to perspective, doesn't it?
Human perspective, context and opinion vary wildly, and this shapes how we each classify goods and bads. There are generally accepted goods and bads: murder, for example, is considered bad by most of society. But there is also a huge amount of grey, where opinion materially shapes our world.
Humans are prone to mental models in which our own perception, context and opinion are correct. We all have predispositions and biases, often unconscious ones, and we tend to dismiss anything that refutes our mental models. This effectively stifles discussion, dialogue and genuine debate. New thinking, different perspectives and tangential approaches to implementing goods for humanity are diminished.
We also have the social media ecosystem that amplifies voices that are not necessarily wise. The voices shouting the loudest are often the most ignorant, yet they think they know what they're doing, while those who are truly expert and have become masters recognise that they do not know everything and are still willing to learn: the Dunning-Kruger effect. The wise therefore appear less confident and convincing than the loud voices, and wisdom is drowned out.
Added to this is the current hype cycle, with AI being de rigueur, AI-washing, and AI FOMO. This is often based on the assumption that everyone else is using AI wisely, with highly refined and accurate models, excellent prompting, and so on, which in my experience is definitely not the case. The vast majority of those using AI are just creating AI slop, and it can be convincing slop, especially with biased prompting on biased data sources and a preconceived idea of the answer. When your biased LLM and biased prompting give you the answer you wanted, that's confirmation bias: you think AI is great, but it can all be hallucination while you sit in your comfortable echo chamber.
Getting back to the tacit knowledge that Jonny talked about: to me this is where humanity can really shine. It's that aha! Eureka! moment that makes us better as a species. If we rely too much on AI without the appropriate controls, and the AI is informed by our collective knowledge that is tainted and toxic because we drowned out the wise, what results do we expect?
Coming back to my question about refining the definition of intelligence: no, I don't think we need to refine it; we need to acknowledge its diversity and act accordingly. Intelligence comes in different forms, and AI should be used with that in mind.
My catchphrase is "AI is a partner, not the answer". Have fun out there!

