By Nakkul Mahajan | 06 March 2026
As AI systems grow more capable, the ambition surrounding them is shifting. The goal is no longer just automation but the emulation of the human mind. From Google’s DeepMind to Meta’s superintelligence labs and OpenAI’s pursuit of superaligned AGI, the frontier has moved from solving tasks to reproducing elements of human cognition such as structured reasoning, contextual judgment, emotional fluency, and awareness.
These systems can already write stories, build pitch decks, and hold conversations. They may soon generate and execute long-horizon strategies autonomously. But one question remains: do they truly understand, or merely mimic understanding? In chasing human-like outputs, the industry may be confusing performance with understanding, overlooking the deeper challenge of context, intuition, and genuine awareness.
In 1950, Alan Turing proposed a practical way to think about machine intelligence: if a computer could hold a conversation so well that we couldn’t tell it apart from a human, perhaps that would be enough to call it intelligent. But his test focused only on what we see from the outside. It judged the answers, not what was happening within. A machine might sound human without understanding anything at all, without insight, reflection, or awareness. It could follow patterns and rules so convincingly that we mistake imitation for genuine intelligence.
So are we truly heading toward general intelligence, or just getting better at creating the illusion of it?
What may still be absent in these systems is a genuine sense of consequence: the capacity to anticipate how decisions propagate beyond the immediate data, shaping incentives, institutions, and downstream outcomes. The deeper limitation isn’t access to information; it’s reliable judgment under uncertainty, with the discipline to verify, to doubt, and to explain its reasoning in ways we can audit. We keep humans in the loop for a simpler reason: accountability. High-stakes decisions aren’t just computations; they’re commitments. Someone must own the trade-offs, justify the action, and bear responsibility when reality pushes back.
That commitment is shaped by lived experience. Our choices are filtered through years of successes, failures, and quiet traumas that colour how we perceive risk. Two people can examine the same facts and imagine very different futures because their histories shape what they fear and what they value.
Sometimes that experience goes further. A person may believe in a future that appears statistically improbable, even irrational — and still pursue it. Sustained conviction can reshape outcomes. Persistence can turn what once seemed unlikely into something real. We still cannot explain how such conviction takes root in the mind, or why it survives uncertainty.
Beneath all our judgments, intuitions, and convictions lies a structure we barely comprehend.
The brain remains one of the least understood organs in the human body. Roughly 86 billion neurons, linked through hundreds of trillions of synapses, generate thought, memory, imagination, and identity — yet we cannot reliably predict which connections will fire at any given moment to produce a new idea or a sudden shift in perspective. We can map pieces of the machinery — plasticity that rewires with experience, synchrony that binds distant regions in real time, and the network dynamics that shape how signals cascade across the system — but we still do not know why any of it is accompanied by an inner point of view at all.

Theories such as Global Workspace Theory and Integrated Information Theory attempt to describe how consciousness might emerge from distributed activity, yet they stop short of the deeper mystery: how physical processes in the brain become lived experience. Why do certain neural patterns lead to the feeling of being “you,” to seeing the colour red, or to remembering a childhood moment? Why does one burst of activity produce sudden clarity, and another a gut conviction that defies the data? These deeply personal, first-hand experiences are known as qualia, and they lie at the heart of what’s often called the “hard problem” of consciousness (Chalmers, 1995). We have come a long way in understanding how the brain processes information, but we still do not know how or why that processing translates into actual feelings.
Neuroscientist David Eagleman famously remarked: “We are only at the foot of the mountain in understanding the brain.”
Even if AI someday imitates the human mind perfectly, imitation alone wouldn’t prove experience. It might speak with elegance, reason with precision, and mirror emotion, and still offer no evidence of an inner point of view: no qualia, no first-person “being.” Accountability explains why we keep humans in the loop; consciousness explains why we still feel the difference. Human judgment isn’t just pattern-matching — it’s meaning-making, shaped by consequence, imagination, and the desires that guide our choices.
I’ve seen this firsthand in Investment Committee meetings. The narrative is solid, the market timing checks out, the discussion is moving toward a “yes” — and then someone pauses. They don’t introduce new data. They ask something that reframes what’s already there: what happens to this thesis if the regulatory window closes six months early? The facts don’t change, but the room does. That instinct — doubting an almost airtight case, and imagining what the framework missed — is hard to formalize.
Which raises a question worth sitting with: are we racing toward fully autonomous systems without first grappling with what makes intelligence more than output? The path forward may require more than bigger models and longer context windows. It may require a deeper reckoning with the mind we’re trying to emulate — how it produces meaning, how it generates judgment, how raw information becomes something a person is willing to stake a reputation on. If we’re building systems meant to think, it’s worth pausing: we still can’t explain how neural activity gives rise to an inner point of view — let alone how to engineer it. We’re the architects of these systems, and until we understand our own cognition, we may keep building increasingly capable machines while remaining uncertain about the nature of what we’re building.
One day, we may look back and realize the greatest AI breakthrough was learning what it means to be human.