
AI Is Accelerating. Human Development Isn’t. That’s the Risk.

As the U.S. rolls out an AI Action Plan[1], an important first step toward recognizing AI as a national capability with direct implications for workforce development and education, prominent voices across the AI spectrum are debating the risks of “superintelligence.” Some, like the Future of Life Institute, argue for strong limits or outright prohibitions[2]. Others push back against those constraints. Regardless of where that debate lands, the conversation once again frames AI as the sole object of concern.


What’s missing is any acknowledgment of a familiar blind spot, one that has shaped every major technological transition: the assumption that technology evolves while humans remain essentially static.


That assumption is the real risk.


Over the past year, while public debates have focused on controlling machines, I’ve been working in a different lane: the human side of human–AI collaboration. My recent paper on “Cognitive AI” explores AI not as an autonomous intelligence to fear or constrain, but as a cognitive collaborator that can catalyze intentional neuroplasticity and support human development when used deliberately. In parallel, my work on the AI Second Brain protocol examines how intelligence is already becoming distributed across multiple AI systems, tools, and environments, with humans functioning as the integrative layer that gives the system coherence.


From this perspective, today’s AI moment looks very different from much of the media discourse. We are not waiting for artificial general intelligence, or whatever “superintelligence” might eventually mean, to arrive fully formed. We are already participating in the early stages of a hybrid intelligence system, where no single AI is “general,” and where general intelligence begins to emerge across ecosystems, provided a human remains in the loop as the executive, synthesizer, and meaning-maker.


The debate we’re having about AI would look very different if we acknowledged that the future of intelligence will not be decided by machines alone.


Intelligence Is Already Becoming Distributed


One of the quiet realities of the current AI moment is that no single model, platform, or system contains what we intuitively associate with general intelligence. Instead, intelligence is increasingly distributed across ecosystems. A person collaborating with AI might use one model for writing, another for research, another for coding, and another for visual synthesis. Context lives in documents, memory lives in notes, planning lives in task systems, reflection happens in dialogue, and “intelligence” emerges only when a human integrates across these components.


In other words, general intelligence may already be emerging across networks of systems, with humans serving as the integrative layer. In this limited but meaningful sense, early AGI is not an autonomous machine but a hybrid system in which intelligence arises between components, and which cannot function without human participation.
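To make the pattern concrete, here is a minimal, purely hypothetical sketch of that integrative role. It is my own illustration, not the AI Second Brain protocol or any specific product; every name, function, and component in it is invented. Specialized components each handle a delegated task, while a human supplies the judgment and the shared context that make the outputs cohere.

```python
# Illustrative only: a human-in-the-loop "integrative layer" routing tasks to
# specialized (stand-in) AI components and deciding what becomes shared context.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DistributedIntelligence:
    """Hybrid system: specialized components plus a human who integrates them."""
    components: Dict[str, Callable[[str], str]]               # e.g. "writing", "research", "coding"
    shared_context: List[str] = field(default_factory=list)   # documents, notes, task lists

    def delegate(self, role: str, task: str) -> str:
        # Each component sees only its own task; continuity stays with the human.
        return self.components[role](task)

    def integrate(self, human_judgment: Callable[[str], bool], outputs: List[str]) -> List[str]:
        # Coherence emerges only when a human selects, connects, and retains results.
        kept = [output for output in outputs if human_judgment(output)]
        self.shared_context.extend(kept)
        return kept


# Usage with stand-in components (real systems would be separate models or tools):
system = DistributedIntelligence(components={
    "research": lambda task: f"[research notes on: {task}]",
    "writing":  lambda task: f"[draft paragraph about: {task}]",
})
outputs = [system.delegate(role, "distributed intelligence") for role in ("research", "writing")]
kept = system.integrate(lambda output: "notes" in output or "draft" in output, outputs)
print(system.shared_context)
```

The point of the sketch is structural: remove the human call to integrate, and the components still produce outputs, but nothing accumulates into a coherent whole.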


The Human as Coherence Layer


In today’s AI ecosystems, humans play a role that is rarely named explicitly. Without a human in the loop, current AI systems cannot meaningfully coordinate with one another. They do not share persistent identity, long-term narrative continuity, or embodied stakes in outcomes. When humans are removed, coherence collapses. This suggests something important. Humans do not have to be simply “tool-users” or, at a more advanced level, merely “collaborators” with AI. We can become required components of the intelligence system itself. That runs counter to the assumption, implicit in some recent commentary in MIT Technology Review, that humans become optional once systems are sufficiently capable[3].


The Challenge for the Human


Much of the fear surrounding superintelligence assumes a future in which machines evolve rapidly while humans stand still. If AI advances while human cognition, perception, and sensemaking remain underdeveloped, the gap becomes destabilizing. That scenario is indeed dangerous. But the danger does not come from intelligence itself. Humans lose agency not because machines become conscious, but because humans fail to evolve the cognitive capacities needed to work with complex systems.


Human cognition is not fixed. Decades of neuroscience demonstrate that the brain reorganizes itself in response to sustained patterns of attention, reflection, and engagement. Neuroplasticity is not a metaphor. It is a biological fact. What AI introduces is a new kind of cognitive environment. When used intentionally, AI can surface assumptions, expand perspective, challenge mental models, and accelerate learning. When used passively, it can also reinforce dependency and shallow thinking. The difference is not the tool. It is the intentional practice of the human using it.


This is the conversation missing from both the AI Action Plan and the superintelligence prohibition movement. Neither seriously addresses how humans must train their minds, rewire their brains, and evolve their sensemaking capacities to remain viable participants in an intelligent ecosystem.

My recent paper offers an approach to this kind of cognitive development. Read it HERE.


AI is not the end of human intelligence. The question for 2026 is not whether AGI will arrive. It is whether humans will rise to meet the intelligence that is already forming around them.


Sources:

