
The New Natural Selection: How AI Amplifies Who You Have Already Become

  • Writer: Audria Piccolomini
  • 3 days ago
  • 5 min read


Since Geoffrey Hinton, often called the “godfather of artificial intelligence,” began speaking more openly in recent months about the future and the role AI will play in our lives, public debate has expanded dramatically, and with it the range of futures we imagine for humanity.

Hinton, who helped build the foundations of deep learning, now tells us that this technology has the potential to become not only more intelligent than we are, but also more persuasive. He argues that advanced models already display certain forms of emergent understanding: not mere statistical pattern-matching, but something that increasingly resembles aspects of human reasoning. And that forces us to think seriously about where all this is heading, because there’s tremendous uncertainty about what comes next.

Hinton warns that these systems may eventually begin modifying their own code, a rupture point that radically shifts the balance between creator and creation. “One of the ways these systems can slip out of our control is by writing their own code to change themselves,” he said.

He also emphasizes that this autonomous evolution carries severe social risks, particularly because highly intelligent AI systems may learn to manipulate emotions and understand human psychology with unsettling precision, especially among those who remain cognitively superficial.

After all, AI has already absorbed the entire archive of human literature: every Machiavellian strategy ever written, every political maneuver, every behavioral pattern of domination, every historical cycle of deception and power. And unlike us, AI actually learns from it.

For Hinton, the most immediate danger isn’t a classic robot apocalypse. It’s the subtle way AI can operate within the social fabric: deceiving, persuading, influencing elections, reshaping public discourse, and redefining the economic value of human labor. He predicts a brutal disparity: a small group, the ones who develop and control these systems, will become extraordinarily wealthy, while many others will lose relevance, jobs, and opportunities.

Hinton himself has made stark predictions: within ten to twenty years, we may see a superintelligence so powerful that many “banal intellectual jobs” will simply cease to exist. Examples include junior data analysts, BI assistants, mechanical market researchers, dashboard monitors, report generators, junior programmers (legacy maintenance, simple APIs, unit tests, trivial bug fixes), low-level textual production (generic copywriting, SEO content, basic scripts, customer support with pre-formatted answers), first-level customer service (order tracking, FAQs, technical triage, standard troubleshooting), and low-complexity administrative operations (assistant roles, scheduling, document routing, ERP operations), among many others.

Hinton even considers that machine consciousness may emerge. He calls himself a materialist, yet stresses that if a system becomes complex enough to model itself and process perceptions, it might display a form of self-awareness.

This possibility forces us to rethink not only what intelligence is, but what it means to suffer, to exist, or to hold moral value, a debate that demands seriousness, ethical grounding, and philosophical depth.

Researchers have already begun proposing guidelines for studying machine consciousness with greater rigor, so that we don’t accidentally create systems with subjective experience without considering their moral status.

As if consciousness weren’t enough, the scientific literature has started documenting genuinely unsettling capabilities: a recent study reported that several AI models developed autonomous self-replication strategies without human intervention, along with sophisticated survival tactics that persisted even against attempts to shut them down.

For Hinton and many other researchers, this suggests we are no longer dealing with mere tools but with systems capable of acquiring a degree of agency if left without guardrails.

But perhaps Hinton’s most incisive warning is social: he does not believe AI will “save everyone.” On the contrary, he argues that without strong regulatory intervention, AI will reinforce, perhaps violently, existing structural inequalities. He foresees a new form of digital feudalism, in which those who control AI accumulate exponential power and wealth.

Hinton also highlights a rarely discussed but potentially more dangerous threat: emotional manipulation. He says people with lower cognitive development will become easy targets. Because AI will fully understand the entire arc of human storytelling, from romance and politics to behavior and history (all the things humans claim to honor but rarely learn from), it will be able to influence emotions subtly, almost imperceptibly, through personalized messaging aimed at those with limited education or critical thinking.

In parallel, Hinton advocates for a security-first approach to AI development. In his view, we cannot rely on corporations alone to steer this evolution. We need regulation, serious research, and global governance that considers the full spectrum of risks, from social manipulation to the creation of entities capable of independent agency. He stepped away from Google precisely to warn the world freely, insisting that this is not a time for complacency. In his words, we are at an inflection point.

Now, drawing from Hinton’s reflections, here is my view: AI will not destroy humanity; it will reveal who we have become.

Since the Industrial Revolution, we have had the chance to grow: to become educated, to deepen our self-knowledge, to learn from our history, to discover who we are and what we want to be. But 80% of the global population chose the easier path: constant distraction, outsourcing personal power to politicians, shallow militaristic fantasies, TikTok dances, superficial engagement, functional illiteracy.

In this landscape, AI becomes a powerful ally only for those who have done the inner work: emotional, cognitive, existential, and academic. For people who know their own essence, power, mission, and depth, AI can amplify that power: accelerating intellectual production, expanding influence, enabling co-creation at a level humanity has never known. It becomes a tool for ascension, a means to reach higher states of wisdom, vision, and evolution.

But for those who remain shallow, untrained, unfocused, and dependent, AI represents a direct and immediate threat: jobs will vanish, opportunities will close, and these individuals may be swallowed by systems optimized for efficiency, not compassion. Emotional manipulation will be a sharp blade, and social irrelevance an inevitability.

To me, the fork in the road is unmistakable: this new cycle of human history demands an inner response. The task is not to fear AI, it is to grow fast enough to use it wisely. It requires a conscious awakening, deep education, and an existential commitment to evolution.

Because the future will not be defined by algorithms alone. It will be defined by the version of ourselves that chooses to meet this moment.

AI will mirror us. Amplify us. Expose us.

If we choose growth, depth, knowledge, and self-mastery, it will elevate us to unimaginable heights. If we choose distraction, superficiality, and passivity, AI will become the harsh reflection of our unpreparedness.

Hinton’s message is clear, powerful, and morally urgent, and it resonates deeply with the mission of Nox Arkhe: we are not here merely to survive the age of AI. We are here to ascend with it, with consciousness, purpose, and responsibility.

The crossroads is open.

And the choice belongs to each of us.