A.I. Is Scaling Faster Than Our Capacity To Stay Human
As A.I. spreads rapidly across sectors, it threatens to outpace human relevance. Glen Slater warns it may drive workforce displacement while quietly eroding our capacity for independent thought, identity, and meaning.
When I tuned in to the seminar “Jung, A.I., and Psychological Integrity,” I knew to expect a Jungian point of view. Most talks I attend about A.I. end up being about efficiency or the next big app, but this one threw me off, in a good way. Instead of ROI or automation, Slater asked us to pause and consider how all this innovation might actually be shaping us, deep down: how it is affecting us psychologically, and what that means for society.
Building on themes from his book Jung vs. Borg, Slater contends that the most consequential impact of A.I. may not be technological, but human. The change is not just in how we work, but in how we think, how we form identity, and how we assign value to ourselves and others.
What became clear from Slater’s point of view is that this is not a marginal shift; it is a systemic redefinition of human relevance in the workforce.
As A.I. systems begin to outperform humans in cognitive tasks such as analysis, writing, and decision support, large segments of the workforce may struggle to remain economically relevant. But the deeper disruption is not just about jobs. It’s about what happens to identity and social stability when work, long a primary source of meaning, status, and structure, begins to erode. It’s not just the jobs that are changing; it’s the sense of purpose that comes with them. I remember talking to a friend who recently lost his role to an automated system, and it wasn’t just the paycheck he missed; it was the routine, the camaraderie, and the feeling of being needed, all human needs that are alien to A.I.
The Convergence Of The Psychological And The Economic
The shift is already happening inside organizations: fewer people are producing more, faster. Entry-level roles, historically the training ground for skill development, are thinning out. An increasing emphasis on output over process, and a reliance on A.I. systems, is quietly replacing the development of independent judgment. Core functions across marketing, analysis, and legal drafting are disappearing now, alongside fewer roles and higher output expectations.
This won’t happen all at once. It will emerge unevenly, first in knowledge work augmentation, then in role compression. And, eventually, in full task replacement across sectors where A.I. can outperform human cognition at scale.
The result is a workforce that may remain productive in the short term, but less capable, less differentiated, and more dependent over time. For those still employed, the pressure intensifies. The expectation becomes to operate at machine speed: constant responsiveness, compressed timelines, and minimal room for reflection. In practice, this reduces the ability to sustain deep focus without external tools, weakens original thinking, and shifts human contribution toward managing systems rather than developing insight.
In this environment, preserving human depth is not philosophical; it’s operational. Organizations that ignore it risk building faster systems on top of a cognitively and emotionally diminished workforce.
Slater asserts that A.I. doesn’t just transform the external environment; it begins to reorganize the internal one. It challenges what Jung called “psychological integrity” (The Collected Works of C. G. Jung, Vol. 8: The Structure and Dynamics of the Psyche, ¶425): the capacity to maintain a coherent sense of self, grounded in reflection, experience, and internally generated meaning. It’s not just productivity at risk; it’s the human ability to generate judgment, identity, and purpose independent of systems.
We are not just scaling intelligence – we are deciding, in real time, what kind of humans that intelligence leaves behind.
A.I. acts as a force multiplier in an already strained environment. We are living in an attention-fractured culture where distraction is constant. A.I. accelerates this dynamic, making it easier to outsource thinking, faster to generate answers, and harder to remain engaged in the slower processes through which understanding and meaning are formed.
A workforce that loses depth doesn’t just suffer psychologically – it performs worse: less original thinking, weaker judgment, and increased dependency on the very systems replacing them.
If this continues at scale, the issue is not simply job loss; it’s unemployability. Entire categories of professionals may retain experience but lose economic relevance as systems outperform them across core tasks. Without meaningful pathways to contribute, the impact extends beyond the individual into broader social stability: disengagement, underemployment, and increasing strain on institutions built around the assumption of full workforce participation.
The Conversation Needs To Shift – A New Responsibility For Leaders
An A.I. strategy cannot be measured solely by efficiency gains or cost reduction. It must also account for diminished independent thinking, skill development, attention, and the psychological structures that support identity and meaning. These are not soft variables; they are foundational to long-term human and organizational resilience.
Slater’s warning is not overly dramatic, but it is very clear: the real risk is not that machines become more like humans, but that humans begin to function more like machines: faster, more efficient, but less reflective, less grounded, and ultimately less capable of sustaining the systems we are building. The challenge ahead is to advance A.I. without eroding the psychological integrity that makes human contribution possible in the first place.
Looking ahead, I keep coming back to a question that’s been nagging at me since this seminar: How do we move forward with A.I. without losing the very things that make our work, and our lives, feel meaningful? I don’t have all the answers, but I know this is a conversation we need to be having, both in boardrooms and over coffee with friends, now.
Let’s continue the conversation. What are your thoughts?
Charlé-John Cafiero is the founder of CJC Strategists, a strategic marketing and communications consultancy. With more than 35 years of experience in technology and communications innovation, he helps brands and organizations navigate disruptive shifts across media, advertising, public affairs, and digital transformation. Drawing on experience spanning multiple eras of technological disruption, he now helps brands navigate the A.I. revolution, focusing on brand reputation and creative strategy to position companies, as well as workforce relevance, human-machine collaboration, and ethical leadership.
# # #