MY REFLECTIONS AND ARTICLES IN ENGLISH

WHAT WE BECOME WHEN WE STOP THINKING

What happens when an entire civilization outsources thinking? Discover how technology reveals—and accelerates—the greatest cognitive crisis in human history. – Marcello de Souza
________________________________________
In the early hours of an ordinary Tuesday in February 2026, an artificial intelligence finished writing the code that would give rise to its next version. There was no ceremony. There was no astonishment. The news circulated between two headlines about celebrities and a thirty-second video about protein recipes, and disappeared. On the same day, in a country in the northern hemisphere, a teenager who hadn’t spoken to anyone in months entered a school and opened fire on strangers. In the same week, the last treaty limiting the greatest destructive arsenal ever created by the species expired—and the feed kept scrolling.
In an office in some metropolis, an executive asked an algorithm to summarize, in three bullets, the two-hundred-page report explaining why his company was losing relevance—and made a million-dollar decision based on that summary. On another floor of the same building, a human resources director signed off on the dismissal of two hundred employees with the justification that “artificial intelligence will assume these functions”—even though no AI system was so much as in testing to perform any of them.
None of these events is disconnected from the others.
All are symptoms of the same fracture. A fracture that appears in no examination, that enters no quarterly report, that generates no trending topic—precisely because those who should diagnose it are already too anesthetized to perceive it.
The fracture is cognitive. And it is voluntary.

There is a kind of death that appears in no obituary. It has no date, leaves no body, provokes no funeral. It is the death of thought—and it happens every day, in silence, in front of a glowing screen, while fingers swipe and the brain obeys.
We are not talking about ignorance. Ignorance has always existed and, in its own way, always had the honesty to recognize itself as a gap. What happens now is of another nature—more sophisticated, more seductive, infinitely more dangerous. We stand before the first civilization that has unrestricted access to the accumulated knowledge of the species and that, paradoxically, thinks less than any previous generation. Not because content is lacking. Not because stimulus is lacking. Because shortcuts abound. Because cognitive laziness has disguised itself as efficiency. Because calculation has taken the place of reflection—and almost no one has noticed that the two are not the same thing.
There is an abyss between processing data and thinking. Processing data is sequencing, categorizing, returning answers within predictable patterns. Thinking is something else. Thinking requires doubt, discomfort, contradiction, rupture. It requires enduring the void that precedes every genuine idea. It requires remaining in not-knowing—there, in that territory without a map—for as long as it takes for something truly new to form. The trouble is that not-knowing has become intolerable. The pause has become a pathology. Silence has become a threat. And so we run to the instant answer, to the algorithm that has already pre-chewed the conclusion, to the machine that returns in seconds what would take us hours to build—hours that, precisely because they exist, would transform us in the process.
Because it is in the time of elaboration that thought becomes flesh. It is in the slowness of construction that something within us reorganizes, expands, recognizes itself. The instant answer delivers the result—delivers the final product. What it steals is the process. And without process, there is no transformation. There is only consumption.
Think of that executive from the beginning of this text. He is not incompetent. He is probably brilliant. Educated in the best schools, fluent in data, surrounded by cutting-edge tools. Everything that contemporary civilization could offer a mind, he has. What he lacks is exactly what no tool can give: the habit of sitting with complexity, of feeling, of reflecting, of reviewing his own trajectory, of resisting the impulse to simplify before understanding—and, above all, of knowing which questions need to be asked before going out looking for answers. The spreadsheet does not ask those questions. The algorithm does not ask them. Only a present, whole mind, willing to inhabit discomfort, does. He made a decision based on three bullets. What was on page one hundred and forty-seven—that nuance that contradicted the general conclusion, that detail that required a second reading, that uncomfortable data point that asked for reflection—simply disappeared. Not because the machine erred. Because he asked it to summarize. And summarizing, when done without judgment, is the elegant name for self-mutilation.
Now think of that human resources director. He is not incompetent either. But the decision he made belongs to a still more disturbing cognitive category: that of empty anticipation. He did not fire those people because artificial intelligence had replaced their work. He fired them because he believed it would. He acted upon a promise—not upon a reality. Researchers who have studied this phenomenon on a global scale found something that should alarm us: the overwhelming majority of layoffs attributed to AI result not from actual automation but from the expectation of automation. They are anticipatory decisions. They are bets. They are the corporate equivalent of selling the house before verifying whether the new address exists.
And here something is revealed that no trends report has dared to name: what is at stake is not a technological revolution—it is an epidemic of intellectual cowardice disguised as strategic vision. Firing because “AI will take over” is cognitively easier than facing the questions that really matter: what do we need to redesign in our processes? What competencies do we need to develop in our people? What kind of intelligence—human, artificial, or the integration between both—does this specific challenge demand? These questions take time. They are complex. They do not fit in three bullets. And so they are ignored in favor of the most seductive narrative shortcut of the moment: “AI solves.”
It does not solve. And the evidence is already there. Companies that fired entire teams to replace them with automated systems had to rehire hastily, in silence, when they discovered that AI does not operate autonomously. Consultancies that track the phenomenon project that more than half of these layoffs will be silently reversed—because the cost of discovering reality after having acted upon fantasy is always higher than the cost of thinking before acting. There is a technical name for this practice: AI-washing—the art of attributing financial decisions to the technological narrative in order to look innovative in the eyes of the market while masking management errors.
But there is something deeper in this phenomenon—and it is here that the question transcends management and enters the territory of human behavior in its most revealing dimension.
These anticipatory layoffs are not merely isolated decisions by executives. They are symptoms of something that operates beneath the surface of organizations and societies: decisional mimetism. None of these leaders concluded that they should fire because they had analyzed, with rigor and depth, the real capacity of AI to replace specific functions in their operation. Most fired because others fired. Because CEOs of larger companies announced cuts, because the dominant narrative declared that whoever does not “adopt AI aggressively” will be left behind, because the reputational cost of appearing slow surpassed the real cost of acting without foundation. This is not strategy. It is contagion. It is herd thinking dressed in a suit and tie—the same dynamic that moves financial bubbles, that feeds collective panics, that makes entire civilizations march in the wrong direction with absolute conviction. René Girard named this mechanism with surgical precision: mimetic desire—we do not desire what we evaluate, we desire what the other desires. And when this mechanism operates at the level of corporate decision, the result is not innovation—it is reactive imitation disguised as pioneering spirit.
And the market, that imperfect but revealing thermometer, has begun to notice. There was a time when announcing layoffs made stocks rise—it was read as “efficient management,” “focus on results.” That time is ending. Investors are beginning to distinguish the surgical cut from the blind amputation. And when this distinction consolidates, executives who acted out of mimetism—and not out of analysis—will find themselves in a position that no algorithm can solve: that of having destroyed irreplaceable human capital in the name of a promise they never evaluated with rigor.
We are consuming answers as one consumes fast food: we swallow without chewing, without savoring, without letting the organism recognize what it is receiving. And the result is the same—an illusory satiety that hides an ever deeper hunger. Hunger for meaning. Hunger for depth. Hunger for what no screen can deliver: the experience of having built something with one’s own cognitive hands.
And here the question ceases to be technological and becomes radically existential.
Technology was never the villain. Fire was not the villain when it burned villages—it was the same substance that illuminated caves and cooked food. The wheel was not the villain when it crushed—it was the same structure that carried people and goods. The question was never the tool. The question was always: who holds it? And, before that: what did that person make of themselves before holding it?
What we are witnessing is not the rise of the machines. It is the abdication of humans. A slow, comfortable, almost pleasurable renunciation. No one forced us to stop thinking. We chose. We chose every time we asked a machine to write what we could have written. Every time we accepted an algorithmic curation in place of our own investigation. Every time we preferred the packaged opinion to the exhausting work of building our own. Every time we fired human beings based on a projection that no one bothered to interrogate. The algorithm did not invade our brain—we opened the door, offered the sofa, and asked it to make itself at home.
And here is the paradox that should keep us awake at night: never have we produced so much knowledge about the functioning of the brain, about the mechanisms of behavior, about the architecture of emotions—and never have we known so little about what it means, in fact, to be human. We have accumulated data about synapses, mapped reward circuits, deciphered patterns of neural activation—with a precision that thirty years ago would have been science fiction. The trouble is that this fragmented knowledge, this knowing in slices, this hyperspecialization that dissects the subject into disconnected disciplinary pieces, has not made us more conscious. It has made us more efficient at describing parts without ever comprehending the whole.
We know everything about the pieces. We know almost nothing about what happens when they come together.
It is as if we had disassembled a watch with surgical perfection—every gear catalogued, every spring measured, every ruby documented—without ever having asked what time is. We confused disassembling with understanding. We confused describing with comprehending. We confused the sophistication of the instrument with the depth of the gaze.
This fragmentation did not remain confined to laboratories. It leaked. It contaminated the way we relate, how we make decisions, how we see the other and ourselves. A civilization that thinks in fragments acts in fragments. And when it acts in fragments, it produces consequences that seem inexplicable—when, in fact, they are perfectly logical within the logic of the shard.
Return to that teenager who entered the school. Researchers who studied cases like his—dozens, hundreds of them, spread across countries with radically different cultures—found something that should paralyze us: the common denominator is not mental illness, not access to weapons, not ideology. It is isolation. Deep and prolonged disconnection from any meaningful human bond. People surrounded by communication devices who have never been so alone. Hyperconnected bodies and souls entirely unmoored from their own species.
And this leads us to a dimension of the problem that is rarely named: cognitive atrophy is not merely individual—it is contagious. It is not just that each mind goes out alone; it is that extinguished minds feed the extinguishing of one another. The algorithm does not function in a vacuum; it operates within a network of mutual validation where shallow thoughts confirm shallow thoughts, where superficiality becomes social norm, where questioning has become socially more costly than agreeing. A person who stops thinking loses a capacity. But an entire community that stops thinking loses something infinitely more grave: it loses the mirror. It loses the possibility that someone, at some moment, will say what we need to hear and would never say to ourselves. When critical thought becomes the exception in a social network, in a team, in a family, in a culture—it ceases to be an individual faculty that some exercise and becomes a transgression that almost no one dares to commit.
This transforms cognitive atrophy into an epidemiological phenomenon. It is not a metaphor. It is a mechanism. Neuroscience, together with social psychology, demonstrates that our brains regulate each other—we are networked nervous systems, literally shaped by the minds with which we coexist. When the cognitive environment around us grows poorer, the cost of maintaining one’s own reflective density increases exponentially. To think well, in an ecosystem that rewards thinking fast, is not merely difficult—it is socially penalized. And it is exactly here that the spiral feeds back upon itself: the fewer people think, the more expensive it becomes to think, and the more expensive it becomes to think, the fewer people think.
This is not an exception. It is an amplification of what happens, on a smaller and less dramatic scale, in millions of lives every day. The young person who spends eight hours daily on social media and cannot sustain a ten-minute conversation looking into someone’s eyes. The adult who has five hundred contacts on the phone and no one to call at three in the morning when the floor collapses. The couple who sleep side by side, each immersed in their screen, without touching—neither in the physical sense, nor in the sense that really matters. The organization that replaces two hundred people with a technological promise and discovers, months later, that what those people did was irreproducible—because it was not merely work, it was tacit intelligence, it was situational judgment, it was human presence operating at a level that no algorithm can map, much less replace.
When fragmented thought governs nations, it reduces territories to assets, people to costs, cultures to commodities. When it governs organizations, it transforms strategy into sophisticated reaction, leadership into dashboard management, human capital into an expense line. When it governs relationships, it transforms intimacy into algorithmic proximity and presence into performance. It is the same logic operating at different scales—the same cognitive barbarism applied to domains that demand exactly the opposite of barbarism: they demand integration, nuance, and the capacity to bear the weight of what does not fit in a spreadsheet.
Each of these events—the isolated teenager, the dissolved treaty, the worker discarded before the machine that would “replace” him even existed, the very machine that self-replicates—is a consequence, not a cause. They are the fruit of minds trained to calculate without comprehending, to fragment without integrating, to react without reflecting. The world out there is the mirror of what happens in here. And the mirror does not lie.
There are three movements that define the cognitive trajectory of any human being before technology—and that define, in the end, what they become.
The first is the beginning: how someone is born into thought. Every human being arrives in the world with voracious curiosity—a hunger to understand that asks no permission, waits for no curriculum, needs no incentive. A child asks “why?” with an insistence that would embarrass any philosopher. This original impulse, this thirst for comprehension, is the most precious cognitive capital that exists. And it is exactly this that screens begin to corrode before the child even learns to tie their shoes. When we replace exploration with ready content, when we trade the question for the packaged answer, when we shorten the circuit of discovery with instant stimuli, we are not educating—we are amputating. We are cutting the root before the tree has a chance to exist.
The second is the middle: how someone develops or atrophies throughout life. Here lies the crossroads. On one side, the path of construction: using technology as an extension of a mind that already does the work of thinking—that questions, doubts, confronts, elaborates, integrates. A mind that arrives at the machine already knowing what to ask and, more importantly, already knowing enough to distrust the answer. On the other side, the path of outsourcing: delegating to the machine not merely the operational work—which is legitimate and intelligent—but the reflective work. Delegating the curation of what to read, what to think, what to feel. Delivering to the algorithm the most human function that exists: that of constructing meaning.
Observe the difference in practice. Two professionals receive the same news: their company is adopting generative AI. The first stops. Studies what the technology really does—and what it does not do. Maps which of their activities are automatable and which depend on judgment, intuition, relational context. Redesigns their role not as resistance to change, but as conscious integration. Becomes more valuable, not despite the technology, but because of the way they position themselves in relation to it. The second falls into panic—or, worse, into indifference. Accepts the dominant narrative (“AI will do everything”), does not investigate, does not question, does not reposition themselves. Waits for someone—the company, the market, destiny—to solve it for them. One is building cognitive sovereignty. The other is outsourcing their own relevance.
Whoever travels the first path uses technology and becomes more. Whoever travels the second is used by technology and becomes less. Not less productive—less human.
The third is the end: what we become. And it is here that the bifurcation reveals itself with all its cruelty—not merely in personal life, but in the rooms where the destiny of thousands is decided.
On one side, the subject who integrated technology and consciousness—who knows how to use calculation without reducing themselves to it, who navigates the digital without losing ground in the real, who converses with machines without forgetting how to converse with people. On the other, the functional sleepwalker: one who moves, who produces, who consumes, who posts, who reacts—all without ever having stopped to ask themselves why. Who traverses an entire life on the autopilot of stimuli and responses, confusing reaction with decision, impulse with will, agitation with life.
Now transport this bifurcation into an organization. To the meeting room where an executive committee decides the strategy for the next five years. The real-time dashboard is there, luminous, seductive, with its impeccable graphs. The algorithmic synthesis has already digested terabytes of data and delivered the conclusions in a palatable format. The machine did its part—with a competence no single human could equal. The question is: does anyone in that room still ask why? Does anyone question what the dashboard does not show? Does anyone distrust what the synthesis excluded in the act of synthesizing? Does anyone bear the discomfort of saying “I don’t know” in front of an entire board that expects certainties?
And here, the same pathology that generates the anticipatory layoffs reveals itself in its deep structure: it is not that these executives do not know how to think—it is that the environment in which they operate has made genuine thought an act of risk. To disagree with the algorithmic consensus has a political cost. To ask for more time to analyze has a reputational cost. To suggest that perhaps AI will not replace certain human functions has a narrative cost—because the dominant narrative has already decided that “either you adopt or you are left behind.” And so, the mimetism that operates at the inter-corporate level reproduces itself at the intra-organizational level: no one dares to say the emperor is naked, because everyone is too busy applauding the invisible fabric.
What kind of strategy is born from minds that outsourced the elaboration of the why?
The answer is in the results we see every day. Companies that optimized everything—except the meaning of what they do. Organizations that measure everything—except what matters to measure. Leadership teams that calculate risks with millimetric precision—and cannot see that the greatest risk is the collective atrophy of critical thought that no one puts in the spreadsheet. The same cognitive exile that corrodes the individual corrodes, silently, the collective intelligence of organizations that should be lighthouses. And when collective intelligence atrophies, what remains is not strategy—it is sophisticated reaction. It is high-performance autopilot. It is corporate sleepwalking with a premium badge.
Functional sleepwalking is not fiction. It is the precise name for a civilization of awake bodies and asleep minds. It is the shopping center full of people who do not know why they are there. It is the social network with billions of users who never asked who they are outside the profile they display. It is the affective relationship maintained by algorithmic inertia—the app suggests, the body shows up, the soul is absent. It is the meeting room where ten brilliant minds agree with the algorithm’s conclusion without any of them having done the work of arriving, on their own, at a different conclusion. It is the executive committee that decides to eliminate two hundred positions because three competitors did the same—without anyone having asked whether the competitors knew what they were doing or whether they, too, were merely imitating whoever came before.
And the most disturbing part: the functional sleepwalker does not recognize themselves as such. They think themselves active because they are busy. They think themselves informed because they consume content. They think themselves connected because they have followers. They think themselves strategic because they have data. They think themselves innovative because they fire in the name of technology. They think themselves alive because they breathe. The confusion between movement and life is the signature of our era.
But there is another possibility. And it does not dwell in the future—it dwells in the decision that each individual makes today, now, at this exact moment, as they read these words.
The possibility of reconnecting. Reconnecting calculation to reflection. Information to meaning. The fragment to the whole. Speed to depth. The digital to the human. The technological promise to rigorous analysis of reality. The strategic decision to the thought that precedes it—and that no machine can do in our place. Reconnecting is not regressing—it is not rejecting technology, it is not fetishizing the past. Reconnecting is the most sophisticated cognitive operation that a human being can perform: it is joining what was separated without losing the specificity of each part. It is thinking with the power of the whole without abandoning the precision of the detail.
Reconnecting is what happens when someone uses an artificial intelligence to research—and then sits, alone, in silence, to think about what they found. When someone browses the networks to expand their repertoire—and then turns everything off to feel what that provoked inside. When someone reads a news story about the world in flames and, instead of sharing it in reflexive anger, stops and asks themselves: what in me—and in the way I think—also contributes to this fire? When a leader receives the recommendation to fire half the team “because AI will take over” and, instead of signing, stops and asks: take over what, exactly? When? With what evidence? And what do I lose—something I cannot measure and perhaps cannot recover—if I act upon a promise and not upon a reality?
Because the truth that no one wants to hear is this: the world out there is a consequence of the world in here. The decisions that perpetuate violence, that dissolve treaties, that isolate young people to the point of rupture, that discard people as spreadsheet items in the name of an automation that does not yet exist, that transform entire organizations into machines that react without thinking—these decisions are not born in a vacuum. They are born in minds. Minds that were trained—or that allowed themselves to be trained—to calculate without comprehending, to fragment without integrating, to react without reflecting. And they are born, above all, in cognitive ecosystems where these minds confirm one another in their superficiality, where no one interrupts the circuit because interrupting costs too much.
To change the world without changing the mind that produces it is to change the frame of a picture that remains the same. And it is exactly this that we have been doing for decades: reforms, policies, technologies, programs, platforms—everything changes outside, nothing changes inside. The same fragmented logic produces the same fragmented consequences, now merely in high definition and with live streaming.
The revolution that matters is not technological. It is cognitive. And cognitive does not mean merely intellectual—it means behavioral, emotional, relational, existential. It means changing the way thought operates before changing what it produces. It means reconstructing, in the interior of each subject and in the fabric of the relations that constitute them, the capacity to integrate what was separated, to bear complexity without reducing it, to inhabit uncertainty without fleeing to the first available certainty—and to resist the contagion of shallow thought when it presents itself disguised as innovation, as efficiency, or as inevitability.
This is not naive optimism. Naivety would be to believe that more technology solves what technology alone will never solve. Naivety would be to expect that the machine will save us from the work that only we can do. Naivety would be to fire those who think with the promise that the machine will think in their place—and discover, too late, that thinking was the only thing the machine could never do. Genuine hope—the only one that deserves that name—is not passive. It is act. It is decision. It is the most radical exercise of intelligence that exists: to look into the abyss of what we have become and refuse to accept that this be the final destiny.
Because it is not.
We are the same species that invented language, music, mathematics, poetry, philosophy, medicine, art—each of these achievements was born of a mind that refused to accept the world as it was and dared to think it as it could be. Each was born of someone who bore the discomfort of not knowing, who inhabited the creative void for as long as necessary, who resisted the shortcut and chose the long path—because they knew, intuitively, that the long path was the only one that led somewhere worth going.
This capacity has not disappeared. It is asleep. It is buried under layers of stimulus, of noise, of manufactured urgency, of answers that arrive before the question. It is anesthetized by the comfort of not needing to think. And it is being actively weakened by an ecosystem that rewards speed over depth, consensus over truth, narrative over evidence.
To awaken it is not to turn off the screen. Turning off the screen is easy—and temporary. Anyone can turn off a screen and continue thinking exactly as the screen trained them to think. Likewise, any organization can adopt the most advanced technology on the planet and continue operating with the same reflective poverty as always—now merely faster and with prettier dashboards.
To awaken is something else. To awaken is to refuse the inertia of mimetic thought—that which imitates because imitating is safe. To awaken is to bear the social cost of asking “why?” when everyone around has already decided the “how.” To awaken is to look at artificial intelligence not as substitute nor as threat, but as mirror: a machine that processes, calculates, and optimizes with perfection, revealing, by contrast, that which no machine can do—to doubt itself, to integrate contradictions, to construct meaning from chaos, to recognize in the other a fellow being.
The question that remains—the only one that deserves to remain—is this:
What if the greatest act of resistance, today, were not to turn off the screen—but to reconnect what within us still resists being replaced?

#criticalthinking #cognitivedevelopment #humanintelligence #technologyandconsciousness #cognitiveatrophy #intellectualsovereignty #humanevolution #reconnectingthought #humanbehavior #cognitivetransformation #humanrelationships #selfknowledge #behavioraldevelopment #digitalworld #AIandhumanity #consciousleadership #organizationalculture #AIlayoffs #corporatemimetism #collectiveintelligence #AIwashing #organizationalcriticalthinking #marcellodesouza #marcellodesouzaofficial #coachingandyou

✦ If this text provoked something in you—a restlessness, a spark, a will to go deeper—I invite you to visit my blog: www.marcellodesouza.com.br. There you will find hundreds of publications on human cognitive-behavioral development, organizational development, and healthy, evolving human relationships. Each text is a door. What you do after crossing it is your decision.
