
CHRONICLE OF AN ANNOUNCED COGNITIVE DEATH – WHEN WE OUTSOURCE THE FUTURE AND CALL IT PROGRESS

There is a type of death that does not announce its arrival with sirens or diagnoses. It settles in silently, disguised as convenience, celebrated as evolution, naturalized as inevitable.
When you finally realize you have died, you no longer possess the cognitive instruments to comprehend what you have lost — because those very instruments have been outsourced.
I am not speaking of biological death. I speak of death as a thinking subject. Of the gradual transformation from one who questions into one who merely executes. From one who synthesizes into one who consumes ready-made syntheses. From one who creates possibilities into one who processes probabilities calculated by others.
This is the chronicle of that death. Announced long ago by those who were paying attention. Ignored by the majority who mistook automation for liberation.

I. The Precise Moment of Capture
There is an exact moment when the relationship with technology inverts polarity.
You believe you are using a tool, when in reality you are already being used by it — not as a metaphor, but as operational fuel.
It happens like this: you delegate your first decision to an algorithm. Small, apparently harmless. Which restaurant to choose. Which route to take. Which series to watch. The system processes your patterns, your declared and implicit preferences, and offers an optimized response.
You accept. It works. It saves time and mental energy.
Your brain registers: “this is efficient.”
Neurons that would have been activated in the decision-making process remain dormant. Circuits of weighing, comparison, synthesis — unused. And the brain, pragmatic in its energy economy, begins a silent recalibration: if a certain function is no longer necessary, why maintain it at a high metabolic cost?
The next time a similar decision arises, the impulse to delegate arrives faster. Resistance diminishes.
Until one day you no longer even consider deciding alone — outsourcing has become neural default.
But here is what no one explained to you about this process: every time you delegate, you do not merely save energy. You feed a system that is learning to model you.
Your data — choices, hesitations, patterns — become material that trains the algorithm to predict you better than you predict yourself.
The machine does not improve on its own. It improves through you. Using your activity as fuel for its own technical individuation, while you, paradoxically, become less individuated — more statistical, more predictable, more generic.

II. The Silent Ontological Inversion
There have always been tools that extended the human capacity to transform the world.
The chipped stone amplified the hand’s strength. The wheel multiplied the reach of movement. The plow expanded what could be cultivated. The loom accelerated the making of fabric.
Then came tools that extended the capacity to process and transmit thought.
Writing allowed fixing the word in time, freeing human memory from the task of preserving everything orally. The printing press multiplied knowledge, democratizing access to what was once the privilege of a few scribes. The telegraph compressed distance, turning communication that took weeks into instantaneous messages.
Each technological leap altered the rhythm of cognitive life.
The newspaper created the urgency of the everyday — news that ages in hours. Radio brought simultaneity: millions hearing the same voice at the same time, constructing a shared present. Television added moving images, colonizing not only the ear but also the gaze. The personal computer placed information processing in the hands of individuals, not just institutions.
The internet connected everything to everything, creating a network where any node accesses any information instantly. The smartphone made this connection permanent — you are never offline, never inaccessible, never truly alone with your own thought.
But all these technologies — from writing to the smartphone — maintained a fundamental hierarchy: the human used the tool to realize a previously formed intention.
You decided what to write, which book to read, which news to seek, which program to watch, which information to process. The tool extended your capacity, accelerated your reach, amplified your power — but you remained the subject who formulates intention, who chooses direction, who synthesizes results.
What changes with algorithmic systems is not merely another leap in speed or scale.
It is a fundamental ontological inversion.
For the first time in the history of human tools, the instrument does not merely extend capacity — it replaces the cognitive process.
When you ask AI to write an email, you are not amplifying your communication skill as writing once did. You are eliminating the work of articulating thought in language.
When you accept an automatically generated summary, you are not optimizing reading as a table of contents once did. You are abdicating active synthesis that builds understanding.
The telegraph accelerated the transmission of a message you formulated. AI formulates the message you validate.
The newspaper brought information you synthesized. AI synthesizes information you consume.
The computer processed data according to logic you programmed. AI programs logic that processes you.
The difference is subtle but abyssal: all previous technologies extended the subject. This one replaces the subject.
And replacement without deep understanding is an illusion of control.
You think you are supervising the AI’s output. In reality, you are merely agreeing with what you no longer have the genuine capacity to evaluate — because the circuits that would allow critical judgment are atrophying through the very use of the tool.
There is an instructive historical precedent.
When calculators became ubiquitous, an entire generation lost the capacity for complex mental arithmetic. Society accepted the trade-off: we gave up a specific skill in exchange for speed and precision. Concerning? Perhaps. But arithmetic is a replaceable technical skill.
Now we are making the same trade-off with capacities that define us as autonomous subjects: critical thinking. Conceptual synthesis. Articulation of complex ideas. Formulation of original questions.
These are not peripheral technical skills.
They are precisely what constitutes us as thinkers rather than programmed executors.
And we are outsourcing all of them — not by coercion, but by convenience.
Because, unlike the printing press that democratized knowledge or the computer that decentralized processing, AI offers something irresistible: the promise of thinking without the effort of thinking.
And who refuses this in a culture that venerates efficiency as the supreme value?

III. Parasitized Temporality
There is a type of time that is not measured in hours or minutes, but in density of presence.
Time in which you are genuinely inhabiting the moment — not projected into the next commitment, not fragmented across seven screens, not compressed by the artificial urgency of immediate response.
This time scarcely exists anymore.
It has been replaced by a peculiar temporality: a present colonized by gratification loops that preempt the future. Every notification you check, every summary you consume, every decision you outsource — all operating in a dopamine economy that privileges immediate reward over slow construction.
Your ventral striatum, the brain structure that processes reward, learns quickly: rapid synthesis offers more predictable gratification than the prolonged effort of deep reading. An algorithmic response delivers more immediate satisfaction than the laborious process of forming your own thought.
Gradually, imperceptibly, neural architecture recalibrates.
The hippocampus — fundamental for memory consolidation and complex synthesis — becomes less activated. The dorsolateral prefrontal cortex — crucial for executive control and abstract thinking — loses tone.
We are not merely changing habits. We are remodeling the biological substrate that enables specific types of thought.
And the most perverse aspect: this reconfiguration is experienced as liberation.
We feel relief in outsourcing. Pleasure in saving effort. Satisfaction in “optimizing” cognition.
We do not notice that we are trading the capacity to inhabit complexity for the skill to process superficiality at high speed.
The time we “save” does not become time to think better.
It becomes time to consume more stimuli that reinforce the same dopamine economy that captured us.
It is a self-feeding loop. And we call it productivity.

IV. The Illusion of Personalization
The more “personalized” the algorithm becomes, the more generic you become.
It seems contradictory, but it is the precise mechanics of how systems work.
Algorithmic personalization does not singularize you. It segments you into increasingly refined statistical clusters. You are not treated as a unique individual with an irrepeatable trajectory — you are classified as a member of a category: “users who like X also like Y,” “people with your profile tend to Z.”
The “personalization” you experience is optimization of predictability.
The system learns your patterns to offer exactly what you are likely to want — eliminating surprise, excluding dissonance, removing encounter with genuine otherness.
You become a prisoner of a statistical version of yourself.
And worse: you begin to recognize yourself in that version. Your real preferences — complex, contradictory, in permanent reformulation — are gradually replaced by calculated preferences the system tells you that you have.
You no longer choose music that challenges you. You listen to what the algorithm predicted you will like.
You do not read texts that question assumptions. You consume content that confirms biases.
You do not encounter people who think differently. You interact with those the system determined to be “relevant to you.”
Your experience of the world becomes narrower, more predictable, more controlled — while you believe you are more connected, more informed, more free.
It is a form of voluntary servitude that does not even recognize itself as such.
Because it was sold as convenience.
And who refuses convenience in a culture that venerates it as the supreme value?

V. The Collapse of Intersubjectivity
Human relations have always been a complex negotiation between irreducible subjectivities.
You never directly access the other’s consciousness — you only interpret signs, infer intentions, construct an always imperfect model of what the other thinks and feels.
This fundamental incompleteness was not a flaw.
It was precisely what made the relation alive — space for doubt, productive misunderstanding, permanent need for adjustment, inherent vulnerability in the encounter with otherness.
Algorithmic systems promise to eliminate this incompleteness.
Matchmaking that “finds the perfect person for you.” Sentiment analysis that “reveals” what the other really meant. Behavioral prediction that anticipates reactions.
You think this facilitates relation. In reality, you are trading the genuine encounter with the other for interaction with an optimized model the system has constructed.
It is no longer the person in their irreducible opacity that you encounter.
It is a statistical avatar — a predictable, controlled version stripped of everything that makes human encounter simultaneously risky and transformative.
And when all our interactions are mediated by systems that filter, order, predict — we lose a fundamental capacity: to sustain presence with the other without the need for total control.
Empathy, in the deep sense, is not agreement. It is the capacity to inhabit a radically different perspective without needing to reduce it to the familiar.
But this requires tolerance for ambiguity, acceptance of partial incomprehension, patience with the slow process of mutual understanding.
All of this is eliminated when the algorithm mediates.
We are left with a simulacrum of relation — efficient, predictable, safe.
And mortally empty.

VI. The Elite That Does Not Perceive Its Own Obsolescence
There is a comforting illusion among those who program, design systems, and master technical infrastructure: they believe they are on the safe side of automation.
Executors will be replaced, yes. But thinkers? Creators? Strategists? These will remain essential.
Except they will not.
Because the frontier of automation is not fixed. It advances precisely into domains that seemed intrinsically human.
Generative AI already automates programming. Design systems already replace information architects. Strategic analysis algorithms already compete with elite consultants.
The difference between “cognitive elite” and “executor mass” is not ontological.
It is temporal: the elite is simply those who have not yet been automated.
And the pace of that advance keeps accelerating.
So when you and I, sophisticated executives who use AI to “increase productivity,” outsource data analysis, delegate strategic writing, and hand off report synthesis — we are not becoming more efficient.
We are training our own replacements.
We are providing data that teaches the system to do exactly what we do — only faster, cheaper, without vacations, without ego, without demand for recognition.
The only real protection is not specific technical knowledge — that is precisely the easiest to code.
It is the capacity to think what has not yet been thought. To connect domains no one has connected. To question premises everyone accepts. To inhabit contradiction without prematurely resolving it.
But these capacities do not develop through efficiency.
They develop through slow, inefficient, painful processes that we are systematically eliminating.

VII. The Discomfort That Built Humanity
Genuine thinking hurts.
It is not a metaphor. It is a precise phenomenological description of how it feels to encounter the limit of one’s own understanding, to sustain contradiction without easy resolution, to inhabit doubt without haste to eliminate it.
This discomfort was not an accident of the human condition.
It was precisely what made us capable of genuine evolution — not passive adaptation, but active transformation through confronting what we do not understand.
Every great conceptual rupture in human history came from someone who sustained prolonged cognitive discomfort. Who resisted the temptation of quick answers. Who inhabited the void between the paradigm that no longer works and the paradigm that does not yet exist.
Einstein sustaining the contradiction between Newtonian mechanics and evidence that did not fit. Darwin living for years with observations that challenged creationism without yet having a complete alternative theory. Freud tolerating the strangeness of the unconscious when it would have been easier to ignore it.
All of them did something contemporary culture considers intolerable: they wasted time. They thought slowly. They erred a lot. They doubted constantly.
And they produced ruptures that redefined reality.
Now observe what we do when we encounter complexity: we seek summaries.
When we encounter contradiction: we ask for definitive answers.
When we encounter doubt: we outsource to those who “know.”
We eliminate discomfort. And with it, we eliminate the possibility of genuinely thinking anew.
Because the new does not come from processing better what already exists. It comes from sustaining the void where the existing no longer works and the new has not yet arrived.
And we, conditioned by an economy of immediate gratification, no longer tolerate this void.
So we fill it with algorithmic answers that are, at best, statistical extrapolation of the past.
We outsource precisely what would make us capable of an unpredictable future.

VIII. The Choice That Still Remains
This is not a diatribe against technology. It would be naive to believe we can or should “return” to a pre-digital state.
It is an alert about the choice we face in every interaction with algorithmic systems: use the tool or be used by it.
The difference is not in the tool. It is in us.
When we ask AI to summarize a text: are we saving time to think more deeply about the content? Or are we avoiding the cognitive work of active synthesis?
When we delegate a decision to an algorithm: are we freeing attention for more complex issues? Or are we atrophying decision-making capacity through disuse?
When we accept a personalized recommendation: are we discovering genuinely new perspectives? Or are we being led by a statistical version of ourselves?
The honest answer to these questions defines the trajectory.
Because none of us has a neutral path.
Every choice of ours reconditions neural circuits. Every outsourcing weakens cognitive muscle. Every accepted convenience reduces tolerance for necessary discomfort.
Gradually, imperceptibly, we become either more capable — people who use algorithmic speed to think more deeply — or more dependent — people who can no longer think without external prosthesis.
And the division between these two groups will not be democratic.
It will be abyssal.
On one side: a minority that maintains the capacity for synthesis, questioning, genuine creation. Who uses AI tactically but preserves cognitive autonomy.
On the other: a majority that outsourced deep thinking. Who executes efficiently but does not question premises. Who consumes syntheses but does not synthesize.
The first is irreplaceable. The second is redundant.
And the transition between the two is easier than we imagine — just keep choosing convenience over discomfort.

IX. What It Means to Be Cognitively Alive
In the end, the question is not about AI, algorithms, automation.
It is about what it means to be alive as a thinking subject.
Cognitive life is not executing processes efficiently. It is the capacity to genuinely surprise oneself. To encounter something that destabilizes certainties. To formulate a question no one has asked. To inhabit not-knowing without urgency to resolve it.
It is the power to look at complexity and not feel the immediate compulsion to reduce it.
It is sustaining contradiction without needing to eliminate it.
It is erring productively instead of erring mechanically.
All of this requires time that is not productive in the economic sense. It requires discomfort that does not generate immediate gratification. It requires vulnerability that is not resolved with an algorithmic answer.
Do you still do this?
Or have you outsourced so much that you have forgotten how to do it?
Because this capacity — to think slowly, err frequently, doubt deeply — does not stay stored waiting for you to decide to use it again.
It atrophies.
And when you finally need it — when the algorithm has no answer, when the situation is genuinely new, when the only way out is to think the unthinkable — you may discover that you can no longer do it.
Not because the capacity disappeared biologically.
But because you spent so much time avoiding discomfort that you lost tolerance for it.
And without tolerance for cognitive discomfort, there is no deep thinking.
There is only efficient execution of patterns programmed by others.

Epilogue: The Death We Chose
This is a chronicle of an announced death.
Not by inevitable destiny. But by everyday choices we make without perceiving their cumulative weight.
Every time we outsource thinking, we die a little. Not dramatically. Not visibly. But really, and irreversibly.
And one day, we look in the mirror and do not recognize who is there.
Because who we were — someone capable of inhabiting complexity, sustaining doubt, creating the genuinely new — has died.
Replaced by an efficient executor of optimized patterns.
The question is not whether this will happen to humanity in general.
It is whether it will happen to you. To me. To us.
And the answer is being written now. In every choice we make. In every discomfort we avoid. In every thought we outsource.
Cognitive death is not a future event. It is a present process.
That we can interrupt.
But here is what no one wants to admit: perhaps it is already too late for some of us.
Perhaps you have already lost the capacity you believe you still have.
Perhaps I have too.
The question is not whether you can still think.
It is: can you still not know?

#artificialintelligence #obsolescence #thinking #autonomy #future #discomfort #transformation #consciousness #humanity #choice #marcellodesouza #marcellodesouzaoficial #coachingevoce

Want to go deeper? Access my blog and discover hundreds of publications on human and organizational cognitive-behavioral development and relations that truly transform. 👉 www.marcellodesouza.com.br
