What if we’re no longer needed?

A small man gazes up at a giant, faceless blue figure, in an illustration representing humanity's growing irrelevance in the face of artificial intelligence.


There’s a question that keeps creeping in every time a new tool drops, another task gets automated, or a human role quietly vanishes under a shiny predictive model:

What happens when intelligence no longer needs to be human?

This isn’t a philosophical mind game. It’s a growing crack in the social, educational, and economic foundations we stand on. And no — there’s no feel-good answer wrapped in a “reinvent yourself” narrative or some startup-y call to learn Python in 48 hours.

I ran into a piece in The Guardian this week that didn’t sugarcoat it: Can we stop AI making humans obsolete? It’s not sci-fi. It’s an overdue question dressed as analysis.

Efficiency has a body count — it’s just not always on the slides

The story we keep hearing goes like this: AI is here to help you. To free your time. To “elevate” your role. But let’s be honest — what we call optimization often means cutting people loose.

Entire professions are turning into dashboards. Teams become “units.” And we’re applauding it, calling it progress.

That “assistant” you loved last year? It’s now a filter. And the tool that used to help you do your job better? It’s starting to wonder if you are the bottleneck.

So… who decides if you still matter?

Maybe the problem isn’t AI itself, but the logic it’s deployed with. We keep acting like this is about reskilling. About adaptability. But what if it’s not about how willing or talented you are?

What if the real question is: whose interests decide whether you stay in the system or get quietly automated out of it?

Over the last five years, I’ve been deeply involved in AI training for decision-makers — with a strong focus on ethics and human-centered design. And here’s what I’ve noticed: everyone loves to talk about keeping the human “at the center.” But in practice? What I mostly see is layoffs.

How many people have the big tech giants (Meta, Apple, Nvidia, Google, Microsoft, Netflix, Amazon) laid off in the last three years?

Without even counting the recent market meltdown triggered by the bronzed man in question — a hit that pummeled tech stocks across the board. So you have to ask: were these truly “necessary layoffs”… or just the kind that make shareholders smile?

The truth is, the historic surge in these companies’ valuations didn’t come after the layoffs. It happened right in the middle of them.

And if the leaders of the tech world are going all in on that logic, what do you think your average Mr. Burns with a spreadsheet is going to do?

Five questions we should be asking before it’s too late

  1. Are we really okay defining human worth by productivity alone?
  2. Who’s making the big tech decisions — and whose interests are they protecting?
  3. Is this non-stop “upgrade or die” mindset actually helpful — or just profitable for someone else?
  4. Do we have real communities… or just followers?
  5. What parts of humanity are still worth protecting — even if they’re not profitable?

The moment humans stopped being the OS

I recently worked with a public institution trying to “optimize operations with AI.” Which, of course, translated into downsizing.

Except… when the people left, so did all the nuance, judgment, and invisible glue that kept the machine running.

Three months later, they re-hired some of them. Not out of nostalgia. Out of necessity. Because things stopped working.

Not everything that looks “redundant” is actually dispensable.

Maybe it’s not about what’s coming — but what we’re letting happen

This isn’t a call to smash the machines or cling to the past. It’s an invitation to ask better questions — before someone else writes you out of the equation.

Do we want AI to do what’s possible… or also what’s desirable?

And if we’re not part of the design, maybe, just maybe, we’re already part of the discard pile.