These three pieces were written in the same week, though not planned as a series.

Each starts from the same observation: that AI changes the conditions around the work that matters most — not by replacing it, but by making it easier to skip.

They are meant to be read together.

On Judgment

I do not write this against artificial intelligence. Nor do I write it to protect rituals, tools, or professional territory.

I write it to defend something more fundamental: the conditions under which good judgment remains possible.

[Photo: lone fisherman silhouetted against a beach sunset. Photo by Juan Carlos Aragonés]

Design has never been merely the production of outputs. It has never been only screens, flows, components, copy, or code. Design has always involved something slower and more demanding: understanding context, asking better questions, making reasons visible, exposing trade-offs, and accepting responsibility for consequences.

Now that generating artifacts is becoming easier, faster, and cheaper, the value of that work does not disappear. But it does move.

Execution alone matters less when execution can be simulated so easily. What matters more is the ability to determine what should exist, why it should exist, who it serves, and what risks come with bringing it into the world.

But this is not only a question about design. It is a broader question about decision-making itself.

AI lowers the cost of producing answers. It also lowers the friction that once forced people to test those answers more carefully. And when friction disappears, something essential can disappear with it: the pauses, tensions, disagreements, and challenges through which better thinking is often formed.

One of AI's quietest risks is not only automation, but the weakening of dissent.

When the same person can frame the problem, generate the response, refine the argument, and reinforce it through AI in one seamless motion, weak reasoning can begin to look complete. Fluency can begin to resemble understanding. Coherence can begin to stand in for rigor. And the absence of challenge can begin to feel like clarity.

This is not a failure of the tool alone. It is a failure in how easily we may choose to trust what is polished, fast, and persuasive.

The threat is not that a machine can produce screens, copy, video, or code. The threat is accepting that this is enough.

Because when we confuse speed with thought, polish with value, or fluency with judgment, we do not become more capable. We become less accountable for what we make, what we recommend, and what we allow others to trust.

That is why the question is not whether AI should be used. It already is, and it will be.

The real question is what we allow it to weaken.

If it weakens context, we will make shallower decisions. If it weakens dissent, we will test ideas less seriously. If it weakens responsibility, we will become more comfortable with consequences we have not fully examined. If it weakens judgment, then no increase in output will compensate for what is lost.

So this is not a plea for nostalgia, nor a defense of craft for its own sake.

It is a defense of the human work that remains indispensable when making becomes easy: the work of questioning, discerning, challenging, explaining, and deciding with care.

A responsible use of AI should not erase that work. It should demand more of it.

Because now, precisely when producing has become easier, the discipline of thinking well matters more than ever.

On Collaboration

AI is expanding individual autonomy at work in ways that would have seemed improbable not long ago.

People can now move across boundaries of role, discipline, and execution with far less friction than before. They can explore, decide, make, and refine on their own with a speed and range that once required much more coordination.

In many ways, this is extraordinary.

[Photo: coastal sign submerged in still water at dusk. Photo by Juan Carlos Aragonés]

It gives people more autonomy. It makes experimentation easier. It allows ideas to travel further before they depend on others. It opens possibilities once limited by specialization, access, or time.

This is not a problem to resist. It is a transformation to understand.

Because as friction disappears, something else can disappear with it.

Not only delay, but consultation. Not only dependency, but exchange. Not only coordination, but the conversations through which work is challenged, aligned, and made stronger.

The question is not whether roles should evolve. They should.

The question is what happens when expanded individual capability begins to push aside the human behaviors collaboration depends on: asking, aligning, informing, contrasting, and involving others before decisions take shape.

What AI makes easier is not only execution. It is unilateral action.

A person can now frame a problem, generate a direction, refine a rationale, and produce a valid output without ever pausing at the edge of another person's perspective. The result may look complete. It may even look thoughtful. But coherence is not the same as shared understanding.

That distinction matters.

Collaboration was never valuable only because work had to be divided. It was valuable because other people bring other contexts, judgments, histories, and perspectives. They do not simply add labor. They add difference. And very often, that difference is what stops weak decisions from appearing sound too early.

AI does not remove the need for human collaboration. It changes the conditions around it.

Before, collaboration was often enforced by dependency. Now it will have to be chosen more deliberately.

When individuals can operate across multiple functions by default, the risk is not only role confusion. The risk is the quiet erosion of the habits that make collaboration real: communication, consultation, visibility, and mutual awareness before outputs are already formed.

This is not an argument for preserving old boundaries for their own sake. Some needed to loosen. Some deserved to disappear.

But as boundaries become more fluid, human collaboration cannot become more optional.

If AI expands what a person can do, the responsibility to collaborate well does not shrink. It grows.

Not because every action requires ceremony. Not because every step needs consensus. But because good work still depends on something no tool can automate: recognizing when another human perspective is not friction to avoid, but part of what makes the work worth trusting.

So this is not a plea for older workflows, nor a defense of friction for its own sake.

It is a defense of the human exchanges that keep work from collapsing into isolated momentum. The future of work will not be defined only by how much more a person can do alone.

It will be defined by whether, in gaining that power, we forget how to work well together.

On Review

There is a form of exhaustion growing around AI at work, and I do not think we are naming it clearly enough.

It grows not only because AI makes it easier for people to act beyond their depth, but also because it makes it easier to produce work that someone else must later absorb, review, or correct.

That cost is easy to miss.

[Photo: stone seafront railing before a stormy sea. Photo by Juan Carlos Aragonés]

We still call it review. But more and more it feels like postview: attention that arrives after the work is already moving, after direction is set, after decisions are already leaning somewhere. Not a check before the work takes shape — a repair after it does. The difference matters because postview cannot prevent what review could have caught. By the time the expert sees it, the cost is already paid. Someone else is just being asked to absorb it.

There is a difference between work that becomes stronger through collaboration and work that creates avoidable burden because too little care was present at the beginning. What concerns me is the growth of that second kind: the wear of correcting what could have been prevented, of having expertise used too late — not to guide the work, but to repair it.

That is displaced effort.

Some people get the speed and the initiative. Others inherit the burden of making the result trustworthy.

That changes the meaning of expertise.

Experts are no longer brought in to shape work early or prevent mistakes at the source. More and more, they are asked to validate or clean up work that was set in motion without enough context in the first place.

Work moves. Output accumulates. Judgment arrives late.

What follows rarely looks dramatic. It looks like repeated cleanup, recurring drag, and expertise spent too late to shape the work — only to contain it. Over time, that wear stops being incidental. It becomes part of the system.

The issue is not that AI helps more people do more things. The issue is what happens when that gain in capability weakens consultation, pushes expertise downstream, and turns review into repair.

The question is not only what the work costs to produce. It is who pays to make it right.
