March 29, 2026

The future of work, and the future of us

Anthropic just interviewed 81k people from 159 countries on AI. The findings changed how I think about what we're becoming.

Robert Ta

CEO & Co-Founder, Clarity

I just read Anthropic's new study: 81,000 people across 159 countries, interviewed about how AI has changed their lives.

The largest qualitative study on AI ever conducted.

And the findings got me thinking about the future of work.

*  *  *

Are We Becoming Dogs?

Researchers at the Wolf Science Center in Vienna gave wolves and dogs the same puzzle:

A sealed container with food inside.

Figure out how to open it.

Wolves solved it 80% of the time. Dogs: 5%.

Here’s what made the study famous:

The dogs stopped trying.

When the puzzle got hard, dogs turned around and looked at the nearest human.

Waiting for help.

Waiting for someone to solve it for them.

Sound familiar?

Fifteen thousand years of domestication bred that into them.

Wolves survive by figuring things out.

Dogs survive by being useful to someone who figures things out for them.

The trade worked. Dogs got food, shelter, belly rubs.

They just lost the ability to solve particular problems on their own.

I think about this study every time someone asks me about the future of work.

Anthropic’s research centered on a single question: “How has AI actually changed your life?”

Not theoretically. Not what LinkedIn influencers predict. What 81,000 people experienced.

Three findings stood out. Together, they make the case that we should all be asking one fundamental question:

Are we becoming dogs?
*  *  *

The same capability creates the benefit AND the harm.

Many said AI made them better learners.

Many also felt their cognitive abilities declining.

Those groups overlap significantly. Some of the same people who learned faster also worried they were getting dumber.

Dogs didn’t get dumber overnight.

Generation by generation, they traded independent problem-solving for the comfort of looking back at a human.

Each generation was slightly more dependent, slightly less capable on its own.

Each generation also had a better life.

Both things were true.

I see this in my own work.

I build with Claude Code every day. I ship features in hours that would have taken weeks.

I also haven’t written code in eight months. Both things are true at the same time.

It’s slightly worrying.

The Anthropic study identified five of these “tensions” where benefit and harm coexist inside the same person:

Learning vs. cognitive atrophy.

Better decisions vs. unreliable outputs.

Emotional support vs. dependency.

Time saved vs. illusory productivity.

Economic empowerment vs. displacement.

Hope and alarm share a room inside every person using AI right now. We are changing, in real time.

*  *  *

Trading problems for problems

Self-employed: 47% report economic benefits from AI.

Employees: 14% report economic benefits from AI.

That’s 3.3x the benefit. Not a rounding error. A chasm.

Why?

Some extrapolated thinking:

Self-employed people choose their own tools, workflows, and priorities.

Employees wait for IT approval, company-wide rollouts, and manager buy-in.

By the time a Fortune 500 company finishes its “AI transformation strategy” slide decks, a freelancer has already rebuilt their entire practice.

As a side project, I recently built my own custom, self-improving automated GTM engine in about a week and a half.

I could definitely do CRO (conversion rate optimization) myself. But I’d rather not. I’d rather focus on higher-leverage activities. It’s also terribly boring and energy-draining for me.

But optimizing our marketing funnel is still critical.

Over time, could I even do it anymore?

Or would I just look toward an AI for it and become hyper-dependent?

Perhaps that is the same as putting down one survival task (wolf) to do other tasks (dog).

*  *  *

The #1 aspiration is “professional excellence,” not “replace my job.”

By far the most common aspiration was professional excellence.

Not “do my job for me.”

Not “let me work less.”

They wanted to be better at the work they already cared about.

The second most common aspiration?

Personal transformation at 13.7%. Growth. Emotional wellbeing. Becoming a better version of themselves.

The single largest study on AI aspirations found that people want alignment.

They want to become more of who they already are, not outsource who they need to be.

I’ve been building Clarity around this thesis the entire time. 81,000 strangers just validated it without knowing my company exists.

Amazing.

*  *  *

So who wins in the AI economy? The wolves or the dogs?

Every tension the study surfaced comes down to the same question: Does this person know what they want AI to do, or are they letting AI decide for them?

Are they solving the puzzle, or turning around to look at the nearest AI model for the answer?

Learning vs. cognitive atrophy: If you know what you want to learn, AI accelerates you.

If you’re just asking it to “help,” you atrophy.

You literally become dumber. You lose your critical reasoning skills.

So how do we maintain our agency?

What’s the difference in practice?

*  *  *

Managing Agents as a skill

I believe that the more you reinforce the habit of writing clear instructions to your AI, the more you retain your actual critical thinking.

Don’t just chuck half-assed asks at the AI. You wouldn’t (or shouldn’t) do that to an employee. You’d give clear instructions with a target outcome.

Managing agents requires knowing what good output looks like.

Knowing what good output looks like requires knowing what you’re building and why.

Anyone can vibe code these days.

But creating something of quality still requires what it has always required:

Quality thinking and quality execution.

Which requires alignment.

The future of work isn’t human vs. AI.

It’s aligned humans with AI vs. everyone else.

We already see it. People are becoming dumber, losing their agency.

Dogs and wolves.

The question isn’t whether AI will change work. It already has.

*  *  *

Continue reading

Get the full newsletter, free.

Join founders and builders who read Self Aligned every week.
