
No Room to Hide

· 9 min read

AI is shrinking teams. Now every role has to earn its place.

Not because companies are failing, but because AI lets fewer people do more.

A startup that once needed 30 people to ship a product can now do it with 10. And this isn't just in startup-land. The best large companies are reorganizing around small, autonomous squads - 3 to 6 people - who take full ownership of a specific surface area. Think of a large org not as one team of 500, but as 80+ small teams, each operating with startup-like intensity and scope.

Bezos was onto something with the two-pizza rule: if a team can't be fed by two large pizzas, it's too big. That instinct is proving more right than ever, not as a management preference, but as a structural inevitability.

When teams shrink, roles blur. And that changes what it means to be valuable.

What small teams actually look like now

I see this every day at OpenBB.

Our former community manager is now the main maintainer of our open-source project, which has 62k+ GitHub stars. Our marketer uses GitHub Copilot to change copy directly on our website. Our designer codes her own wireframes. Our engineers build POCs in code rather than involving a product person. I use Claude Code to write features and get them to a reviewable state before handing them to our engineering team.

None of these tasks were in anyone's job description. They were hired for their core expertise and their mindset, and then AI expanded what they could credibly do.

This is what a high-functioning small team looks like in 2026. Not people pretending to be experts in everything, but people with the right mentality taking ownership across boundaries that used to require separate hires.

In the past, small startups had a few people who were jacks of all trades and a few specialists in their domain. Getting those specialists was the hard part, and often the main reason startups raised more capital. Now, AI bridges enough of that specialist capability that a team of curious, driven generalists can validate an idea and ship a product without needing to hire for every gap.

And in larger companies, the shift is just as real. Historically, big orgs had massive teams where accountability was spread thin across dozens of people. There was always someone else in the chain of responsibility, which meant less incentive to get shit done. Some companies caught onto this early, Amazon's small-team model being the obvious example, but now AI is accelerating the trend dramatically. And the impact, in my opinion, will be much harsher because of the overhiring that preceded it.

I mean, Calendly has 500 employees, an order of magnitude more than Cal.com. DocSend (since acquired) was 50 people; Papermark is, again, an order of magnitude smaller. Notion vs Obsidian. Airtable vs NocoDB. Qualtrics vs Formbricks. Asana vs Linear. There are countless examples like this. Yes, these aren't identical products, but the directional point holds.

Sure, that previous headcount was used to "support growth," not the other way around. But my point stands: you won't need anywhere near as much headcount to support growth.

AI supercharges the generalist

AI has fundamentally changed the cost of learning and executing.

A PM with no SQL experience can analyze database queries with any AI agent. A designer unfamiliar with frontend code can generate working React prototypes. A founder without legal training can draft and refine contracts on their own (not me though, I always use our GC).

What used to require a course, a consultant, or a specialist now takes curiosity and a good prompt (and even the "good" is becoming debatable, as agents become more agentic).

Jensen Huang put it well - AI is the easiest application in the world to use. ChatGPT grew to nearly a billion users practically overnight. And if you're not sure how to use it? You ask it how to use it.

No tool in history has ever had that property.

A single person on a small team can now credibly cover ground that would've required three hires two years ago. When every person on a 5-person squad can operate across 3-4 disciplines with AI assistance, you don't need 15 people. You need 5 versatile ones with the right tools.

This is the structural reason small teams are winning. Not just culture. Not just speed. But the raw math of what's possible when every team member is AI-augmented.

AI collapses the middle

If you're "somewhat good" at something, AI might already replace that edge.

Three years of casual SQL experience? An AI copilot can match that. Decent at writing marketing copy? So is every LLM. Know your way around a spreadsheet? So does anyone with a prompt.

The middle, where you're competent but not exceptional, is exactly where AI competes hardest.

This is the flip side of the previous point. The same AI that supercharges the generalist is what collapses the edge for people who are merely decent at specific tasks.

That means the composition of teams is going to change, fast.

What's emerging is a bifurcation:

  • Generalists - empowered by AI, fast-moving, versatile, multi-disciplinary.
  • Specialists - narrowly focused, deeply committed to a subject.

And soft skills are, potentially, becoming more important than ever.

So what jobs are safe?

I like Jensen Huang's framework: if your job is the task, AI will replace you. If your job is more than the task, it won't. A lawyer's job isn't reading documents, it's helping people. Reading documents is part of the job, not the job.

An analogy: if you're just chopping vegetables, you're replaceable. If you understand how to run a kitchen, from working with suppliers to serving customers, you're not.

The bar has risen for specialists

Being a specialist today means something deeper than it used to. It means pursuing a level of depth and nuance that the base AI models struggle to replicate.

Think of it like pursuing a PhD - not in the academic sense, but in the intensity of commitment.

A specialist in compiler optimization might spend years mastering edge-case memory management. An enterprise sales leader might have decades of intuition about how procurement cycles actually work inside Fortune 500 organizations - the politics, the timing, the unwritten rules that no model has been trained on. Someone in regulatory affairs might know a specific market's compliance landscape so intimately that they can spot a risk before it surfaces in any dataset.

In this case, the person who deeply understands a customer segment, who can navigate a specific regulatory landscape, who knows how to actually close a $500K enterprise deal - that's specialist depth too. And it's the kind of depth that becomes more valuable, not less, as AI handles everything around it.

False confidence

The danger of AI-augmented generalists is misplaced confidence, what's sometimes called "Mount Stupid": the peak of false confidence where people with limited knowledge of a topic wrongly believe they are experts. It's a state of overconfidence that often precedes realizing how much there is to learn.

When a PM uses AI to write a SQL query, they get a result that looks right. It runs. It returns data. But they might not realize the query has a subtle join issue that inflates numbers by 15%. They don't know what they don't know, and the AI didn't flag it because it doesn't understand the business context.
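To make that failure mode concrete, here's a toy sketch (my own illustration, not from any real dataset, and with different numbers than the 15% above) of how a join can silently inflate a sum. The query runs, returns data, and looks right, but each order is counted once per matching row on the other side of the join:

```python
import sqlite3

# Hypothetical tables for illustration: orders and support tickets.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE support_tickets (ticket_id INTEGER, customer_id INTEGER);
    INSERT INTO orders VALUES (1, 100, 50.0), (2, 100, 30.0), (3, 200, 20.0);
    -- Customer 100 filed two tickets, customer 200 filed one.
    INSERT INTO support_tickets VALUES (10, 100), (11, 100), (12, 200);
""")

# The "looks right" query: join orders to tickets, then sum revenue.
# Each of customer 100's orders matches two ticket rows, so its revenue
# is counted twice (join fan-out).
inflated = cur.execute("""
    SELECT SUM(o.amount)
    FROM orders o
    JOIN support_tickets t ON o.customer_id = t.customer_id
""").fetchone()[0]

# The correct total comes from summing orders alone: 50 + 30 + 20 = 100.
# The joined sum is 50*2 + 30*2 + 20 = 180, an inflated number.
actual = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(inflated, actual)  # 180.0 100.0
```

Nothing errors, and the AI that wrote the query has no way to know the ticket table has multiple rows per customer. Only someone who knows the data model catches it.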

This is exactly why certain domain specialists on small teams are a must.

I experience this firsthand weekly. I can be extremely productive with Claude Code. I can write features, refactor code, build things that work. But it usually cannot be merged as is. It always needs to be reviewed by our incredible infra engineers who have been working on our codebase for 4+ years, who know the ins and outs of the product, and who can see how a new feature will ripple through the system in ways I can't.

This is a feature of the model. The generalist gets things to 80%. The specialist takes it to 100% - and if you're curious and have the right mindset, you can work with the specialist to increase your own knowledge in that domain. Same as before, but now with AI as the speed multiplier.

The ideal small team

The ideal small team, whether inside a startup or a Fortune 500, increasingly looks like this: one or two deep specialists who own the core technical or domain complexity, surrounded by generalists who use AI to stretch across product, design, ops, marketing, and whatever else needs doing.

This is how a 5-person team ships like a 20-person team used to. Mostly because AI raises the floor. The generalists execute at a level that's good enough for most tasks, while the specialists handle what actually requires hard-won judgment, and catch the mistakes that AI-augmented confidence can miss.

I saw a post the other day with a JD for a "vibe code cleanup specialist," lmao. That's where we're heading: new roles that didn't exist six months ago.

When people talk about startups outrunning incumbents, they often attribute it to culture or speed. But increasingly, the structural advantage is simpler: small teams with AI can cover the same surface area as large teams without it, at a fraction of the cost and with faster feedback loops. And large companies that adopt this model internally, breaking into dozens of small, autonomous squads, get the best of both worlds: startup speed with enterprise resources. But they're climbing uphill against all the friction they've accumulated, and that's the window startups have to execute in.

I increasingly think we're heading towards a world with far fewer companies, but the ones that exist will be doing much, much more.

And that's somewhat scary.

But what's scarier is not being valuable enough to be on one of those teams.

Stay safe, stay curious.