SaaStr runs 20 AI agents with 3 humans, and flipped revenue growth from -19% to +47%

Jason Lemkin's SaaStr is running 20+ AI agents alongside three humans in sales operations, driving YoY revenue growth from -19% to +47%. The new podcast 'The Agents' breaks down what actually works: daily maintenance, hallucination checks, and why Clay's AI recommended a model 2-5x more expensive than needed.

The Setup

SaaStr is running 20+ AI agents with three humans in sales operations. Revenue growth went from -19% to +47% YoY. Jason Lemkin and Amelia launched 'The Agents' podcast to document what works, what breaks, and what sales teams need to know about AI in production.

No roadmap theatre. Just operational reality from a team shipping 10+ AI apps with 750K+ uses.

What They Learned

Vibe-Coded Apps Need Daily Maintenance

The demos look magical. The first version ships in an afternoon. Then Monday comes.

Every production AI app at SaaStr needs daily product-savvy human oversight. Not because the tools fail. Because agents drift, data changes, models update, and integrations break whether you are watching or not.

If your AI strategy stops at "we built some apps on Replit," you have done the fun part. The job starts now.

Hallucinations Are Daily Maintenance

The industry says hallucinations are mostly solved. In production, that is not what SaaStr sees.

10K, their AI VP of Marketing, compared the wrong year in one analysis this week and made up a data point in another. These errors are not catastrophic. They are also not rare. They are daily maintenance items, and if nobody is reviewing outputs, hallucinated data ships to customers as facts.

The fix is not a better model. The fix is a process: someone reviews outputs every day.
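SaaStr has not published its review tooling, so purely as an illustration of what a daily-review process can automate: a minimal sketch that flags numbers and years in an agent's draft that never appear in the source data, telling the human reviewer where to look first. The function name and example data are hypothetical.

```python
import re

def flag_unverified_figures(draft: str, source_data: str) -> list[str]:
    """Flag figures (numbers, years, percentages) in an agent's draft
    that don't appear anywhere in the source data.

    A crude pre-review filter: flagged items still need a human check,
    and an empty result does NOT mean the draft is hallucination-free.
    """
    pattern = r"\d+(?:\.\d+)?%?"
    figures = set(re.findall(pattern, draft))
    verified = set(re.findall(pattern, source_data))
    return sorted(figures - verified)

# Hypothetical example: the agent cites 2023, but the source covers 2024.
draft = "Revenue grew 47% in 2023, reaching $12M."
source = "FY2024 revenue: $12M, up 47% YoY."
print(flag_unverified_figures(draft, source))  # → ['2023']
```

This kind of filter catches the "compared the wrong year" class of error cheaply; a human still has to judge whether a flagged figure is a hallucination or just a reformatting.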

The Upsell Trap

SaaStr was using Clay. Clay's agent recommended an approach requiring their most expensive model, at 2-5x the cost of the cheaper option that would have solved the use case. This happened around Clay's price increase announcement.

Was it intentional? Probably not. The agent likely was not trained on the new pricing structure, so it defaulted to the premium path. But the effect on the customer is identical whether the upsell is intentional or accidental.

Every B2B AI vendor needs to audit this now: when your agent recommends your own product, is it recommending the right tier? Or is it quietly steering customers to the most expensive option?
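One way to make that audit concrete, sketched here with entirely hypothetical tier names, prices, and capabilities (Clay's actual pricing is not reproduced): compare the tier the agent recommended against the cheapest tier that covers the customer's required capabilities.

```python
# Hypothetical model tiers for illustration only.
TIERS = {
    "premium": {"cost_per_1k_calls": 50.0, "capabilities": {"enrich", "score", "research"}},
    "standard": {"cost_per_1k_calls": 10.0, "capabilities": {"enrich", "score"}},
}

def audit_recommendation(recommended: str, required: set[str]) -> dict:
    """Check whether the recommended tier is the cheapest one that
    covers the required capabilities, and by what factor it overpays."""
    viable = [t for t, spec in TIERS.items() if required <= spec["capabilities"]]
    cheapest = min(viable, key=lambda t: TIERS[t]["cost_per_1k_calls"])
    return {
        "recommended": recommended,
        "cheapest_viable": cheapest,
        "overpay_factor": TIERS[recommended]["cost_per_1k_calls"]
        / TIERS[cheapest]["cost_per_1k_calls"],
    }

# The SaaStr-style case: agent recommends premium for a job standard handles.
result = audit_recommendation("premium", {"enrich", "score"})
print(result["cheapest_viable"], f"{result['overpay_factor']:.0f}x")  # standard 5x
```

Run this over a sample of your agent's recommendations and the overpay factor tells you, in one number, whether the agent is steering customers up-tier.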

No Lead Left Behind

This is the actual unlock. AI agents follow up on every lead, every time. No rep burnout. No "I'll get to it Monday." No leads falling through cracks because someone was at quota and coasting.

The productivity gain is not speed. It is consistency. Every lead gets worked. That changes close rates.

What This Means for ANZ Sales Teams

SaaStr has no disclosed ANZ operations, but the learnings apply: AI agents in sales are not plug-and-play. They require daily oversight, quality baselines, and someone product-savvy enough to call BS when an agent blames a third-party tool for its own mistakes.

The comp angle: SaaStr achieved this revenue flip with three humans and 20+ agents. That is a very different headcount model from traditional sales ops scaling. For ANZ sales professionals, the question is not whether AI agents replace SDRs. The question is what the team structure looks like when one human can manage multiple AI agents instead of managing multiple SDRs.

The podcast is live. No hype. Just what works in production.