Most AI Agents Are Busywork in Disguise. Here's How to Tell the Difference.
April 22, 2026
Most people building AI agents are solving the wrong problem. They're automating tasks that didn't need automation, connecting tools that didn't need connecting, and calling it productivity. The result is a complicated system that breaks at 2am and takes longer to manage than the manual process it replaced.
I've been watching this pattern for a while, and a recent breakdown in Lenny's Newsletter gave me the words I'd been looking for. Not all AI agents are created equal. The gap between a useful agent and a busy one is bigger than most people realize.
Here's how I think about it when I'm building for my own business or for clients.
---
Useful Agents Do One Thing Without You Watching
The best AI agent I've ever built does exactly one job. When a new lead books a call through TidyCal, it pulls their LinkedIn profile, summarizes their business context, and drops a briefing note into Notion before I even open my laptop.
That's it. One trigger, one output, one clear win.
Contrast that with agents people demo on YouTube that scrape data, send emails, post to social, update a CRM, and send a Slack notification, all in one chain. It looks impressive. It also breaks constantly, and when it does, you have no idea which step failed.
The takeaway: Before you build anything, write down the one output the agent produces. If you can't name it in a single sentence, you don't have a clear enough problem yet.
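To make "one trigger, one output" concrete, here's a minimal Python sketch of that briefing agent's shape. Everything in it is illustrative: the booking payload fields, and the `fetch_profile`, `summarize`, and `publish` helpers are stand-ins you'd wire to the real TidyCal webhook, a LinkedIn lookup, and the Notion API — not those services' actual interfaces.

```python
# One-job agent sketch: one trigger (a booking event), one output (a
# briefing note). The payload shape and helper functions are invented
# stand-ins, not the real TidyCal/LinkedIn/Notion APIs.

def build_briefing(booking: dict, fetch_profile, summarize) -> dict:
    """Turn a single booking event into a single briefing note."""
    profile = fetch_profile(booking["linkedin_url"])  # stand-in lookup
    return {
        "lead": booking["name"],
        "call_time": booking["start_time"],
        "summary": summarize(profile),  # stand-in for an LLM call
    }

def handle_booking(booking: dict, fetch_profile, summarize, publish) -> None:
    """The whole agent: one trigger in, one note out, one place it lands."""
    note = build_briefing(booking, fetch_profile, summarize)
    publish(note)  # e.g. create one Notion page — stubbed here
```

The point of the shape is that you can name the output in one sentence: "a briefing note per booking." If a second output creeps in, that's a second agent.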
---
The Difference Between Automation and Agency
This is the distinction most people miss. Automation follows a fixed script. An agent makes a decision.
If you build an n8n workflow that takes a form submission and sends a welcome email, that's automation. Reliable, useful, not an agent.
An agent looks at the form submission, decides which welcome sequence fits this particular person based on what they said, and routes them accordingly. It's reading context and choosing a path.
Most of what gets called "AI agents" right now is just automation with a chat interface bolted on. That's not a bad thing, but you should know what you're actually building so you can set realistic expectations for yourself and your clients.
The takeaway: Ask yourself whether your "agent" is following rules or evaluating context. If it's following rules, it's a workflow. That's fine, just name it correctly so you know when to upgrade it.
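The rule-vs-context distinction fits in a few lines of Python. This is a sketch, not a prescription: the sequence names are made up, and `decide` is a placeholder for whatever evaluates context — in practice an LLM call (say, the Claude API), here any function that maps the submission text to a sequence name.

```python
# Workflow vs agent, side by side. The sequence names and the `decide`
# parameter are illustrative; `decide` stands in for an LLM call.

def workflow_route(submission: dict) -> str:
    # Automation: a fixed script. Same path every time, no context read.
    return "default_welcome"

def agent_route(submission: dict, decide) -> str:
    # Agent: reads what the person said and chooses a path.
    allowed = {"default_welcome", "agency_welcome", "founder_welcome"}
    choice = decide(submission["message"])
    # Guardrail: if the model invents a route, fall back to the default.
    return choice if choice in allowed else "default_welcome"
```

Notice the guardrail line: the moment something makes a decision, you also need a plan for when the decision is nonsense. Workflows don't have that problem, which is exactly why naming what you're building matters.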
---
Where Agents Actually Break Down for Small Business Owners
The failure mode I see most often isn't technical. It's scope creep at the design stage.
Someone starts by wanting an agent to handle inbound enquiries. Then they add, "Oh, it should also check if we've spoken to this person before." Then, "And maybe pull their invoice history." Then, "Can it also draft a follow-up email?"
By the time they're done, they've described something that needs a dev team to maintain. But they don't have a dev team. They have n8n, a Claude API key, and twenty minutes before school pickup.
The agents that actually stay running in small businesses are the ones that do something narrow and concrete. One source of input. One decision or action. One place the output lands.
If you're using n8n, that means one trigger node, minimal branching, and a clear endpoint. If you're using a tool like Relevance AI or Make, same principle. Small surface area beats impressive architecture every time.
The takeaway: If your agent diagram has more than five nodes, split it into two separate agents. You'll spend less time debugging and more time actually using the output.
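If it helps to make the scoping rules checkable, here's the "one trigger, one output, five nodes" test as a tiny design-time lint. The spec format is something I've invented for illustration — it's a back-of-napkin check, not an n8n feature.

```python
# Scope check for an agent design, before any build time is spent.
# The spec shape (triggers/nodes/outputs lists) is invented for this sketch.

def check_scope(spec: dict) -> list[str]:
    """Return warnings for a design that has drifted past 'narrow'."""
    warnings = []
    if len(spec.get("triggers", [])) != 1:
        warnings.append("use exactly one trigger")
    if len(spec.get("nodes", [])) > 5:
        warnings.append("more than five nodes: split into two agents")
    if len(spec.get("outputs", [])) != 1:
        warnings.append("pick one place the output lands")
    return warnings
```

Run your diagram through this before you open the builder; an empty list means the design is small enough to keep running without a dev team.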
---
How to Evaluate Any Agent Before You Build It
I run every agent idea through three questions before I touch a single node in n8n.
First, what breaks if this agent does nothing? If the answer is "nothing urgent," it's a nice-to-have. Build it later.
Second, how often does this task actually happen? An agent that runs twice a month isn't worth the build time for most small business owners. The sweet spot is something that happens at least weekly.
Third, can I verify the output in under ten seconds? If checking whether the agent worked correctly takes longer than doing the task manually, you've created overhead, not efficiency.
These aren't complicated questions. But most people skip them because they're excited about the build. I get it. The build is fun. The criteria are boring. But boring criteria save you from building things that waste your time.
The takeaway: Run your next agent idea through those three questions before you open n8n or ChatGPT. If it passes all three, build it. If it fails any one of them, either sharpen the idea or shelve it.
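The three questions collapse into a one-function filter. The thresholds mirror the article (at least weekly, verifiable in ten seconds); the parameter names are mine, and the honest work is still in answering the questions truthfully, not in the code.

```python
# The three build/no-build questions as a checklist. Thresholds come from
# the article; parameter names are invented for this sketch.

def should_build(breaks_something_urgent: bool,
                 runs_per_week: float,
                 verify_seconds: float) -> bool:
    return (
        breaks_something_urgent   # 1. something urgent breaks if it does nothing
        and runs_per_week >= 1    # 2. the task happens at least weekly
        and verify_seconds <= 10  # 3. output checkable in under ten seconds
    )
```

Fail any one condition and the answer is no — sharpen the idea or shelve it, exactly as the takeaway says.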
---
The most useful thing you can do today is look at the last AI agent you built or planned to build, and ask whether it genuinely makes a decision or just follows a script. That answer tells you exactly what to fix first.