Yesterday's Build Your AI Employee workshop ran for three hours.
When we started, people had jobs they were tired of doing themselves. When we finished, those jobs had agents.
Here's what actually happened.
The First 30 Minutes Are Always the Same
Everyone comes in with a vague idea of what they want to automate. "I want AI to handle my email." "I want it to follow up with leads." "I want it to write my content."
The first thing we do is make that specific.
Not "handle email" — which emails? From whom? What does a good response look like? What should it escalate versus resolve on its own? What tone? What's off-limits?
The people who've thought about this for 10 minutes and written it down build better agents than the people who've thought about it for 10 months and haven't. That's not an accident. Clarity is the variable.
The bottleneck isn't the AI. The bottleneck is knowing what you want it to do.
What People Actually Built
We had business owners across three industries. Here's what they built.
An intake agent. A small law firm built a workflow for new client inquiries. Client fills out a form, the agent classifies the legal matter (contract, employment, IP, general inquiry), extracts the key facts, drafts a preliminary response email with next steps, and flags anything that needs attorney review. The attorney now sees a classified summary and a ready-to-send email, not a raw form submission. They estimate 20 minutes of admin time eliminated per inquiry. They get about 30 a month. That's 10 hours.
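That intake flow is really a four-step pipeline: classify, extract, draft, flag. A minimal sketch of the shape, with all names hypothetical and simple keyword matching standing in for what would be model calls in the real build:

```python
# Hypothetical sketch of the intake pipeline: classify -> extract -> draft -> flag.
# In practice, classify_matter and the reply draft would be model calls;
# keyword matching here just illustrates the structure.

MATTER_KEYWORDS = {
    "contract": ["contract", "agreement", "breach"],
    "employment": ["fired", "terminat", "wage"],
    "ip": ["trademark", "patent", "copyright"],
}

def classify_matter(text: str) -> str:
    lowered = text.lower()
    for matter, words in MATTER_KEYWORDS.items():
        if any(w in lowered for w in words):
            return matter
    return "general inquiry"

def needs_attorney_review(text: str) -> bool:
    # Flag anything time-sensitive or high-stakes for a human.
    return any(w in text.lower() for w in ["deadline", "court", "lawsuit", "urgent"])

def process_inquiry(form: dict) -> dict:
    matter = classify_matter(form["description"])
    return {
        "matter": matter,
        "summary": f"{form['name']}: {matter} inquiry",
        "draft_reply": (
            f"Hi {form['name']}, thanks for reaching out about your "
            f"{matter} question. Here are the next steps..."
        ),
        "flag_for_review": needs_attorney_review(form["description"]),
    }

inquiry = {"name": "Dana",
           "description": "My employer terminated me before a court deadline."}
result = process_inquiry(inquiry)
```

The point of the sketch is the hand-off: the attorney receives the `summary`, the `draft_reply`, and a `flag_for_review` bit, never the raw form.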
A follow-up agent. A home services company built a follow-up sequence for estimates that weren't accepted. Customer gets a quote, doesn't respond in three days, the agent sends a check-in — not a sales push, a "did you have questions about what we quoted?" note. Personalized, pulls from the actual estimate details, signs off as the owner. That kind of follow-up almost never happens manually because it feels awkward. Automated, it just feels attentive.
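The follow-up logic is just a trigger plus a template. A rough sketch, assuming a quote record with a sent date and a reply flag (the owner's name and field names are made up):

```python
# Hypothetical sketch of the follow-up trigger: quotes with no reply
# after three days get a personalized check-in, not a sales push.
from datetime import datetime, timedelta

FOLLOW_UP_AFTER = timedelta(days=3)

def due_for_follow_up(quote: dict, now: datetime) -> bool:
    return (not quote["customer_replied"]
            and now - quote["sent_at"] >= FOLLOW_UP_AFTER)

def draft_check_in(quote: dict) -> str:
    # Pulls from the actual estimate details and signs off as the owner.
    return (f"Hi {quote['customer']}, just checking in on the "
            f"{quote['job']} estimate we sent over (${quote['amount']:,}). "
            f"Did you have any questions about what we quoted?\n- Sam")

now = datetime(2026, 4, 25)
quote = {"customer": "Priya", "job": "roof repair", "amount": 4800,
         "sent_at": datetime(2026, 4, 21), "customer_replied": False}
```

A daily run over open quotes with `due_for_follow_up` is the whole scheduler; the awkward part humans skip is reduced to one predicate.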
A content agent. A consultant built a post generator. She has a running list of questions clients ask her repeatedly. The agent takes one question at a time, drafts a LinkedIn post in her voice — trained on 20 examples she provided — and queues it for her review. She approved four posts during the workshop. She'll spend 45 minutes with it every Sunday scheduling the week.
None of these required a developer. None of them are science fiction. All three took under three hours to go from "I want to automate something" to a working system.
The Thing People Get Wrong Before They Start
They think building an AI agent is a technical problem.
It's a management problem.
You can't manage a team member who doesn't have a clear job description. You can't audit work when you haven't defined what good looks like. You can't improve a system you haven't measured.
The system prompt is the job description. The test run is the interview. The first week of output is the probationary period.
Treat it like hiring and it performs like an employee. Treat it like a magic box and it performs accordingly.
What's Next
The AI Employee workshop builds one role.
Claude OS in a Day builds the operating system — the CLAUDE.md that gives Claude full context about your business, the MCP servers that connect it to your actual tools, and three automated workflows that run without you initiating them.
It's the difference between one good hire and a functioning department.
Claude OS in a Day is Monday, April 27. Five seats. If you were in yesterday's session, Monday is where you build on top of what you made.
Claude for Agencies is Wednesday, April 29 — for people running client-facing businesses who want to remove themselves as the delivery bottleneck.
Both sessions are recorded. All replays go to registered attendees.
— Murph
