I’m standing on a stage in Vienna with three juggling clubs spinning in the air, and I’m trying to explain to a room full of software engineers why their AI architecture is wrong.
Not because I’m rude. Because the analogy is right there, and nobody in the room has picked it up yet.
Here’s the thing about juggling: every prop teaches you something different. Balls teach you focus. Rings teach you rhythm. Clubs teach you timing, and that mistakes are loud. The devil stick teaches you that some things can only be guided, not controlled.
I’ve been juggling for years - for real, not as a metaphor - and the longer I do it, the more I see it in my work. Especially now, when that work increasingly means designing and building teams of AI agents.
So let me give you the prop-by-prop breakdown. Because when I’m explaining agent types to a client, I reach for this framework before I reach for any diagram.
Prop 01
The Ball
Simple, Forgiving, a Great Starting Point
If you’ve never juggled before, you start with one ball. You throw it. You catch it. You add a second. Then three. The ball is the most forgiving prop there is - it’s round, it bounces if you drop it, and the feedback loop is immediate and honest.
In the AI agent world, a ball is your small, single-purpose, stateless agent.
It does one thing. It’s cheap. It’s fast. It doesn’t carry memory between calls. You give it an input, it gives you an output, you move on. My news scanner agent is a ball. Every Thursday morning at 10am, EventBridge triggers it, it reads a set of RSS feeds, filters for AI-relevant content, formats a digest, and drops it in Notion. Done. No history, no memory, no reasoning chain - just: input in, output out.
Balls are the agents that should make up most of your system. They’re the workhorses. They don’t dazzle anyone, but they show up every day without breaking.
What makes a ball agent?
- Short, bounded context window
- Single action or single decision
- Runs on a small or distilled model (Haiku-class, if you’re in the Claude ecosystem)
- Stateless - each run is independent
- Cheap enough that you don’t think twice about running it
Where balls break down: when the task requires context across time, when the output needs to reason about previous outputs, or when the stakes are high enough that you need the agent to understand rather than just process. A ball doesn’t understand. It catches and throws.
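To make the "catches and throws" point concrete, here is a minimal sketch of a ball agent in Python. The function and keyword list (`scan_feed`, `AI_KEYWORDS`) are illustrative stand-ins for the news scanner described above, not its real implementation:

```python
# A "ball" agent: stateless, single-purpose, input in / output out.
# Names and keywords here are illustrative, not from a real system.

AI_KEYWORDS = ("agent", "llm", "bedrock", "claude")

def scan_feed(entries):
    """Filter feed entries for AI-relevant titles and format a digest.

    Each run is independent: no memory, no history, no reasoning chain.
    """
    relevant = [
        e for e in entries
        if any(k in e["title"].lower() for k in AI_KEYWORDS)
    ]
    return "\n".join(f"- {e['title']} ({e['url']})" for e in relevant)

digest = scan_feed([
    {"title": "New LLM agent patterns", "url": "https://example.com/a"},
    {"title": "Gardening tips", "url": "https://example.com/b"},
])
```

Note that the function takes everything it needs as input and returns everything it produces as output - that property is what makes a ball cheap to run, trivial to test, and safe to retry.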
Ball Agent - at a glance

| Model tier | Context | Cost per run | Failure mode | Typical tasks | Pizza load |
|---|---|---|---|---|---|
| Haiku-class | Short, bounded | Fractions of a cent | Bounces - catches and recovers | Scanners, classifiers, pings | 1 slice |
Prop 02
The Ring
Rhythmic, Consistent, Requires Sustained Attention
Rings are interesting because they’re less intuitive than balls. They fly flat. They wobble if you throw them wrong. You have to spin them slightly on release to keep them stable in flight. Once you get the rhythm, though, they’re almost meditative - there’s a consistency to ring juggling that balls don’t have.
In the agent world, a ring is your medium agent: multi-step, with short-term memory, running a repeatable pattern.
Rings are your content drafters, your proposal generators, your meeting summarizers. They take a context window with some history in it, reason across it, produce structured output, and hand off. They’re not one-shot like balls - they hold a thread for the duration of a task - but they don’t have to remember across sessions.
My invoice chaser agent is a ring. It knows the current state of the invoice (context), it looks at what’s happened so far (short history), it decides whether to send a reminder or escalate, and it does it. Next week, it runs again with fresh context. The ring doesn’t need to remember last week’s run - the state is stored in Notion, and the agent reads it fresh each time.
What makes a ring agent?
- Medium context window (enough for a task’s full state)
- Multi-step reasoning or generation
- Uses a mid-tier model (Sonnet-class)
- May read from an external memory store, but doesn’t maintain memory natively
- Produces coherent, structured output that something else can act on
Where rings break down: when the task requires complex judgment across long histories, when the agent needs to synthesize information from many sources, or when it needs to make decisions that compound over time. A ring can hold a pattern, but it can’t improvise.
Ring Agent - at a glance

| Model tier | Context | Cost per run | Failure mode | Typical tasks | Pizza load |
|---|---|---|---|---|---|
| Sonnet-class | Medium - full task state | A few cents | Wobbles visibly - needs correction | Drafters, chasers, formatters | 2 slices |
Prop 03
The Club
Complex, Unforgiving, Loud When It Goes Wrong
Clubs are what people think of when they think of juggling. They’re the classic circus prop. And they are harder than they look - not because the throwing is complicated, but because clubs spin. Every throw has to be calibrated for rotation. Too little spin and the handle arrives before you expect it. Too much and you’re chasing a horizontal club through the air.
Dropping a ball is fine. Dropping a club in front of an audience is an event. Everyone notices. It’s loud.
In the agent world, a club is your large, long-context, high-stakes agent.
Clubs are your research synthesizers. Your architecture reviewers. Your agents that read multiple documents, reason across them, make complex decisions, and produce output that actually matters. They’re expensive to run - they use Opus-class models, they have long context windows, they take time - and when they fail, it’s visible.
My deep research agent is a club. When I ask it to synthesize the current state of multi-agent orchestration patterns, it uses Playwright to scrape sources, retrieves from my Bedrock Knowledge Base, reasons across everything, and produces a structured brief. It costs real money to run. It takes a few minutes. But the output is worth it - it replaces two hours of my own reading and synthesis.
What makes a club agent?
- Long context window
- Multi-source synthesis or complex multi-step reasoning
- Opus-class or high-context model
- Expensive per run - you don’t trigger this on a cron every 15 minutes
- High output quality expected - failures are costly
Where clubs break down: everywhere, if you use them for ball tasks. This is the mistake I see constantly. Someone builds a research-grade agent for a task that needs a simple classifier. They run Opus on a job Haiku could do in 200 milliseconds. They wonder why their costs are out of control.
The juggling principle: don’t use a club when the trick calls for a ball.
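The cost of ignoring that principle compounds quickly. Here's a back-of-the-envelope sketch - the per-call prices are made-up round numbers for illustration, not real model pricing:

```python
# Illustrative cost of the "club on a ball task" mistake.
# Per-call costs are assumed round numbers, not real pricing.

calls_per_day = 10_000
haiku_microdollars = 500      # $0.0005 per call (assumed ball-tier cost)
opus_microdollars = 50_000    # $0.05 per call (assumed club-tier cost)

daily_haiku_usd = calls_per_day * haiku_microdollars / 1_000_000
daily_opus_usd = calls_per_day * opus_microdollars / 1_000_000
overspend = daily_opus_usd / daily_haiku_usd
```

At these assumed rates, the classifier job costs $5 a day on a ball-tier model and $500 a day on a club-tier one - a 100x overspend for output the task never needed.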
Club Agent - at a glance

| Model tier | Context | Cost per run | Failure mode | Typical tasks | Pizza load |
|---|---|---|---|---|---|
| Opus-class | Long - multi-source | Dollars, not cents | Loud drop - everyone notices | Research, proposals, architecture | 4 slices |
Prop 04
The Devil Stick
Autonomous, Unpredictable, Requires a Light Touch
The devil stick is the prop that teaches you something the others don’t: you cannot directly control it.
You use two hand sticks to keep the center stick spinning. You guide it. You correct it when it starts to drift. But the moment you grab it directly - the moment you try to take over - the whole thing falls apart. The skill is learning to influence without controlling. To react to what the prop does, not what you planned for it to do.
In the agent world, a devil stick is your autonomous, long-running agent - an agent you give a goal to, then watch.
These are the agents that scare people. The ones that browse the web, write and execute code, trigger other agents, and make decisions in a loop until they reach a goal or hit a stop condition. They’re not triggered once and finished. They run, observe, decide, act, observe again.
Otto - my AI community orchestration bot - has devil stick behaviors. When I ask him to plan the next Vienna meetup, he doesn’t just format a template. He checks Sessionize for speaker submissions, looks at the event calendar, drafts a shortlist of speaker suggestions, checks the venue availability pattern from past events, and proposes an agenda. He does it in a loop. I set the goal; he figures out the steps.
What makes a devil stick agent?
- Autonomous reasoning loop (think, act, observe, repeat)
- Tool access: can call APIs, read/write files, trigger other agents
- Goal-directed rather than instruction-directed
- Requires guardrails: stop conditions, maximum iterations, human-in-the-loop checkpoints
- Unpredictable in execution - same goal can take different paths
Where devil sticks break down: without guardrails, they run forever. Without a well-defined goal, they hallucinate a target. Without observability, you have no idea what they’re doing until they’ve already done it. The key insight from juggling: you have to stay actively engaged. The devil stick doesn’t run itself. You guide it.
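The guardrails named above - a stop condition, an iteration ceiling, and observability - can be sketched as a loop skeleton. Everything here is a hypothetical stand-in (`run_agent`, `toy_step`); a real think/act/observe cycle would call tools and models inside `step`:

```python
# A "devil stick" skeleton: goal-directed loop with guardrails.
# `step` is a stand-in for a real think/act/observe cycle.

def run_agent(goal, step, max_iterations=10):
    trace = []                                # observability: keep a trace
    state = {"goal": goal, "done": False}
    for i in range(max_iterations):           # guardrail: iteration ceiling
        state = step(state)                   # think, act, observe
        trace.append((i, state.get("note")))
        if state["done"]:                     # guardrail: stop condition
            return state, trace
    raise RuntimeError(f"iteration ceiling hit before reaching goal: {goal}")

def toy_step(state):
    # Toy cycle: counts to three, then declares the goal reached.
    n = state.get("n", 0) + 1
    return {**state, "n": n, "note": f"step {n}", "done": n >= 3}

final, trace = run_agent("demo goal", toy_step)
```

The same goal can take different paths through this loop - which is exactly why the ceiling and the trace are not optional extras. Without them, "silent drift" is the default failure mode.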
Devil Stick Agent - at a glance

| Model tier | Context | Cost per run | Failure mode | Typical tasks | Pizza load |
|---|---|---|---|---|---|
| Variable (scales with task) | Self-extending across loop | Unpredictable - set a ceiling | Silent drift - may not tell you it failed | Goal-directed multi-step orchestration | Variable - define at design time |
How the Props Work Together in a Team
Here’s what took me a while to understand: the prop taxonomy isn’t just for classifying individual agents. It’s a design language for the whole crew.
When I built Otto, I started with one ball. A single Slack bot that could answer questions about upcoming events. Stateless. Simple. It worked. Then I noticed what it couldn’t do - and instead of making it smarter, I added more props.
Today Otto is an ensemble of seven agents:
- Four balls: ingestion, notification routing, FAQ responses, calendar sync
- Two rings: speaker coordination and event drafts
- One orchestrating layer that behaves more like a devil stick - watching the whole system and deciding which agent needs to fire
The insight from this: never make one agent smarter when you can give it a specialized colleague instead.
This mirrors how juggling actually scales. You don’t start with a complicated prop and simplify down. You master one ball, add a second, add a third. When you can handle three balls cleanly, you introduce rings into the pattern - not to replace the balls, but because the mix becomes richer.
The right question is never “can I make this agent handle more?” - it’s “what’s the minimum intelligence this task actually requires?”
Running a community works the same way. I started managing the AWS User Group Vienna entirely manually. Spreadsheets. Calendar reminders. DMs. I was the only agent in the system, and I was running at capacity. When I reached my limit, I didn’t try to be a better human - I built a ball. Then a ring. Then gradually assembled a crew that handles the routine work so I can focus on the parts that genuinely require judgment.
The team model isn’t about replacing human work. It’s about right-sizing each task to the right prop.
The Ensemble Capacity Model
A juggler has a total cognitive load they can sustain. Research into physical juggling suggests around 11 units of sustained attention. The unit cost varies by prop because they demand different kinds of focus:
| Prop | Juggler attention | AI model tier | Pizza slices |
|---|---|---|---|
| Ball | 1 unit | Haiku-class | 1 slice |
| Club | 2 units | Opus-class | 4 slices |
| Ring | 3 units | Sonnet-class | 2 slices |
Notice something interesting: in physical juggling, rings are the most attention-demanding per item - they require more precision to maintain than clubs. But in AI agent design, clubs (Opus-class) are the most expensive to run. The analogy isn’t perfect. It’s a map, not a mirror.
What both models capture is the same underlying truth: you can’t run everything at maximum complexity simultaneously. A juggler mid-routine has no attention left to improvise. An AI system running three concurrent Opus agents has no token budget left for anything else.
The load formula for an AI session window: balls × 1 + rings × 2 + clubs × 4 ≤ 16 slices
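The formula is simple enough to encode directly. As a sketch, here it is applied to Otto's crew from the previous section - with the assumption (mine, for headroom) that the variable-load orchestrator is budgeted as a club:

```python
# The load formula from the text: balls×1 + rings×2 + clubs×4 ≤ 16 slices.

SLICE_COST = {"ball": 1, "ring": 2, "club": 4}
BUDGET = 16  # slices per session window

def ensemble_load(balls=0, rings=0, clubs=0):
    return (balls * SLICE_COST["ball"]
            + rings * SLICE_COST["ring"]
            + clubs * SLICE_COST["club"])

# Otto's crew: four balls, two rings, one orchestrator
# (variable load - budgeted here as a club, an assumption for headroom).
load = ensemble_load(balls=4, rings=2, clubs=1)
```

That crew lands at 12 of 16 slices - four slices of headroom, which is the margin that lets you spin up a one-off club run without blowing the session budget.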
The full resource allocation system lives in Post 4: The Complete Juggling-Pizza Framework.
The Environment Changes Everything
Here’s the juggling insight that most agent architecture discussions miss: the same prop behaves differently depending on where you’re juggling.
Indoors, no wind, perfect lighting - you juggle clubs the way you practiced them. You know exactly what’s coming. Outdoors, light breeze - the rings wobble slightly. You adjust. Full wind and rain - you don’t juggle clubs at all. You grab the balls, because the balls are forgiving enough to survive the chaos.
Agent systems are the same. The “environment” is your production context:
- Dev/staging (indoors): predictable, controlled, no real consequences. Test your clubs here. Try the devil stick. See what breaks.
- Non-critical production (light breeze): moderate stakes. Use rings. Run clubs with human review. Log everything.
- High-stakes production (full wind): don’t run devil sticks without human checkpoints. Consider whether a ring can do the job instead of a club. Have fallbacks for every agent that matters.
I watched a team deploy an autonomous agent to their customer-facing production system - basically, a devil stick outdoors in a thunderstorm - with no guardrails, no human review, and no stop conditions. The agent decided that “help the customer resolve their issue” included sending a full account credit without any approval flow. The agent was technically correct. The business was furious.
Wrong prop for the environment.
The Prop Selection Checklist
Before I design any agent, I ask three questions:
1. How much does it need to remember? Nothing = ball. The current task = ring. Across many sources and long history = club. Across an autonomous run with self-generated memory = devil stick.
2. What happens when it fails? Bounce and recover = ball. Noticeable but handleable = ring. Loud and visible = club. Potentially cascading and hard to reverse = devil stick. Match the stakes to the prop.
3. What’s the environment? Controlled dev = fine to experiment with clubs and devil sticks. Production with real consequences = start with balls and rings. Add complexity only when simpler props can’t do the job.
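The three questions collapse into a small decision function. This is a sketch of the checklist, not a prescription - the category names and thresholds are my own illustrative choices:

```python
# The prop selection checklist as a decision function (illustrative).

def select_prop(memory, failure_mode, environment):
    """memory: 'none' | 'task' | 'long_history' | 'self_generated'
    failure_mode: 'recoverable' | 'noticeable' | 'loud' | 'cascading'
    environment: 'dev' | 'low_stakes_prod' | 'high_stakes_prod'
    """
    if memory == "self_generated" or failure_mode == "cascading":
        # Devil stick territory - autonomous, so the environment decides
        # how much human oversight is non-negotiable.
        if environment == "high_stakes_prod":
            return "devil stick + human checkpoints"
        return "devil stick"
    if memory == "long_history" or failure_mode == "loud":
        return "club"
    if memory == "task":
        return "ring"
    return "ball"
```

Run through a few cases by hand: a stateless classifier in dev is a ball; a task-scoped drafter in low-stakes production is a ring; anything with self-generated memory in high-stakes production gets a devil stick only with human checkpoints attached.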
The patterns behind each of these questions are explored further in Post 3: Dropping the Ball Is the Point - where all three drop types map directly to root causes in AI systems.
Linda Mohamed is an AWS Hero and cloud consultant in Vienna. She runs the AWS User Groups in Vienna and Linz, builds AI agent systems on AWS Bedrock, and occasionally juggles clubs on stage to make a point about distributed architectures.