A practical example of Workflow vs Agent architecture in LangGraph. Given a topic, the system researches it, generates slides, validates quality, and publishes to Gamma — all from a single CLI command.
Built as a live demo for the webinar "Next Gen AI: Why Simple Prompting Is No Longer Enough".
Everyone talks about "AI agents", but what does that actually mean in code? This project implements the same task in two ways so you can see the difference:
- Workflow mode — the developer defines the execution graph. The LLM works inside each step.
- Agent mode — the developer provides tools and a prompt. The LLM decides what to call and when.
Same APIs, same LLM, same output — different architecture.
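In plain Python, the difference boils down to who drives the control flow. The sketch below is illustrative only (stub functions, no LangGraph): `research`, `write`, `publish`, and the scripted "LLM" are stand-ins, not the repo's code.

```python
# Illustrative sketch in plain Python (no LangGraph); every function here
# is a stub, not the repo's actual code.

def research(topic):  return f"facts({topic})"
def write(facts):     return f"slides({facts})"
def publish(slides):  return f"url-for({slides})"

def workflow_mode(topic):
    """Workflow: the developer hard-codes the call order."""
    return publish(write(research(topic)))

def agent_mode(topic, llm_decide):
    """Agent: `llm_decide` (a stand-in for the LLM) picks the next tool."""
    tools = {"research": research, "write": write, "publish": publish}
    value, done = topic, False
    while not done:
        name, done = llm_decide(value)  # real system: an LLM tool-use call
        value = tools[name](value)
    return value

# A scripted "LLM" that happens to choose the same order as the workflow:
script = iter([("research", False), ("write", False), ("publish", True)])
assert agent_mode("AI", lambda _: next(script)) == workflow_mode("AI")
```

Both functions produce the same result here; the point is that only `agent_mode` could have chosen a different path at runtime.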
```mermaid
flowchart LR
    subgraph WF["Workflow Mode — developer controls the flow"]
        direction LR
        R[Researcher] --> W[Writer] --> V[Validator]
        V -->|PASS| P[Publisher]
        V -.->|FAIL + feedback| W
    end
```
```mermaid
flowchart TD
    subgraph AG["Agent Mode — LLM controls the flow"]
        O[ReAct Orchestrator]
    end
    O -->|Tool Use| S[search_web]
    O -->|Routing| C[classify_topic]
    O -->|Parallelization| G[generate_variants]
    O -->|Voting| E[evaluate_and_select]
    O -->|Human-in-the-Loop| H[request_human_approval]
    O -->|Tool Use MCP| PG[publish_to_gamma]
    S --> O
    C --> O
    G --> O
    E --> O
    H --> O
    PG --> O
```
| | Workflow | Agent |
|---|---|---|
| Who decides flow | Developer (code) | LLM (runtime) |
| Searches | Always 3-5 (hardcoded) | Agent decides (0-8) |
| Writers | 1, fixed style | 3 in parallel, different styles |
| Quality check | `if score >= 7` | LLM picks best from 3 variants |
| Routing | None | Classifies topic, selects strategies |
| Human review | None | `interrupt()` before publish |
| Revisions | Fixed loop, max N | Agent decides: revise, research more, or publish |
| Patterns | 3 | 6 |
Agent mode implements 6 of 7 patterns from the Anthropic "Building Effective Agents" guide:
| # | Pattern | Tool | What Happens |
|---|---|---|---|
| 1 | Tool Use | `search_web` | Tavily web search — agent decides how many queries |
| 2 | Routing | `classify_topic` | Classifies topic type, returns 3 writing strategies |
| 3 | Parallelization | `generate_variants` | 3 writers run concurrently via `asyncio.gather` |
| 4 | Voting | `evaluate_and_select` | Validator LLM scores all variants, picks the best |
| 5 | Human-in-the-Loop | `request_human_approval` | LangGraph `interrupt()` pauses for human review |
| 6 | Orchestrator-Workers | ReAct agent | LLM autonomously delegates to specialized tools |
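To make pattern 3 concrete, a minimal parallel-variants tool might look like the sketch below. The `STYLES` names and the writer stub are assumptions for illustration; the repo's actual `generate_variants` calls an LLM per style.

```python
import asyncio

STYLES = ["technical", "narrative", "visual"]  # assumed style names

async def write_variant(topic: str, style: str) -> str:
    await asyncio.sleep(0)  # stands in for an async LLM call
    return f"{style} slides for {topic}"

async def generate_variants(topic: str) -> list[str]:
    # All writers run concurrently via asyncio.gather
    return await asyncio.gather(*(write_variant(topic, s) for s in STYLES))

variants = asyncio.run(generate_variants("AI Agents"))
```

Because the writers share no state, `asyncio.gather` lets them overlap their (LLM) I/O instead of running one after another.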
- Python 3.12+
- OpenAI API key
- Tavily API key (free tier works)
- (Optional) Gamma API key for live publishing
```shell
git clone https://github.com/TomMaSS/langgraph-workflow-vs-agent.git
cd langgraph-workflow-vs-agent
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your API keys
```

```shell
# Workflow mode (fixed pipeline)
python main.py "AI Agents in Production" --dry-run

# Agent mode (autonomous)
python main.py "AI Agents in Production" --mode agent --dry-run

# With Gamma publishing (requires GAMMA_API_KEY)
python main.py "AI Agents in Production" --mode agent --slides 10
```

| Flag | Default | Description |
|---|---|---|
| `topic` | (prompted) | Presentation topic |
| `--mode` | `workflow` | `workflow` or `agent` |
| `--dry-run` | `false` | Skip Gamma, print markdown |
| `--slides N` | `10` | Number of slides |
| `--max-revisions N` | `2` | Max revision cycles (workflow only) |
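The flag set above maps naturally onto `argparse`. This is a hypothetical sketch of the CLI surface, not the repo's actual `main.py`:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser matching the flags table; main.py may differ.
    p = argparse.ArgumentParser(description="Topic -> slides -> Gamma")
    p.add_argument("topic", nargs="?", default=None,
                   help="Presentation topic (prompted if omitted)")
    p.add_argument("--mode", choices=["workflow", "agent"], default="workflow")
    p.add_argument("--dry-run", action="store_true",
                   help="Skip Gamma, print markdown")
    p.add_argument("--slides", type=int, default=10, metavar="N")
    p.add_argument("--max-revisions", type=int, default=2, metavar="N",
                   help="Max revision cycles (workflow only)")
    return p

args = build_parser().parse_args(["AI Agents", "--mode", "agent", "--dry-run"])
```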
1. Researcher → expands topic, runs 3-5 Tavily searches
2. Writer → generates slide markdown (speaker notes + bullets)
3. Validator → scores 0-10, sends back for revision if < 7
4. Publisher → sends to Gamma MCP or prints markdown
Every run follows the same path. Predictable, reliable.
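The bounded validate-and-revise loop can be sketched like this. The scorer is a stub (the real validator is an LLM), and the function names are illustrative:

```python
def score(draft: str) -> int:
    # Stub: pretend the first draft scores 6 and any revision scores 9.
    return 9 if draft.startswith("revised") else 6

def validate_and_revise(draft: str, max_revisions: int = 2) -> str:
    for _ in range(max_revisions):
        if score(draft) >= 7:        # PASS -> stop revising, publish
            break
        draft = f"revised {draft}"   # FAIL -> send back to the writer
    return draft
```

The loop bound (`--max-revisions`, default 2) is what keeps the workflow predictable: it cannot revise forever.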
1. RESEARCH → agent decides how many searches (saw it do 2-6)
2. ROUTING → classifies topic, picks 3 writing styles
3. GENERATION → 3 writers run in parallel, each with a different angle
4. EVALUATION → validator scores all 3, picks the best (Rich table in terminal)
5. HUMAN REVIEW → interrupt() — agent pauses, you approve/reject/give feedback
6. PUBLISHING → sends winning content to Gamma
The path changes based on the topic. If you give feedback, the agent loops back and regenerates.
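The approval gate's routing logic is simple once the pause is abstracted away. In the real tool the pause is LangGraph's `interrupt()`; in this sketch it is stubbed with a callback so the decision logic stands alone:

```python
def request_human_approval(draft: str, ask) -> str:
    # Real code: decision = interrupt({"draft": draft}) pauses the graph
    # until a human resumes it; here `ask` is a stand-in callback.
    decision = ask(draft)
    if decision == "approve":
        return "publish"
    if decision == "reject":
        return "stop"
    return "revise"  # any other input is treated as feedback -> regenerate
```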
```
.
|-- main.py              # CLI entrypoint (--mode workflow|agent)
|-- graph.py             # Workflow: StateGraph with explicit edges
|-- agent_graph.py       # Agent: ReAct agent via create_react_agent()
|-- state.py             # Workflow: shared state TypedDict
|
|-- agents/              # Workflow mode nodes
|   |-- researcher.py    # Topic expansion + Tavily search
|   |-- writer.py        # Slide content generation
|   |-- validator.py     # Quality scoring (PASS/FAIL)
|   |-- publisher.py     # Gamma MCP publishing
|   |-- llm.py           # LLM factory (shared)
|
|-- tools/               # Agent mode tools
|   |-- agent_tools.py   # 6 @tool functions
|   |-- gamma_mcp.py     # Gamma MCP client (shared)
|   |-- search.py        # Tavily wrapper (workflow)
|
|-- prompts/             # System prompts
|   |-- orchestrator.md  # Agent: strategy + rules
|   |-- researcher.md    # Workflow: researcher
|   |-- writer.md        # Workflow: writer
|   |-- validator.md     # Workflow: validator
|
|-- utils/
    |-- terminal.py      # Rich console + file logging
    |-- retry.py         # Tenacity retry helpers
```
The agent has hard limits to prevent runaway execution:
| Safeguard | Limit | What it prevents |
|---|---|---|
| `MAX_SEARCHES` | 8 | Infinite search loops |
| `recursion_limit` | 60 | Infinite agent cycles |
| Gamma timeout | 120s | Hanging MCP calls |
| LLM retry | 3 attempts | Rate limit failures |
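The retry safeguard amounts to a bounded loop with backoff. The repo uses Tenacity helpers (`utils/retry.py`); the stdlib sketch below shows the same idea with an assumed `with_retry` helper:

```python
import time

def with_retry(fn, attempts: int = 3, base_delay: float = 0.0):
    """Call fn, retrying on any exception, at most `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise                          # out of attempts: give up
            time.sleep(base_delay * 2 ** i)    # exponential backoff

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

assert with_retry(flaky) == "ok" and calls["n"] == 3
```

Capping at 3 attempts keeps transient rate-limit failures from surfacing while still failing fast on persistent errors.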
See ARCHITECTURE.md for:
- Mermaid diagrams of both modes
- Sequence diagram of the full agent flow
- Side-by-side comparison table
- Shared storage design (why tools pass data via module variables)
- Key technical decisions