This is Part 3 of the series.
- Part 1: Why Python Still Dominates in 2026
- Part 2: Build Your Own AI Chatbot — RAG From Scratch to Deployment
- Part 3: One AI Is No Longer Enough — LangGraph Multi-Agent Systems ← You are here
📌 Level: Intermediate–Advanced (Parts 1–2 are sufficient background)
⏱️ Reading time: ~13 min / Hands-on time: ~3–4 hours
🛠️ End result: A 3-agent AI team that autonomously researches, drafts, and fact-checks content
“Why does AI fall apart when I give it something complex?”
You’ve felt it — whether with GPT or Claude, throwing a complicated task at a single AI produces inconsistent results. Ask it to research, write, and verify all at once and something always slips through the cracks.
People work the same way. A journalist who researches, writes, and fact-checks their own article without anyone else involved produces lower-quality work. That’s why there are teams.
In 2026, AI started working in teams too.
📊 Table of Contents
- What Multi-Agent Means — Thinking of It as an “AI Team”
- Why LangGraph
- What We’re Building: A 3-Agent Research Team
- Environment Setup
- Three Core Concepts — State, Node, Edge
- Step-by-Step: Building 3 Agents
- Coordinating the Team with a Supervisor
- Running the Full Pipeline
- Real-World Patterns & Extension Ideas
1. What Multi-Agent Means — “AI Teammates”
The AI we built in previous parts was a solo freelancer — it receives a question, thinks it through alone, and answers alone.
A multi-agent system is a team with division of labor.
```
[Single AI]
User → AI (does everything alone) → Result
        └ Research, writing, verification — all solo

[Multi-Agent]
User → Supervisor (team lead)
        ├→ Researcher Agent (research specialist)
        ├→ Writer Agent (writing specialist)
        └→ Fact-checker Agent (verification specialist)
              ↓
         Final output
```
Each agent only has to be good at its own role. The supervisor (team lead) coordinates the sequence.
A single linear chain breaks the moment a task gets complex. LangGraph lets you build stateful, cyclical AI workflows where agents collaborate, review each other's work, and loop until the job is done.
2. Why LangGraph
There are several agent frameworks. Why LangGraph?
| Framework | Characteristics | Best for |
|---|---|---|
| LangChain (simple chains) | Linear A→B→C flow | Simple pipelines |
| CrewAI | Fast prototyping | Idea validation |
| AutoGen | Conversation-centered agents | Conversational collaboration |
| LangGraph | Graph-based, fine-grained control | Production deployment |
The AI agent framework landscape consolidated in 2026. Microsoft moved AutoGen to maintenance mode. CrewAI is still used for quick prototypes. But for production workloads that need fine-grained control, durable execution, and human-in-the-loop capabilities, LangGraph is the default.
LangGraph crossed 126,000 GitHub stars as of April 2026. Companies like Klarna, Uber, Replit, and Elastic run it in production.
Three things make LangGraph special:
- Cycles — Unhappy with the result? Loop back and retry.
- Durable execution — If an agent fails mid-task, it resumes from exactly where it stopped.
- Conditional branching — “If the result is good, stop. If not, research again.” Natural to express.
3. What We’re Building: A 3-Agent Research Team
Scenario: “Enter a topic → AI automatically researches it, drafts a blog post, then verifies the facts.”
```
[User Input: "Python 2026 Trends"]
        ↓
[Supervisor Agent]     ← Team lead. Decides the overall flow
        ↓
[Researcher Agent]     ← Web research, information gathering
        ↓
[Writer Agent]         ← Drafts a blog post from the research
        ↓
[Fact-checker Agent]   ← Verifies claims; approves or requests revisions
        ↓
[Final Output]         ← Verified blog draft
```
Each agent receives the previous agent’s output and performs its role. The fact-checker must approve for a final output to be produced. If it rejects, the draft goes back to the writer for revision.
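Each agent returns only the keys it changed, and the runner merges those partial updates into the shared state. That handoff can be sketched in plain Python before touching LangGraph at all — the stub agents and strings below are placeholders, not real LLM output:

```python
# A minimal sketch of the agent handoff, with LLM calls stubbed out.

def researcher(state: dict) -> dict:
    # Returns only the keys it changed; the runner merges them into state
    return {"research_notes": f"notes on {state['topic']}"}

def writer(state: dict) -> dict:
    return {"draft": f"Blog post based on: {state['research_notes']}"}

def fact_checker(state: dict) -> dict:
    # Approve only if the draft actually cites the research
    ok = state["research_notes"] in state["draft"]
    return {"approved": ok, "final_output": state["draft"] if ok else ""}

state = {"topic": "Python 2026 Trends"}
for agent in (researcher, writer, fact_checker):
    state.update(agent(state))  # the same merge LangGraph performs on partial updates

print(state["approved"])  # → True
```

The `state.update` loop is the whole idea in miniature: agents never call each other directly, they only read and write the shared notepad.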
4. Environment Setup
```bash
pip install langgraph langchain langchain-anthropic \
    langchain-community python-dotenv
```
.env file
```
ANTHROPIC_API_KEY=your-api-key-here
```
Folder structure
```
multi-agent/
├── .env
├── main.py        # Entry point
├── state.py       # Shared state definition
├── agents.py      # The 3 agents
├── supervisor.py  # Supervisor logic
└── graph.py       # Graph assembly
```
5. Three Core Concepts — State, Node, Edge
Before looking at code, understand these three concepts. Everything else clicks once you have them.
① State — The Team’s Shared Notepad
A single dictionary shared by all agents. Agents read from it and write their results back to it.
```python
# Team shared notepad
state = {
    "topic": "Python 2026 Trends",  # Input
    "research": "",                 # Researcher fills this
    "draft": "",                    # Writer fills this
    "fact_check": "",               # Fact-checker fills this
    "approved": False,              # Final approval status
    "iteration": 0                  # Revision count
}
```
② Node — Each Team Member
A single unit of work in the graph. Each agent is one node — a function that receives State and returns State.
```python
def researcher_node(state):
    # Read topic from state
    # Perform research
    # Update state's research field
    return {"research": "...research results..."}
```
③ Edge — The Connection Between Team Members
Lines connecting nodes. Conditional Edges let you branch: “in this case go to A, otherwise go to B.”
```python
# Branch based on fact-check result
def should_continue(state):
    if state["approved"]:
        return "END"      # Approved → stop
    elif state["iteration"] >= 3:
        return "END"      # 3+ revisions → stop anyway
    else:
        return "writer"   # Not approved → send back to writer
```
Those three concepts are all of LangGraph. The rest is assembly.
6. Step-by-Step: Building 3 Agents
state.py — Shared State Definition
```python
# state.py
from typing import TypedDict

class ResearchState(TypedDict):
    """State shared across the entire team"""
    topic: str              # Research topic
    research_notes: str     # Researcher's findings
    draft: str              # Writer's draft
    fact_check_result: str  # Fact-checker's review
    approved: bool          # Final approval
    revision_notes: str     # Revision requests
    iteration: int          # Current revision count (prevents infinite loops)
    final_output: str       # Final deliverable
```
agents.py — The 3 Agents
```python
# agents.py
import os
from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from state import ResearchState

load_dotenv()

# Shared LLM — all agents use the same model; roles are defined by prompts
llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    temperature=0.3,
    max_tokens=2000
)

# ────────────────────────────────────────
# Agent 1: Researcher
# ────────────────────────────────────────
def researcher_agent(state: ResearchState) -> dict:
    """
    Takes a topic and gathers key information.
    In production, wire in a real search tool (Tavily, SerpAPI, etc.).
    Here we rely on the LLM's training knowledge.
    """
    print("🔍 [Researcher] Starting research...")

    prompt = ChatPromptTemplate.from_template("""You are a professional researcher. Gather key information about the given topic.

Topic: {topic}

Provide your findings in the following format:

## Key Facts (5–7 items)
- Fact 1: ...
- Fact 2: ...

## Notable Statistics & Data
- ...

## Latest Trends
- ...

## Relevant Examples
- ...

Include only accurate and specific information. Mark uncertain information as "(estimated)".""")

    chain = prompt | llm
    result = chain.invoke({"topic": state["topic"]})

    print(f"✅ [Researcher] Complete — {len(result.content)} chars")
    return {"research_notes": result.content}

# ────────────────────────────────────────
# Agent 2: Writer
# ────────────────────────────────────────
def writer_agent(state: ResearchState) -> dict:
    """
    Writes a blog draft based on the researcher's notes.
    Incorporates any revision requests from the fact-checker.
    """
    print("✍️ [Writer] Starting draft...")

    revision_context = ""
    if state.get("revision_notes"):
        revision_context = f"""⚠️ Fact-checker revision requests:
{state['revision_notes']}
Please address all of the above in your revision."""

    prompt = ChatPromptTemplate.from_template("""You are a technical blog writer. Write a clear, engaging blog post using the researcher's notes.

Topic: {topic}

Research notes:
{research_notes}

{revision_context}

Writing guidelines:
- Include a title
- Structure: intro → body (3–4 sections) → conclusion
- Explain technical terms clearly
- Use specific numbers and examples
- Target length: ~800–1,000 words""")

    chain = prompt | llm
    result = chain.invoke({
        "topic": state["topic"],
        "research_notes": state["research_notes"],
        "revision_context": revision_context
    })

    print(f"✅ [Writer] Draft complete — {len(result.content)} chars")
    return {
        "draft": result.content,
        "revision_notes": ""  # Clear revision notes after incorporating
    }

# ────────────────────────────────────────
# Agent 3: Fact-checker
# ────────────────────────────────────────
def fact_checker_agent(state: ResearchState) -> dict:
    """
    Reviews the writer's draft against the research notes.
    Identifies factual errors or exaggerations.
    Returns either APPROVED or a revision request.
    """
    print("🔎 [Fact-checker] Starting review...")

    prompt = ChatPromptTemplate.from_template("""You are a rigorous fact-checker. Review the blog draft.

Original research notes:
{research_notes}

Draft to review:
{draft}

Check for:
1. Claims not supported by the research notes
2. Inaccurate statistics or figures
3. Exaggerated or misleading language
4. Logical inconsistencies

Respond ONLY in the following format:

Verdict: [APPROVED or NEEDS_REVISION]

Review notes:
(If approved: 1–2 sentences on why it passes)
(If revision needed: specific items and reasons)""")

    chain = prompt | llm
    result = chain.invoke({
        "research_notes": state["research_notes"],
        "draft": state["draft"]
    })

    content = result.content
    # Approved only if the verdict says APPROVED and not NEEDS_REVISION
    # (the review notes could mention "approved" in passing)
    approved = "APPROVED" in content.upper() and "NEEDS_REVISION" not in content.upper()

    if approved:
        print("✅ [Fact-checker] Approved!")
        return {
            "fact_check_result": content,
            "approved": True,
            "final_output": state["draft"],
            "iteration": state.get("iteration", 0) + 1
        }
    else:
        print(f"⚠️ [Fact-checker] Revision requested (attempt {state.get('iteration', 0) + 1})")
        return {
            "fact_check_result": content,
            "approved": False,
            "revision_notes": content,
            "iteration": state.get("iteration", 0) + 1
        }
```
7. Coordinating the Team with a Supervisor
supervisor.py — The Team Lead
```python
# supervisor.py
from state import ResearchState

def should_continue(state: ResearchState) -> str:
    """
    Reads the fact-check result and decides the next step.
    Returns:
      - "writer" → Send back to writer for revision
      - "end"    → Work complete
    """
    MAX_ITERATIONS = 3  # Prevent infinite loops
    current_iter = state.get("iteration", 0)

    if state.get("approved", False):
        print(f"\n🎉 [Supervisor] Final approval! (Total attempts: {current_iter})")
        return "end"

    if current_iter >= MAX_ITERATIONS:
        print("\n⏱️ [Supervisor] Max revisions reached. Finalizing last version.")
        return "end"

    print(f"\n🔄 [Supervisor] Sending revision back to writer (attempt {current_iter + 1}/{MAX_ITERATIONS})")
    return "writer"
```
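Because `should_continue` is plain Python with no LLM calls, its decision table can be checked directly before you wire up the graph — routing bugs otherwise surface as silent retry loops. A self-contained sketch (the function is inlined here, minus the print statements):

```python
# Self-contained check of the supervisor's decision table.
# (Inlined copy of should_continue, print statements dropped.)
def should_continue(state: dict) -> str:
    MAX_ITERATIONS = 3
    if state.get("approved", False):
        return "end"
    if state.get("iteration", 0) >= MAX_ITERATIONS:
        return "end"
    return "writer"

assert should_continue({"approved": True, "iteration": 1}) == "end"      # approved → stop
assert should_continue({"approved": False, "iteration": 3}) == "end"     # out of retries → stop
assert should_continue({"approved": False, "iteration": 1}) == "writer"  # rejected → revise
print("routing OK")
```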
graph.py — Assembling the Graph
```python
# graph.py
from langgraph.graph import StateGraph, END
from state import ResearchState
from agents import researcher_agent, writer_agent, fact_checker_agent
from supervisor import should_continue

def build_research_graph():
    """
    Assembles the 3-agent team as a graph.

    Flow:
        researcher → writer → fact_checker
                       ↑           ↓
                       └ (revision loop)
                                   ↓ (approved or max reached)
                                  END
    """
    # 1. Initialize graph
    workflow = StateGraph(ResearchState)

    # 2. Register nodes
    workflow.add_node("researcher", researcher_agent)
    workflow.add_node("writer", writer_agent)
    workflow.add_node("fact_checker", fact_checker_agent)

    # 3. Set entry point
    workflow.set_entry_point("researcher")

    # 4. Regular edges (sequential flow)
    workflow.add_edge("researcher", "writer")
    workflow.add_edge("writer", "fact_checker")

    # 5. Conditional edge (branches based on fact-checker output)
    workflow.add_conditional_edges(
        "fact_checker",
        should_continue,
        {
            "writer": "writer",
            "end": END
        }
    )

    # 6. Compile into an executable app
    return workflow.compile()
```
8. Running the Full Pipeline
```python
# main.py
from dotenv import load_dotenv
from graph import build_research_graph

load_dotenv()

def run_research_team(topic: str) -> dict:
    print(f"\n{'='*60}")
    print(f"🚀 Research Team Activated — Topic: '{topic}'")
    print(f"{'='*60}\n")

    app = build_research_graph()

    initial_state = {
        "topic": topic,
        "research_notes": "",
        "draft": "",
        "fact_check_result": "",
        "approved": False,
        "revision_notes": "",
        "iteration": 0,
        "final_output": ""
    }

    result = app.invoke(initial_state)

    print(f"\n{'='*60}")
    print("📄 Final Output")
    print(f"{'='*60}")
    final = result.get("final_output") or result.get("draft", "No output")
    print(final)

    print(f"\n{'='*60}")
    print("📊 Run Summary")
    print(f"  - Total revisions: {result.get('iteration', 0)}")
    print(f"  - Final status: {'✅ Approved' if result.get('approved') else '⏱️ Timed out'}")
    print(f"{'='*60}\n")

    return result

if __name__ == "__main__":
    result = run_research_team("Python and AI Development Trends in 2026")
```
Sample output:
```
============================================================
🚀 Research Team Activated — Topic: 'Python and AI Development Trends in 2026'
============================================================

🔍 [Researcher] Starting research...
✅ [Researcher] Complete — 1243 chars
✍️ [Writer] Starting draft...
✅ [Writer] Draft complete — 987 chars
🔎 [Fact-checker] Starting review...
⚠️ [Fact-checker] Revision requested (attempt 1)

🔄 [Supervisor] Sending revision back to writer (attempt 1/3)
✍️ [Writer] Starting draft...
✅ [Writer] Draft complete — 1054 chars
🔎 [Fact-checker] Starting review...
✅ [Fact-checker] Approved!

🎉 [Supervisor] Final approval! (Total attempts: 2)
============================================================
```
9. Real-World Patterns & Extension Ideas
The pattern we built applies directly to many real workflows.
Pattern 1: Parallel Execution (Speed Improvement)
```python
# Multiple researchers running simultaneously
workflow.add_node("researcher_news", news_researcher)
workflow.add_node("researcher_data", data_researcher)
workflow.add_node("aggregator", combine_results)

# Both researchers run in parallel, results merged in aggregator
workflow.add_edge("researcher_news", "aggregator")
workflow.add_edge("researcher_data", "aggregator")
```
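One caveat: if the two researchers wrote to the same state key, their concurrent updates would collide. The LangGraph convention is to annotate that key with a reducer function so parallel writes get merged instead of overwritten. A sketch with a hypothetical `findings` field (the field name is an assumption, not from the pipeline above):

```python
# A state key shared by parallel writers needs a reducer.
import operator
from typing import Annotated, TypedDict

class ParallelState(TypedDict):
    # Each researcher appends its findings; LangGraph applies
    # the reducer (operator.add on lists = concatenation) to merge writes.
    findings: Annotated[list, operator.add]

# The reducer semantics, shown directly:
merged = operator.add(["news: ..."], ["data: ..."])
print(merged)  # → ['news: ...', 'data: ...']
```

Without the annotation, the graph has no rule for combining two updates to `findings` arriving in the same step.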
Pattern 2: Human-in-the-Loop
```python
# Pause for human approval before sensitive actions
from langgraph.types import interrupt

def sensitive_action_node(state):
    approval = interrupt({
        "message": "Ready to publish this content. Proceed?",
        "draft": state["draft"]
    })
    if approval.get("approved"):
        return {"final_output": state["draft"]}
    else:
        return {"revision_notes": approval.get("feedback", "")}
```
Use Case Ideas
| Use Case | Agent Composition |
|---|---|
| Weekly report automation | Data collection → Analysis → Writing → Review |
| Code review bot | Static analysis → Security check → Performance review → Feedback |
| Customer email handling | Classification → RAG retrieval → Draft → Tone review |
| Competitor monitoring | Crawling → Summary → Comparison → Report |
Wrapping Up — Agents Are Teammates, Not Just Tools
Look back at the team we built today.
The researcher gathers information. The writer turns it into prose. The fact-checker catches the errors. And the supervisor coordinates the whole thing.
That is exactly how a good team operates. Whether AI or human.
The heart of multi-agent systems is not the technology. It is designing the right division of labor to solve the problem at hand. The code is just how you implement that design.
In Part 4, we’ll add memory (long-term recall) to this agent team. Right now the agents forget everything when the session ends. With memory, they’ll remember last week’s articles, automatically avoid repetitive topics, and learn your writing style over time.
🔖 Other posts in this series
- Part 1: Why Python Still Dominates in 2026
- Part 2: Build Your Own AI Chatbot — RAG From Scratch to Deployment
- Part 3: One AI Is No Longer Enough — LangGraph Multi-Agent Systems ← You are here
- Part 4: AI That Finally Remembers — Complete LangGraph Memory Guide (next)
Tags: #Python #LangGraph #MultiAgent #AIAgents #LangChain #AIAutomation #DevTutorial #2026 #AgenticAI #ProductionAI