LangGraph | Part 2/3 of Generative AI for JS Developers
Dive Deep into LangGraph and its integration with JS/TS

Introduction
LangGraph is an innovative graph-based orchestration framework designed for creating stateful, long-running applications that involve multiple actors. Imagine agents that can utilize tools, save their progress, retry tasks, branch out, collaborate with other agents, and maintain both short- and long-term memory. It’s a key part of the LangChain ecosystem and comes as a JavaScript/TypeScript SDK (@langchain/langgraph), allowing you to seamlessly integrate agent-driven workflows into Node.js backends, serverless functions, or full-stack applications.
In this guide, we’ll take a practical journey through LangGraph: what it excels at, the core concepts you need to know, and step-by-step JavaScript examples ranging from quick-starts to more complex setups involving retrieval-augmented generation (RAG), memory management, and custom nodes. We’ll also cover debugging and deployment tips, best practices, official patterns, and ready-to-use code snippets to help you hit the ground running!
TL;DR — When to pick LangGraph
You want fine-grained control over multi-step agent logic (branching, retrying, loops, tool orchestration).
You need thread-scoped persistence (short-term memory / checkpointing) or long-running workflows.
You want to compose LLM steps as nodes and edges (a graph) instead of linear chains.
LangGraph complements LangChain — use LangChain for simple chains and LangGraph when you need agentic orchestration.
Core concepts
StateGraph / graph — the graph definition: nodes (steps) + edges (transitions/conditions). The graph operates on an annotation (typed shape) representing the agent’s state.
Nodes — units of work (call an LLM, call tool(s), do computation). Examples: ToolNode, custom async functions.
Edges / conditional edges — determine the next node based on state (e.g., if the LLM returned a tool call → go to the tools node).
Annotations — typed state schemas (a common one: MessagesAnnotation, which stores chat history).
Checkpointing / MemorySaver — persist thread-specific state so a conversation can resume later; supports short-term and long-term memory patterns.
Tool integrations — built-in connectors let your agent call web search, SQL, vector stores, etc.
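Before touching the SDK, it can help to see the execution model in miniature. The sketch below is dependency-free TypeScript and is NOT the LangGraph API — the state shape, node names, and routing function are all invented for illustration. It shows the core idea: nodes are functions over state, and a conditional edge picks the next node until the graph ends.

```typescript
// Conceptual sketch only — NOT the LangGraph API. It illustrates how a graph
// executes: each node transforms state, and a routing function (a
// "conditional edge") picks the next node until "__end__" is reached.
type State = { messages: string[]; toolCallPending: boolean };

type GraphNode = (state: State) => State;

const nodes: Record<string, GraphNode> = {
  // "agent" node: pretend the LLM asks for a tool on the first turn only.
  agent: (s) => ({
    messages: [...s.messages, "agent: thinking"],
    toolCallPending: s.messages.length === 1, // toy stand-in for "LLM emitted a tool call"
  }),
  // "tools" node: run the tool and clear the pending flag.
  tools: (s) => ({
    messages: [...s.messages, "tools: search results"],
    toolCallPending: false,
  }),
};

// Conditional edge: route based on the current state.
function route(from: string, s: State): string {
  if (from === "agent") return s.toolCallPending ? "tools" : "__end__";
  return "agent"; // tools always loop back to the agent
}

function runGraph(initial: State, entry = "agent"): State {
  let current = entry;
  let state = initial;
  while (current !== "__end__") {
    state = nodes[current](state);
    current = route(current, state);
  }
  return state;
}

const final = runGraph({ messages: ["user: weather in sf?"], toolCallPending: false });
```

The agent → tools → agent loop here is exactly the shape the real StateGraph examples below formalize with addNode, addEdge, and addConditionalEdges.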
Quickstart — tiny ReAct agent (JS/TS)
Prerequisites: Node 18+. Start by installing the following packages:
npm install @langchain/core @langchain/langgraph @langchain/openai @langchain/community (exact package names may vary between versions).
Now, let’s see the example below adapted from the official LangGraph JS quickstart.
// agent.mts (TypeScript module file). Run with: npx tsx agent.mts
// IMPORTANT: set your API keys before running (environment variables are
// safer than hardcoding them inline as shown here).
process.env.OPENAI_API_KEY = "sk-..."; // keep secret
process.env.TAVILY_API_KEY = "tvly-..."; // only if using Tavily search

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

async function main() {
  const agentTools = [new TavilySearchResults({ maxResults: 3 })];
  const agentModel = new ChatOpenAI({ temperature: 0 });

  // Persisted short-term memory / checkpointing
  const agentCheckpointer = new MemorySaver();
  const agent = createReactAgent({
    llm: agentModel,
    tools: agentTools,
    checkpointSaver: agentCheckpointer,
  });

  // Start a thread with an ID so state can be resumed later
  const state = await agent.invoke(
    { messages: [new HumanMessage("what is the current weather in sf")] },
    { configurable: { thread_id: "thread-42" } }
  );
  console.log("Agent reply:", state.messages.at(-1)?.content);
}

main().catch(console.error);
This example shows how to create a simple Reason+Act agent (ReAct). It wires a Chat model, a search tool, and a memory saver so conversation state is persisted between runs.
Building a custom graph — more explicit control
If you need to observe or customize the execution flow (and not use the createReactAgent helper), build a StateGraph manually. This example follows the official pattern: a callModel node that calls the LLM, a ToolNode for tools, and conditional edges deciding whether to continue or finish.
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

const tools = [new TavilySearchResults({ maxResults: 3 })];
const toolNode = new ToolNode(tools);
const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }).bindTools(tools);

// Conditional edge: if the last AI message requested tool calls, run tools;
// otherwise the agent is done.
function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
  const lastMessage = messages[messages.length - 1] as AIMessage;
  if (lastMessage.tool_calls?.length) return "tools";
  return "__end__";
}

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addNode("tools", toolNode)
  .addEdge("tools", "agent")
  .addConditionalEdges("agent", shouldContinue);

const runnable = workflow.compile();

// invoke the graph
const finalState = await runnable.invoke({
  messages: [new HumanMessage("what is the weather in sf?")],
});
console.log(finalState.messages.at(-1)?.content);
This pattern gives you full visibility into how nodes, edges, and conditional routing work. Use it when you want retry loops, self-reflection nodes, or route-to-supervisor logic.
Memory patterns (short-term and long-term)
LangGraph treats short-term memory as part of the graph state (thread-scoped). You persist state via a checkpointer (e.g., MemorySaver, InMemorySaver, or database-backed savers). This allows resuming conversations and avoids re-computing earlier steps. For long-term memory, you extract memories and store them in an external store (Zep, a vector DB, or your DB) and consult them as needed (e.g., personalization/history).
Minimal example: wire an in-memory checkpointer at compile time:
import { MemorySaver } from "@langchain/langgraph"; // in-memory checkpointer (fine for dev/demos)
const checkpointer = new MemorySaver();
const app = workflow.compile({ checkpointer });
// (Official docs show how to plug in Zep or other persistent backends for production memory.)
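To demystify what a checkpointer actually does, here is a toy, dependency-free sketch — not the real MemorySaver, and the invoke function is a stand-in for a compiled graph. The point is the contract: state snapshots are keyed by thread_id, so a later call on the same thread resumes from where it left off.

```typescript
// Toy checkpointer sketch — NOT the real MemorySaver API. It persists a
// snapshot of graph state per thread_id so a later invocation with the same
// thread can resume instead of starting from scratch.
type GraphState = { messages: string[] };

class TinyCheckpointer {
  private store = new Map<string, GraphState>();

  put(threadId: string, state: GraphState): void {
    // structuredClone guards the saved snapshot against later mutation
    this.store.set(threadId, structuredClone(state));
  }

  get(threadId: string): GraphState | undefined {
    return this.store.get(threadId);
  }
}

// A toy "invoke": load prior state for the thread, append, checkpoint again.
function invoke(cp: TinyCheckpointer, threadId: string, userMsg: string): GraphState {
  const prior = cp.get(threadId) ?? { messages: [] };
  const next = { messages: [...prior.messages, userMsg, `echo: ${userMsg}`] };
  cp.put(threadId, next);
  return next;
}

const cp = new TinyCheckpointer();
invoke(cp, "thread-42", "hi");
const resumed = invoke(cp, "thread-42", "what did I say?"); // resumes prior history
```

Swapping the Map for a database table is conceptually all a persistent backend does, which is why moving from MemorySaver to a production saver is a one-line change at compile time.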
Retrieval-Augmented Generation (RAG) with LangGraph + vector stores
LangGraph is commonly used to build agentic RAG systems that decide when to retrieve, which retriever to use, and when to re-retrieve if the answer quality is low. Typical flow:
User question → agent decides “do I need to consult docs?”
If yes → call vector store retriever → attach retrieved docs into graph state → call LLM with context → evaluate answer; if unsatisfactory, reroute to improved retrieval parameters (adaptive RAG).
LangGraph examples cover agentic RAG, local LLaMA3 setups, Elasticsearch/pgvector integrations, and adaptive/fallback strategies.
Small RAG sketch (conceptual):
// Pseudocode: create retriever node, retrieval decision node, and generate node.
// Use a real vectorstore (pgvector, Pinecone, Elasticsearch) and LangChain's vectorstore retriever helpers.
See the official RAG Tutorials for full step-by-step TypeScript examples (indexing, retriever, prompts, and agent orchestration).
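The "evaluate answer; if unsatisfactory, re-retrieve" step above is just a conditional edge. Here is a plain-TypeScript sketch of that routing decision — the state shape, node names, thresholds, and decideNext itself are hypothetical, not official APIs:

```typescript
// Hypothetical adaptive-RAG routing logic (node names and thresholds invented
// for illustration). After a generation node runs, a grader score decides
// whether to finish, re-retrieve with wider parameters, or fall back.
type RagState = {
  question: string;
  retrievedDocs: string[];
  answerScore: number; // e.g., 0..1 from an LLM-as-judge grading node
  attempts: number;    // how many retrieval rounds we've already done
};

function decideNext(state: RagState): "__end__" | "retrieve_wider" | "fallback" {
  const MAX_ATTEMPTS = 3;
  if (state.answerScore >= 0.8) return "__end__";             // good enough → finish
  if (state.attempts < MAX_ATTEMPTS) return "retrieve_wider"; // widen k / relax filters, retry
  return "fallback";                                          // e.g., answer without docs
}
```

In a real graph you would pass a function like this to addConditionalEdges on the generation node, with "retrieve_wider" and "fallback" as actual nodes.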
Multi-agent & supervisor patterns
LangGraph shines where multiple agents interact — e.g., a worker agent does research, a supervisor agent verifies and approves, and a coordinator routes tasks. The graph model makes it natural to spawn or call subgraphs, gather outputs, and make decisions. There are official examples and templates (Agent Supervisor, Agentic RAG, SQL agent). Use them as templates to build complex pipelines.
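The supervisor pattern can be reduced to a routing loop. The following dependency-free sketch (hypothetical names, not the official Agent Supervisor template) shows the essence: a supervisor inspects shared state, dispatches the next worker, and ends once every worker has reported.

```typescript
// Hypothetical supervisor router — a sketch of the pattern, not an official
// template. The supervisor routes to the next pending worker and finishes
// once all results are collected.
type TeamState = {
  pending: string[]; // workers not yet run, e.g. ["researcher", "writer"]
  results: Record<string, string>;
};

function supervisor(state: TeamState): { next: string; state: TeamState } {
  if (state.pending.length === 0) return { next: "__end__", state };
  const [worker, ...rest] = state.pending;
  return { next: worker, state: { ...state, pending: rest } };
}

function runTeam(
  workers: Record<string, (s: TeamState) => string>,
  initial: TeamState,
): TeamState {
  let state = initial;
  while (true) {
    const { next, state: s } = supervisor(state);
    state = s;
    if (next === "__end__") break;
    // Each worker reads shared state and writes its result back into it.
    state = { ...state, results: { ...state.results, [next]: workers[next](state) } };
  }
  return state;
}

const done = runTeam(
  {
    researcher: () => "notes on LangGraph",
    writer: (s) => `draft based on: ${s.results.researcher}`, // consumes upstream output
  },
  { pending: ["researcher", "writer"], results: {} },
);
```

In LangGraph proper, each worker would be a node (or subgraph) and the supervisor a node whose conditional edges perform this routing.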
Best practices & gotchas
Checkpoint early: persist important state between critical nodes so you can resume after crashes.
Keep context small: long message histories hurt LLMs — use retrieval and summarization for older context.
Use conditional edges for clarity: avoid implicit branching in prompts; use graph edges so decisions are visible and testable.
Test tool calls locally: wrap external API calls (search, DB) so you can mock them in tests.
Enable tracing in staging: LangSmith tracing helps find loops, hallucinations, and poor tool routing.
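The "keep context small" advice above usually means trimming or summarizing older history before each LLM call. A minimal sketch of that tactic, with no LangGraph APIs (in practice the summary text would come from a dedicated summarization node, not a placeholder):

```typescript
// Sketch of a history-trimming helper: keep the most recent N messages
// verbatim and collapse everything older into a single summary message.
// The bracketed placeholder stands in for a real LLM-generated summary.
type Msg = { role: "user" | "assistant" | "system"; content: string };

function trimHistory(messages: Msg[], keepLast: number): Msg[] {
  if (messages.length <= keepLast) return messages; // nothing to trim
  const dropped = messages.length - keepLast;
  const summary: Msg = {
    role: "system",
    content: `[summary of ${dropped} earlier messages]`,
  };
  return [summary, ...messages.slice(-keepLast)];
}
```

Running a helper like this inside (or just before) your callModel node keeps token usage bounded as threads grow.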
Example: lightweight long-running workflow (skeleton)
// Sketch: a long-running task that may need to re-run tool calls and persists checkpoints.
const graph = new StateGraph(MyAnnotation)
  .addNode("plan", planNode)            // LLM outlines steps
  .addNode("execute_step", executeNode) // runs a tool / API
  .addNode("evaluate", evalNode)        // LLM evaluates results, decides next step
  .addEdge("__start__", "plan")
  .addEdge("plan", "execute_step")
  .addConditionalEdges("execute_step", decideNext); // loop or finish

const runnable = graph.compile({ checkpointer: myPersistentSaver });
The checkpointer enables your process to resume after crashes or to pause & continue later — ideal for long-running automations.
Final notes & recommendations
LangGraph gives JS/TS developers strong primitives for reliable, debuggable, stateful agentic apps. Start with the createReactAgent helper to prototype, then move to StateGraph when you need more control. If your needs are simple, LangChain chains are lighter; use LangGraph when you need complex orchestration, persistent threads, multi-agent flows, or supervisor logic.
