
LangChain agents are built on LangGraph, so they support the same streaming stack, with agent-focused projections for messages, tool calls, state, and custom updates. For most application and frontend use cases, use event streaming via streamEvents(input, { version: "v3" }) (stream_events(..., version="v3") in Python). Event streaming returns a run object with typed projections, so each projection can be consumed independently instead of parsing stream-mode tuples.
import { createAgent, tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  async ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather",
    description: "Get weather for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const agent = createAgent({
  model: "gpt-5-nano",
  tools: [getWeather],
});

const stream = await agent.streamEvents(
  { messages: [{ role: "user", content: "What is the weather in SF?" }] },
  { version: "v3" }
);

for await (const message of stream.messages) {
  for await (const delta of message.text) {
    process.stdout.write(delta);
  }
}

const finalState = await stream.output;

What you can stream

| Projection | Use |
| --- | --- |
| for await (const event of stream) | Raw protocol events with the full envelope and access to every channel. |
| stream.messages | Model message streams, one per LLM call. |
| message.text | Text deltas and final text for a message. |
| message.reasoning | Reasoning deltas for models that expose reasoning content. |
| message.toolCalls | Tool-call argument chunks and finalized tool calls. |
| message.output | Final message object after the model call completes. |
| message.usage | Token usage metadata when the provider returns it. |
| stream.values | Agent state snapshots. |
| stream.output | Final agent state. |
| stream.subgraphs | Nested graph runs (sub-agents and plain subgraphs). |
| stream.extensions | Custom transformer projections. |
| stream.toolCalls | Tool execution lifecycle, inputs, output deltas, final output, and errors. |
stream.messages yields message streams. Each message stream exposes .text, .reasoning, .toolCalls, .output, and .usage. Async projections can be iterated for live deltas or awaited for final values.
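The iterate-or-await behavior can be sketched as a small self-contained pattern: an object that is async-iterable for live deltas and thenable for the final value. This illustrates the shape only, not LangChain's actual implementation; textProjection is a hypothetical helper and the deltas are canned.

```typescript
// Sketch of the "projection" pattern: iterate for deltas, or await for the
// final accumulated value. Names and internals here are illustrative.
type Projection<Delta, Final> = AsyncIterable<Delta> & PromiseLike<Final>;

function textProjection(deltas: string[]): Projection<string, string> {
  const final = deltas.join("");
  return {
    // Iterating the projection yields each delta as it arrives.
    async *[Symbol.asyncIterator]() {
      for (const d of deltas) yield d;
    },
    // Awaiting the projection resolves to the final value.
    then<T1 = string, T2 = never>(
      onFulfilled?: ((value: string) => T1 | PromiseLike<T1>) | null,
      onRejected?: ((reason: unknown) => T2 | PromiseLike<T2>) | null
    ) {
      return Promise.resolve(final).then(onFulfilled, onRejected);
    },
  };
}

async function main() {
  const text = textProjection(["Hel", "lo, ", "world"]);
  let streamed = "";
  for await (const delta of text) streamed += delta; // consume live deltas
  console.log(streamed); // "Hello, world"
  console.log(await textProjection(["Hi", "!"])); // "Hi!"
}

main();
```

Either consumption style works on its own; a UI that only needs the completed message can skip delta iteration entirely and just await the projection.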

Agent messages

Use stream.messages when you want model output from each LLM call.
const stream = await agent.streamEvents(input, { version: "v3" });

for await (const message of stream.messages) {
  process.stdout.write(`[${message.node}] `);
  for await (const delta of message.text) {
    process.stdout.write(delta);
  }

  const fullMessage = await message.output;
  console.log(fullMessage.content);

  const usage = await message.usage;
  if (usage) {
    console.log(usage);
  }
}
message.output gives you the finalized AI message, including provider-specific content blocks. In TypeScript, use message.usage when you only need token counts or other usage metadata; in Python, read usage from message.output.usage_metadata.

Reasoning content

Reasoning content uses the same shape as text content, but it is available only when the selected model emits reasoning blocks.
const stream = await agent.streamEvents(input, { version: "v3" });

for await (const message of stream.messages) {
  for await (const delta of message.reasoning) {
    process.stdout.write(`[thinking] ${delta}`);
  }

  for await (const delta of message.text) {
    process.stdout.write(delta);
  }
}
See the reasoning guide and your provider’s integration page for model configuration details.

Tool calls

There are two useful tool-call projections:
  • message.toolCalls (message.tool_calls in Python) streams tool-call argument chunks while the model is producing the tool call.
  • stream.toolCalls (stream.tool_calls in Python) streams the lifecycle of tool execution after the tool call starts.
const stream = await agent.streamEvents(input, { version: "v3" });

await Promise.all([
  (async () => {
    for await (const message of stream.messages) {
      for await (const chunk of message.toolCalls) {
        console.log("tool call chunk", chunk);
      }
    }
  })(),
  (async () => {
    for await (const call of stream.toolCalls) {
      console.log(call.name, call.input);
      console.log(await call.output, await call.error);
    }
  })(),
]);

Streaming sub-agents

When one createAgent agent invokes another (typically via a wrapping tool), the inner agent’s events flow under a nested namespace and surface as a handle on stream.subgraphs. Each handle exposes the inner agent’s own .messages, .values, .toolCalls, and .output projections. The name you pass to createAgent (create_agent in Python) becomes subagent.name (subagent.graph_name in Python), which lets you filter and label per agent. Every nested CompiledStateGraph shows up on stream.subgraphs; createAgent instances are one specific kind. Filter on the name to act only on the ones you care about.
import { createAgent, tool } from "langchain";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => `It's always sunny in ${city}!`,
  { name: "get_weather", schema: z.object({ city: z.string() }) }
);

const weatherAgent = createAgent({
  model: "openai:gpt-5.4",
  tools: [getWeather],
  name: "weather_agent",
});

const callWeather = tool(
  async ({ query }) => {
    const result = await weatherAgent.invoke({
      messages: [{ role: "user", content: query }],
    });
    return result.messages.at(-1)?.text ?? "";
  },
  { name: "call_weather", schema: z.object({ query: z.string() }) }
);

const supervisor = createAgent({
  model: "openai:gpt-5.4",
  tools: [callWeather],
  name: "supervisor",
});

const stream = await supervisor.streamEvents(
  { messages: [{ role: "user", content: "What's the weather in Boston?" }] },
  { version: "v3" }
);

for await (const subagent of stream.subgraphs) {
  if (subagent.name !== "weather_agent") continue;
  process.stdout.write(`${subagent.name}: `);
  for await (const message of subagent.messages) {
    for await (const token of message.text) {
      process.stdout.write(token);
    }
  }
  process.stdout.write("\n");
}
The same projection covers plain StateGraph subgraphs invoked from a tool: pass a name to .compile() to get a label in subagent.name (subagent.graph_name in Python). There is no separate sub-agent-only projection; the filter is whatever you write into your loop.
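Stripped of the LangChain specifics, the name-based filtering loop reduces to the pattern below. The handle shape (a name plus an async-iterable of text chunks) and the mockSubgraphs generator are invented for illustration; they are not the real stream.subgraphs objects.

```typescript
// Mock sketch of filtering nested-graph handles by name.
interface SubgraphHandle {
  name: string;
  chunks: AsyncIterable<string>;
}

// Simulates a stream of subgraph handles, one per nested graph run.
async function* mockSubgraphs(): AsyncGenerator<SubgraphHandle> {
  const make = (name: string, parts: string[]): SubgraphHandle => ({
    name,
    chunks: (async function* () { yield* parts; })(),
  });
  yield make("planner", ["thinking..."]);
  yield make("weather_agent", ["Sunny ", "in Boston"]);
}

// Collect streamed text from only the named subgraph, skipping the rest.
async function collectFrom(target: string): Promise<string> {
  let out = "";
  for await (const subagent of mockSubgraphs()) {
    if (subagent.name !== target) continue; // act only on the one we care about
    for await (const chunk of subagent.chunks) out += chunk;
  }
  return out;
}

collectFrom("weather_agent").then((text) => console.log(text)); // "Sunny in Boston"
```

Because handles for every nested graph arrive on the same projection, this early-continue filter is the whole routing mechanism; there is nothing to configure upstream.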

State and final output

Use stream.values for state snapshots and stream.output for the final agent state.
const stream = await agent.streamEvents(input, { version: "v3" });

for await (const snapshot of stream.values) {
  console.log(snapshot);
}

const finalState = await stream.output;

Multiple projections

Use concurrent consumers when you want multiple projections in JavaScript:
const stream = await agent.streamEvents(input, { version: "v3" });

await Promise.all([
  (async () => {
    for await (const message of stream.messages) {
      console.log(await message.text);
    }
  })(),
  (async () => {
    for await (const call of stream.toolCalls) {
      console.log(call.name, call.input);
    }
  })(),
]);
To access channels that aren’t exposed as typed projections, or to inspect the full event envelope, iterate raw protocol events:
for await (const event of stream) {
  console.log(event.method, event.params.namespace, event.params.data);
}
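One common use of the raw envelope is routing events by namespace before handing them to per-graph consumers. The sketch below assumes the fields shown above (method, params.namespace, params.data) and additionally assumes namespace is a path-like array; the event values are fabricated for illustration, so adjust the shapes to the actual envelope you observe.

```typescript
// Mock sketch of grouping raw protocol events by their namespace path.
interface RawEvent {
  method: string;
  params: { namespace: string[]; data: unknown };
}

// Bucket events by joined namespace; top-level events land under "(root)".
function routeByNamespace(events: RawEvent[]): Map<string, RawEvent[]> {
  const buckets = new Map<string, RawEvent[]>();
  for (const event of events) {
    const key = event.params.namespace.join("/") || "(root)";
    const bucket = buckets.get(key) ?? [];
    bucket.push(event);
    buckets.set(key, bucket);
  }
  return buckets;
}

const routed = routeByNamespace([
  { method: "message/delta", params: { namespace: [], data: "Hi" } },
  { method: "message/delta", params: { namespace: ["weather_agent"], data: "Sunny" } },
]);
console.log([...routed.keys()]); // keys: "(root)" and "weather_agent"
```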

Custom updates

Use custom stream transformers when your application needs a projection that is not built in, such as retrieval progress, artifacts, or domain-specific events.
const stream = await agent.streamEvents(input, {
  version: "v3",
  transformers: [toolActivityTransformer],
});

for await (const activity of stream.extensions.toolActivity) {
  console.log(activity);
}
See Build your own projection for the transformer contract.