https://js.langchain.com/v0.2/docs/how_to/custom_tools
How to create custom Tools
==========================

Prerequisites

This guide assumes familiarity with the following concepts:

* [LangChain tools](/v0.2/docs/concepts#tools)
* [Agents](/v0.2/docs/concepts/#agents)

When constructing your own agent, you will need to provide it with a list of Tools that it can use. While LangChain includes some prebuilt tools, it can often be more useful to use tools that use custom logic. This guide will walk you through some ways you can create custom tools.

The biggest difference between the tools below is their input shape: the first examples accept a structured object with named fields, while the final `DynamicTool` example accepts only a single string input. Some older agents only work with single-input tools, so it's important to understand the distinction.

`tool` function
---------------

Only available in `@langchain/core` version 0.2.7 and above.

The [`tool`](https://api.js.langchain.com/classes/langchain_core_tools.tool.html) wrapper function is a convenience method for turning a JavaScript function into a tool. It requires the function itself along with some additional arguments that define your tool. The most important are:

* The tool's `name`, which the LLM will use as context as well as to reference the tool
* An optional, but recommended, `description`, which the LLM will use as context to know when to use the tool
* A `schema`, which defines the shape of the tool's input

The `tool` function will return an instance of the [`StructuredTool`](https://api.js.langchain.com/classes/langchain_core_tools.StructuredTool.html) class, so it is compatible with all the existing tool calling infrastructure in the LangChain library.

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const adderSchema = z.object({
  a: z.number(),
  b: z.number(),
});

const adderTool = tool(
  async (input): Promise<string> => {
    const sum = input.a + input.b;
    return `The sum of ${input.a} and ${input.b} is ${sum}`;
  },
  {
    name: "adder",
    description: "Adds two numbers together",
    schema: adderSchema,
  }
);

await adderTool.invoke({ a: 1, b: 2 });
```

```
The sum of 1 and 2 is 3
```

`DynamicStructuredTool`
-----------------------

You can also use the [`DynamicStructuredTool`](https://api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) class to declare tools. Here's an example - note that tools must always return strings!

```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "multiply two numbers together",
  schema: z.object({
    a: z.number().describe("the first number to multiply"),
    b: z.number().describe("the second number to multiply"),
  }),
  func: async ({ a, b }: { a: number; b: number }) => {
    return (a * b).toString();
  },
});

await multiplyTool.invoke({ a: 8, b: 9 });
```

```
"72"
```
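Because tool outputs must be strings, a common pattern for tools that produce structured data is to serialize the result before returning it. Here is a minimal sketch of that pattern; the `get_weather` tool and its canned result are hypothetical, invented purely for illustration:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

// Hypothetical tool producing a structured result. Since tools must
// return strings, we JSON.stringify the object before returning it.
const weatherTool = tool(
  async ({ city }) => {
    const result = { city, tempC: 21, conditions: "sunny" }; // canned data
    return JSON.stringify(result);
  },
  {
    name: "get_weather",
    description: "Returns weather data for a city",
    schema: z.object({ city: z.string() }),
  }
);

await weatherTool.invoke({ city: "Paris" });
// '{"city":"Paris","tempC":21,"conditions":"sunny"}'
```

The calling agent then receives a well-formed string it can parse or pass along.

`DynamicTool`
-------------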
For older agents that only support tools accepting a single input, you can pass the relevant parameters to the [`DynamicTool`](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicTool.html) class. In this case, no schema is required:

```typescript
import { DynamicTool } from "@langchain/core/tools";

const searchTool = new DynamicTool({
  name: "search",
  description: "look things up online",
  func: async (_input: string) => {
    return "LangChain";
  },
});

await searchTool.invoke("foo");
```

```
"LangChain"
```

Next steps
----------

You've now seen a few ways to create custom tools in LangChain. Next, you might be interested in learning [how to use a chat model to call tools](/v0.2/docs/how_to/tool_calling/).
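As a preview of that guide, here is a hedged sketch of calling a custom tool from a chat model. It assumes the `adderTool` from the earlier example is in scope, the `@langchain/openai` package is installed, and an `OPENAI_API_KEY` environment variable is set; none of these appear on this page:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Assumes `adderTool` from the `tool` function example above is in scope.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const modelWithTools = model.bindTools([adderTool]);

// The model should respond with a structured call to the "adder" tool.
const response = await modelWithTools.invoke("What is 1 + 2?");
console.log(response.tool_calls);
// Expected shape: [{ name: "adder", args: { a: 1, b: 2 }, id: "..." }]
```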
https://js.langchain.com/v0.2/docs/how_to/document_loader_custom
How to write a custom document loader
=====================================

If you want to implement your own Document Loader, you have a few options.

### Subclassing `BaseDocumentLoader`

You can extend the `BaseDocumentLoader` class directly. The `BaseDocumentLoader` class provides a few convenience methods for loading documents from a variety of sources.

```typescript
abstract class BaseDocumentLoader implements DocumentLoader {
  abstract load(): Promise<Document[]>;
}
```

### Subclassing `TextLoader`

If you want to load documents from a text file, you can extend the `TextLoader` class. The `TextLoader` class takes care of reading the file, so all you have to do is implement a `parse` method.

```typescript
abstract class TextLoader extends BaseDocumentLoader {
  abstract parse(raw: string): Promise<string[]>;
}
```

### Subclassing `BufferLoader`

If you want to load documents from a binary file, you can extend the `BufferLoader` class. The `BufferLoader` class takes care of reading the file, so all you have to do is implement a `parse` method.

```typescript
abstract class BufferLoader extends BaseDocumentLoader {
  abstract parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]>;
}
```
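To make this concrete, here is a minimal sketch of a `TextLoader` subclass that emits one document per non-empty line. The import path, the `LineLoader` name, and the file path are assumptions for illustration; check your installed version for the exact module layout:

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

// Hypothetical loader: TextLoader reads the file for us, and parse()
// decides how to split the raw string into per-document page contents.
class LineLoader extends TextLoader {
  protected async parse(raw: string): Promise<string[]> {
    return raw.split("\n").filter((line) => line.trim().length > 0);
  }
}

const loader = new LineLoader("./example.txt"); // hypothetical path
const docs = await loader.load(); // one Document per non-empty line
```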
https://js.langchain.com/v0.2/docs/how_to/document_loader_csv
How to load CSV data
====================

> A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.

Load CSV data with a single row per document.

Setup
-----

Install the `d3-dsv` peer dependency with your package manager of choice:

```bash
npm install d3-dsv@2
# or
yarn add d3-dsv@2
# or
pnpm add d3-dsv@2
```

Usage, extracting all columns
-----------------------------

Example CSV file:

```csv
id,text
1,This is a sentence.
2,This is another sentence.
```

Example code:

```typescript
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");
const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 1\ntext: This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 2\ntext: This is another sentence.",
  },
]
*/
```

Usage, extracting a single column
---------------------------------

Example CSV file:

```csv
id,text
1,This is a sentence.
2,This is another sentence.
```

Example code:

```typescript
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);
const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
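The second constructor argument can also be an options object instead of a bare column name, which is useful for files that use a non-comma delimiter. A sketch, assuming the `column` and `separator` options exposed by recent `@langchain/community` releases (the semicolon-delimited file is hypothetical):

```typescript
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";

// Hypothetical semicolon-delimited file; `separator` tells the underlying
// d3-dsv parser how to split each record, and `column` extracts one field.
const loader = new CSVLoader("src/document_loaders/example_data/example.csv", {
  column: "text",
  separator: ";",
});
const docs = await loader.load();
```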
https://js.langchain.com/v0.2/docs/how_to/debugging
How to debug your LLM apps
==========================

Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created. Here are a few different tools and functionalities to aid in debugging.

Tracing
-------

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).

After you sign up at the link above, make sure to set your environment variables to start logging traces:

```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
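If you are running a Node script rather than exporting variables in your shell, you can also set these values programmatically before constructing any chains or agents. A minimal sketch; the API key placeholder is yours to fill in:

```typescript
// Set LangSmith tracing variables before any LangChain components run.
// Replace the placeholder with your actual LangSmith API key.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "your-api-key-here";
```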
Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [new TavilySearchResults(), new Calculator()];

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input:
    "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
});

console.log(result);
```

#### API Reference:

* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`

```
{
  input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?',
  output: 'So Christopher Nolan, the director of the 2023 film Oppenheimer, is 53 years old, which is approximately 19,345 days old (assuming 365 days per year).'
}
```

We don't get much output, but since we set up LangSmith we can easily see what happened under the hood: [https://smith.langchain.com/public/fd3a4aa1-dfea-4d17-9d44-a306e7b230d3/r](https://smith.langchain.com/public/fd3a4aa1-dfea-4d17-9d44-a306e7b230d3/r)

`verbose`
---------

If you're prototyping in Jupyter Notebooks or running Node scripts, it can be helpful to print out the intermediate steps of a chain run. There are a number of ways to enable printing at varying degrees of verbosity.

### `{ verbose: true }`

Setting the `verbose` parameter will cause any LangChain component with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.

```typescript
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [
  new TavilySearchResults({ verbose: true }),
  new Calculator({ verbose: true }),
];

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
  verbose: true,
});

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
});

const result = await agentExecutor.invoke({
  input:
    "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
});

console.log(result);
```

#### API Reference:

* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`

Console output:

```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?"}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age?
What is their age in days (assume 365 days per year)?", "steps": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] [0ms] Exiting Chain run with output: { "output": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] [1ms] Exiting Chain run with output: { "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] [1ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [], "agent_scratchpad": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [], "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] [0ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] ]}[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] [1.98s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[llm/end] [1:llm:ChatAnthropic] [1.98s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] [0ms] Exiting Chain run with output: { "output": [ { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, 
"toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] [1.98s] Exiting Chain run with output: { "output": [ { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[agent/action] [1:chain:AgentExecutor] Agent selected action: { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], 
"additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ]}[tool/start] [1:chain:AgentExecutor > 9:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:chain:AgentExecutor > 9:tool:TavilySearchResults] [2.20s] Exiting Tool run with output: "[{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.96643,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.92804,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his 
actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92404,"raw_content":null},{"title":"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \"I Try to ...","url":"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/","content":"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\nRELATED:\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\nCONNECT  FacebookTwitterInstagram\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the 
years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.","score":0.92002,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.91831,"raw_content":null}]"[tool/end] [1:tool:TavilySearchResults] [2.20s] Exiting Tool run with output: "[{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.96643,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.92804,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92404,"raw_content":null},{"title":"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \"I Try to ...","url":"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/","content":"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\nRELATED:\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\nCONNECT  FacebookTwitterInstagram\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.","score":0.92002,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.91831,"raw_content":null}]"[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" } ]}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap > 13:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap > 13:chain:RunnableLambda] [1ms] Exiting Chain run with output: { "output": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, 
"output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and 
the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. 
To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap] [2ms] Exiting Chain run with output: { "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign] [3ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[... truncated - same Tavily search results as above ...]" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[... truncated - same Tavily search results as above ...]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 14:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age?
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[... truncated - same Tavily search results as above ...]" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[... truncated - same Tavily search results as above ...]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 14:prompt:ChatPromptTemplate] [2ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age?
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 15:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] ]}[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 15:llm:ChatAnthropic] [3.50s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[llm/end] [1:llm:ChatAnthropic] [3.50s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 16:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 16:parser:ToolCallingAgentOutputParser] [1ms] Exiting Chain run with output: { "output": [ { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent] [3.51s] Exiting Chain run with output: { "output": [ { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[agent/action] [1:chain:AgentExecutor] Agent selected action: { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ]}[tool/start] [1:chain:AgentExecutor > 17:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/start] [1:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/end] [1:chain:AgentExecutor > 17:tool:Calculator] [3ms] Exiting Tool run with output: "18980"[tool/end] [1:tool:Calculator] [3ms] Exiting Tool run with output: "18980"[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap > 21:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap > 21:chain:RunnableLambda] [1ms] Exiting Chain run with output: { "output": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
[... remaining search results trimmed; identical to the tavily_search_results_json observation shown in full earlier in this trace ...]\"}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan.
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap] [2ms] Exiting Chain run with output: { "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
[... remaining search results trimmed; identical to the tavily_search_results_json observation shown in full earlier in this trace ...]\"}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan.
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign] [4ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
[... remaining search results trimmed; identical to the tavily_search_results_json observation shown in full earlier in this trace ...]\"}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan.
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
[... remaining search results trimmed; identical to the tavily_search_results_json observation shown in full earlier in this trace ...]\"}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan.
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 22:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
[... remaining search results trimmed; identical to the tavily_search_results_json observation shown in full earlier in this trace ...]\"}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan.
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 22:prompt:ChatPromptTemplate] [2ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 23:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] ]}[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE  HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT  FacebookTwitterInstagram\\nSUBSCRIBE  Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE  AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 23:llm:ChatAnthropic] [2.16s] Exiting LLM run with output: { "generations": [ [ { "text": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[llm/end] [1:llm:ChatAnthropic] [2.16s] Exiting LLM run with output: { "generations": [ [ { "text": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 24:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": 
"claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 24:parser:ToolCallingAgentOutputParser] [2ms] Exiting Chain run with output: { "returnValues": { "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)." }, "log": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent] [2.20s] Exiting Chain run with output: { "returnValues": { "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)." }, "log": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."}[chain/end] [1:chain:AgentExecutor] [9.92s] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."} ### `Tool({ ..., verbose: true })`[​](#tool--verbose-true- "Direct link to tool--verbose-true-") You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callbacks calls made specifically by that object). import { AgentExecutor, createToolCallingAgent } from "langchain/agents";import { ChatAnthropic } from "@langchain/anthropic";import { ChatPromptTemplate } from "@langchain/core/prompts";import { TavilySearchResults } from "@langchain/community/tools/tavily_search";import { Calculator } from "@langchain/community/tools/calculator";const tools = [ new TavilySearchResults({ verbose: true }), new Calculator({ verbose: true }),];// Prompt template must have "input" and "agent_scratchpad input variablesconst prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"],]);const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0, verbose: false,});const agent = await createToolCallingAgent({ llm, tools, prompt,});const agentExecutor = new AgentExecutor({ agent, tools, verbose: false,});const result = await agentExecutor.invoke({ input: "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?",});console.log(result); #### API Reference: * [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search` * [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator` Console output [tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [1.95s] Exiting Tool run with output: "[{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.97519,"raw_content":null},{"title":"Oppenheimer's Grandson Reacts to New Christopher Nolan Film | TIME","url":"https://time.com/6297743/oppenheimer-grandson-movie-interview/","content":"July 25, 2023 3:32 PM EDT. 
M oviegoers turned out in droves this weekend for writer-director Christopher Nolan's new film Oppenheimer, fueling an expectations-shattering domestic box office debut ...","score":0.95166,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.95127,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92204,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.92179,"raw_content":null}]"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Christopher Nolan age"[tool/end] [1:tool:TavilySearchResults] [1.15s] Exiting Tool run with output: "[{"title":"Christopher Nolan - IMDb","url":"https://www.imdb.com/name/nm0634240/","content":"Christopher Nolan is a British-American writer-director-producer of acclaimed films such as Inception, The Dark Knight, and Interstellar. He was born on July 30, 1970, in London, England.","score":0.96627,"raw_content":null},{"title":"Christopher Nolan: Biography, Movie Director, Filmmaker","url":"https://www.biography.com/movies-tv/christopher-nolan","content":"To meet the team, visit our About Us page: https://www.biography.com/about/a43602329/about-us\nFilmmakers\nMatt Damon\nGreta Gerwig\nMartin Scorsese\nBradley Cooper\nJodie Foster\nDodi Fayed\nDrew Barrymore\nRyan Gosling Was Reluctant to Play Barbie’s Ken\nThe Actors in the Most Wes Anderson Movies\n“The Idol” Raises Eyesbrows at Cannes\n41 Inspiring Famous Women in History\nBen Affleck and Matt Damon’s Lifelong Friendship\nA Part of Hearst Digital Media\nWe may earn commission from links on this page, but we only recommend products we back.\n The Dark Knight and Inception\nIn July 2008, Nolan’s Batman sequel, The Dark Knight, opened and set the record as having the highest weekend gross in the United States, at $158 million; Knight went on to become one of the top five highest-grossing films in America. 
In the fall of 2014, Nolan returned to the big screen with Interstellar, a nearly three-hour sci-fi epic that follows the journey of a team of astronauts seeking a new world for the inhabitants of a besieged Earth. The director's career then traveled into the stratosphere, when he agreed to helm the re-launch of the comic book hero Batman with the 2005 film Batman Begins, starring Christian Bale as the titular character. Built around three storylines offering different perspectives on a dramatic turn of events in 1940, Dunkirk earned mostly rave reviews for its portrayals of the tensions and terrors of war, picking up Golden Globe nominations for Best Motion Picture—Drama and Best Director, as well as an Academy Award nod for Best Director.\n","score":0.95669,"raw_content":null},{"title":"Christopher Nolan - Biography - IMDb","url":"https://www.imdb.com/name/nm0634240/bio/","content":"Learn about the life and career of acclaimed writer-director Christopher Nolan, who was born on July 30, 1970, in London, England. Find out his filmography, awards, family, trivia and more on IMDb.","score":0.91217,"raw_content":null},{"title":"Christopher Nolan - Wikipedia","url":"https://en.wikipedia.org/wiki/Christopher_Nolan","content":"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\".[68]\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \"Inception became a classic almost as soon as it was projected on silver screens\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \"a heterogeneity of conditions of products\" extending from low-budget films to lucrative blockbusters, \"a wide range of genres and settings\" and \"a diversity of styles that trumpet his versatility\".[193]\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \"experimental impulses\" with the demands of mainstream entertainment, describing his oeuvre as \"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \"kept a viable alternate model of big-budget filmmaking alive\", in an era where blockbuster filmmaking has become \"a largely computer-generated art form\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]","score":0.90288,"raw_content":null},{"title":"Christopher Nolan | Biography, Movies, Batman, Oppenheimer, & Facts ...","url":"https://www.britannica.com/biography/Christopher-Nolan-British-director","content":"The sci-fi drama depicted the efforts of a group of scientists to relocate humanity from an Earth vitiated by war and famine to another planet by way of a wormhole. The film turns on this character’s attempt to move past the boundaries of the technology in order to actually plant an idea in a dreamer’s head. His 2023 film Oppenheimer, depicts J. Robert Oppenheimer’s role in the development of the atomic bomb and the later security hearing over his alleged ties to communism. It used a destabilizing reverse-order story line to mirror the fractured mental state of its protagonist, a man with short-term amnesia who is trying to track down the person who murdered his wife. 
The Dark Knight (2008) leaned even more heavily on the moral and structural decay of its setting, fictional Gotham City, and it revived such classic Batman villains as the Joker (played by Heath Ledger).","score":0.90219,"raw_content":null}]"[tool/start] [1:tool:Calculator] Entering Tool run with input: "(2023 - 1970) * 365"[tool/end] [1:tool:Calculator] [3ms] Exiting Tool run with output: "19345"{ input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?', output: 'So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is 19,345 days old (assuming 365 days per year).'}MacBook-Pro-4:examples jacoblee$ yarn start examples/src/guides/debugging/simple_agent_verbose_some.ts(node:78812) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("file%3A///Users/jacoblee/langchain/langchainjs/node_modules/tsx/dist/loader.js", pathToFileURL("./"));'(Use `node --trace-warnings ...` to show where the warning was created)[WARN]: You have enabled LangSmith tracing without backgrounding callbacks.[WARN]: If you are not using a serverless environment where you must wait for tracing calls to finish,[WARN]: we suggest setting "process.env.LANGCHAIN_CALLBACKS_BACKGROUND=true" to avoid additional latency.[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [1.76s] Exiting Tool run with output: "[{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.97075,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.9684,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.94834,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. 
The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.92995,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.92512,"raw_content":null}]"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Christopher Nolan age"[tool/end] [1:tool:TavilySearchResults] [1.69s] Exiting Tool run with output: "[{"title":"Christopher Nolan: Biography, Movie Director, Filmmaker","url":"https://www.biography.com/movies-tv/christopher-nolan","content":"To meet the team, visit our About Us page: https://www.biography.com/about/a43602329/about-us\nFilmmakers\nMatt Damon\nGreta Gerwig\nMartin Scorsese\nBradley Cooper\nJodie Foster\nDodi Fayed\nDrew Barrymore\nRyan Gosling Was Reluctant to Play Barbie’s Ken\nThe Actors in the Most Wes Anderson Movies\n“The Idol” Raises Eyesbrows at Cannes\n41 Inspiring Famous Women in History\nBen Affleck and Matt Damon’s Lifelong Friendship\nA Part of Hearst Digital Media\nWe may earn commission from links on this page, but we only recommend products we back.\n The Dark Knight and Inception\nIn July 2008, Nolan’s Batman sequel, The Dark Knight, opened and set the record as having the highest weekend gross in the United States, at $158 million; Knight went on to become one of the top five highest-grossing films in America. In the fall of 2014, Nolan returned to the big screen with Interstellar, a nearly three-hour sci-fi epic that follows the journey of a team of astronauts seeking a new world for the inhabitants of a besieged Earth. 
The director's career then traveled into the stratosphere, when he agreed to helm the re-launch of the comic book hero Batman with the 2005 film Batman Begins, starring Christian Bale as the titular character. Built around three storylines offering different perspectives on a dramatic turn of events in 1940, Dunkirk earned mostly rave reviews for its portrayals of the tensions and terrors of war, picking up Golden Globe nominations for Best Motion Picture—Drama and Best Director, as well as an Academy Award nod for Best Director.\n","score":0.96408,"raw_content":null},{"title":"Christopher Nolan - Biography - IMDb","url":"https://www.imdb.com/name/nm0634240/bio/","content":"Learn about the life and career of acclaimed writer-director Christopher Nolan, who was born on July 30, 1970, in London, England. Find out his filmography, awards, family, trivia and more on IMDb.","score":0.95409,"raw_content":null},{"title":"Christopher Nolan - IMDb","url":"https://www.imdb.com/name/nm0634240/","content":"Christopher Nolan is a British-American writer-director-producer of acclaimed films such as Inception, The Dark Knight, and Interstellar. He was born on July 30, 1970, in London, England.","score":0.95401,"raw_content":null},{"title":"Christopher Nolan - Wikipedia","url":"https://en.wikipedia.org/wiki/Christopher_Nolan","content":"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\".[68]\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \"Inception became a classic almost as soon as it was projected on silver screens\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \"a heterogeneity of conditions of products\" extending from low-budget films to lucrative blockbusters, \"a wide range of genres and settings\" and \"a diversity of styles that trumpet his versatility\".[193]\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \"experimental impulses\" with the demands of mainstream entertainment, describing his oeuvre as \"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \"kept a viable alternate model of big-budget filmmaking alive\", in an era where blockbuster filmmaking has become \"a largely computer-generated art form\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]","score":0.93205,"raw_content":null},{"title":"Christopher Nolan | Biography, Movies, Batman, Oppenheimer, & Facts ...","url":"https://www.britannica.com/biography/Christopher-Nolan-British-director","content":"The sci-fi drama depicted the efforts of a group of scientists to relocate humanity from an Earth vitiated by war and famine to another planet by way of a wormhole. The film turns on this character’s attempt to move past the boundaries of the technology in order to actually plant an idea in a dreamer’s head. His 2023 film Oppenheimer, depicts J. Robert Oppenheimer’s role in the development of the atomic bomb and the later security hearing over his alleged ties to communism. It used a destabilizing reverse-order story line to mirror the fractured mental state of its protagonist, a man with short-term amnesia who is trying to track down the person who murdered his wife. 
The Dark Knight (2008) leaned even more heavily on the moral and structural decay of its setting, fictional Gotham City, and it revived such classic Batman villains as the Joker (played by Heath Ledger).","score":0.90859,"raw_content":null}]" [tool/start] [1:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/end] [1:tool:Calculator] [2ms] Exiting Tool run with output: "18980"{ input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?', output: '<result>\nTherefore, Christopher Nolan is 18,980 days old.\n</result>'}MacBook-Pro-4:examples jacoblee$ yarn start examples/src/guides/debugging/simple_agent_verbose_some.ts(node:78844) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("file%3A///Users/jacoblee/langchain/langchainjs/node_modules/tsx/dist/loader.js", pathToFileURL("./"));'(Use `node --trace-warnings ...` to show where the warning was created)[WARN]: You have enabled LangSmith tracing without backgrounding callbacks.[WARN]: If you are not using a serverless environment where you must wait for tracing calls to finish,[WARN]: we suggest setting "process.env.LANGCHAIN_CALLBACKS_BACKGROUND=true" to avoid additional latency.[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [2.63s] Exiting Tool run with output: "[{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.95617,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.95378,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.92271,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.91904,"raw_content":null},{"title":"Oppenheimer's Grandson Reacts to New Christopher Nolan Film | TIME","url":"https://time.com/6297743/oppenheimer-grandson-movie-interview/","content":"July 25, 2023 3:32 PM EDT. M oviegoers turned out in droves this weekend for writer-director Christopher Nolan's new film Oppenheimer, fueling an expectations-shattering domestic box office debut ...","score":0.91248,"raw_content":null}]"[tool/start] [1:tool:Calculator] Entering Tool run with input: "(2023 - 1970) * 365"[tool/end] [1:tool:Calculator] [2ms] Exiting Tool run with output: "19345" { input: 'Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?', output: "So as of 2023, Christopher Nolan's age is approximately 19,345 days.\n" + '\n' + 'In summary:\n' + '- The 2023 film Oppenheimer was directed by Christopher Nolan\n' + '- Nolan was born on July 30, 1970, making his current age around 53 years old\n' + '- Converted to days, Nolan is approximately 19,345 days old as of 2023'} Other callbacks[​](#other-callbacks "Direct link to Other callbacks") --------------------------------------------------------------------- `Callbacks` are the mechanism used to execute functionality within a component, outside of that component's primary logic. All of the approaches above use `Callbacks` under the hood to log the intermediate steps of components. LangChain ships with a number of `Callbacks` relevant for debugging, such as the [`ConsoleCallbackHandler`](https://v02.api.js.langchain.com/classes/langchain_core_tracers_console.ConsoleCallbackHandler.html). You can also implement your own callback handlers to execute custom functionality.
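For example, here is a minimal sketch of attaching the built-in `ConsoleCallbackHandler` to a single call. It assumes the same `agentExecutor` as in the examples above, and passes the handler per-invocation via the standard `callbacks` config option instead of setting `verbose` on individual components:

```typescript
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";

// Attach the handler to this one call only; other invocations stay quiet.
const tracedResult = await agentExecutor.invoke(
  {
    input: "Who directed the 2023 film Oppenheimer and what is their age?",
  },
  { callbacks: [new ConsoleCallbackHandler()] }
);
```

Custom handlers follow the same pattern: implement the lifecycle methods you care about (for example `handleToolStart` or `handleLLMEnd`) and pass an instance anywhere a `callbacks` option is accepted.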
https://js.langchain.com/v0.2/docs/how_to/document_loader_html
How to load HTML ================ The HyperText Markup Language or [HTML](https://en.wikipedia.org/wiki/HTML) is the standard markup language for documents designed to be displayed in a web browser. This covers how to load `HTML` documents into LangChain [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) objects that we can use downstream. Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/). Head over to the integrations page to find loaders for additional services, such as [FireCrawl](/v0.2/docs/integrations/document_loaders/web_loaders/firecrawl). Prerequisites This guide assumes familiarity with the following concepts: * [Documents](/v0.2/docs/concepts#document) * [Document Loaders](/v0.2/docs/concepts#document-loaders) Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community Setup[​](#setup "Direct link to Setup") --------------------------------------- Although Unstructured has an open source offering, you’re still required to provide an API key to access the service. To get everything up and running, follow these two steps: 1. Download & start the Docker container: docker run -p 8000:8000 -d --rm --name unstructured-api downloads.unstructured.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0 2. Get a free API key & API URL [here](https://unstructured.io/api-key), and set them in your environment (per the Unstructured website, it may take up to an hour to allocate your API key & URL): export UNSTRUCTURED_API_KEY="..." # Replace with your `Full URL` from the email export UNSTRUCTURED_API_URL="https://<ORG_NAME>-<SECRET>.api.unstructuredapp.io/general/v0/general" Loading HTML with Unstructured[​](#loading-html-with-unstructured "Direct link to Loading HTML with Unstructured") ------------------------------------------------------------------------------------------------------------------ import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured";const filePath = "../../../../libs/langchain-community/src/tools/fixtures/wordoftheday.html";const loader = new UnstructuredLoader(filePath, { apiKey: process.env.UNSTRUCTURED_API_KEY, apiUrl: process.env.UNSTRUCTURED_API_URL,});const data = await loader.load();console.log(data.slice(0, 5)); [ Document { pageContent: 'Word of the Day', metadata: { category_depth: 0, languages: [Array], filename: 'wordoftheday.html', filetype: 'text/html', category: 'Title' } }, Document { pageContent: ': April 10, 2023', metadata: { emphasized_text_contents: [Array], emphasized_text_tags: [Array], languages: [Array], parent_id: 'b845e60d85ff7d10abda4e5f9a37eec8', filename: 'wordoftheday.html', filetype: 'text/html', category: 'UncategorizedText' } }, Document { pageContent: 'foible', metadata: { category_depth: 1, languages: [Array], parent_id: 'b845e60d85ff7d10abda4e5f9a37eec8', filename: 'wordoftheday.html', filetype: 'text/html', category: 'Title' } }, Document { pageContent: 'play', metadata: { category_depth: 0, link_texts: [Array], link_urls: [Array], link_start_indexes: [Array], languages: [Array], filename: 'wordoftheday.html', filetype: 'text/html', category: 'Title' } }, Document { pageContent: 'noun',
metadata: { category_depth: 0, emphasized_text_contents: [Array], emphasized_text_tags: [Array], languages: [Array], filename: 'wordoftheday.html', filetype: 'text/html', category: 'Title' } }]
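Each parsed element carries an Unstructured `category` in its metadata, which is useful for downstream filtering. As a minimal post-processing sketch (reusing the `data` array from the example above; the grouping logic is illustrative and not part of the loader API), you could bucket the elements by category:

```typescript
// Group the loaded documents by their Unstructured element category.
const byCategory: Record<string, string[]> = {};
for (const doc of data) {
  const category = (doc.metadata.category as string) ?? "Unknown";
  (byCategory[category] ??= []).push(doc.pageContent);
}
console.log(Object.keys(byCategory)); // e.g. [ 'Title', 'UncategorizedText' ]
```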
https://js.langchain.com/v0.2/docs/how_to/document_loader_markdown
How to load Markdown ==================== [Markdown](https://en.wikipedia.org/wiki/Markdown) is a lightweight markup language for creating formatted text using a plain-text editor. Here we cover how to load `Markdown` documents into LangChain [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) objects that we can use downstream. We will cover: * Basic usage; * Parsing of Markdown into elements such as titles, list items, and text. LangChain implements an [UnstructuredLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredLoader.html) class. Prerequisites This guide assumes familiarity with the following concepts: * [Documents](/v0.2/docs/concepts#document) * [Document Loaders](/v0.2/docs/concepts#document-loaders) Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community Setup[​](#setup "Direct link to Setup") --------------------------------------- Although Unstructured has an open source offering, you’re still required to provide an API key to access the service. To get everything up and running, follow these two steps: 1. Download & start the Docker container: docker run -p 8000:8000 -d --rm --name unstructured-api downloads.unstructured.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0 2. Get a free API key & API URL [here](https://unstructured.io/api-key), and set them in your environment (per the Unstructured website, it may take up to an hour to allocate your API key & URL): export UNSTRUCTURED_API_KEY="..." # Replace with your `Full URL` from the email export UNSTRUCTURED_API_URL="https://<ORG_NAME>-<SECRET>.api.unstructuredapp.io/general/v0/general" Basic usage will ingest a Markdown file into a single document. Here we demonstrate on LangChain’s readme: import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured";const markdownPath = "../../../../README.md";const loader = new UnstructuredLoader(markdownPath, { apiKey: process.env.UNSTRUCTURED_API_KEY, apiUrl: process.env.UNSTRUCTURED_API_URL,});const data = await loader.load();console.log(data.slice(0, 5)); [ Document { pageContent: '🦜️🔗 LangChain.js', metadata: { languages: [Array], filename: 'README.md', filetype: 'text/markdown', category: 'Title' } }, Document { pageContent: '⚡ Building applications with LLMs through composability ⚡', metadata: { languages: [Array], filename: 'README.md', filetype: 'text/markdown', category: 'Title' } }, Document { pageContent: 'Looking for the Python version?
Check out LangChain.', metadata: { languages: [Array], parent_id: '7ea17bcb17b10f303cbb93b4cb95de93', filename: 'README.md', filetype: 'text/markdown', category: 'NarrativeText' } }, Document { pageContent: 'To help you ship LangChain apps to production faster, check out LangSmith.\n' + 'LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.\n' + 'Fill out this form to get on the waitlist or speak with our sales team.', metadata: { languages: [Array], parent_id: '7ea17bcb17b10f303cbb93b4cb95de93', filename: 'README.md', filetype: 'text/markdown', category: 'NarrativeText' } }, Document { pageContent: '⚡️ Quick Install', metadata: { languages: [Array], filename: 'README.md', filetype: 'text/markdown', category: 'Title' } }] Retain Elements[​](#retain-elements "Direct link to Retain Elements") --------------------------------------------------------------------- Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `chunkingStrategy: "by_title"`. const loader = new UnstructuredLoader(markdownPath, { chunkingStrategy: "by_title",});const data = await loader.load();console.log(`Number of documents: ${data.length}\n`);for (const doc of data.slice(0, 2)) { console.log(doc); console.log("\n");} Number of documents: 13Document { pageContent: '🦜️🔗 LangChain.js\n' + '\n' + '⚡ Building applications with LLMs through composability ⚡\n' + '\n' + 'Looking for the Python version? Check out LangChain.\n' + '\n' + 'To help you ship LangChain apps to production faster, check out LangSmith.\n' + 'LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.\n' + 'Fill out this form to get on the waitlist or speak with our sales team.', metadata: { filename: 'README.md', filetype: 'text/markdown', languages: [ 'eng' ], orig_elements: 'eJzNUtuO0zAQ/ZVRnquSS3PjBcGyPHURgr5tV2hijxNTJ45ip0u14t8Zp1y6CCF4ACFLlufuc+bcPkRkqKfBv9cyegpREWNZosxS0RRVzmeTCiFlnmRUFZmQ0QqinjxK9Mj5D5HShgbsKRS/vX7+8uZ63S9ZIeBP4xLw9NE/6XxvQsDg0M7YkuPIbURDG919Wp1zQu5+llVGfMta7GdFsVo8MniSErZcfdWhHtYfXOj2dcROe0MRN/oRUUmYlI1o+EpilcWZaJo6azaiqXNJdfYvEKUFJvBi1kbqoQUcR6MFem0HB/fad7Dd3jjw3WTntgNh+9E6bLTR/gTn4t9CmhHFTc1w80oKSUlTpFWaFKWsVR5nFf0dpOwdcfoDvi+p2Vp7CJQoOzF+gjcn39kBjjQ5ZucZXHUkDmBnf7H3Sy5e4zQxkUfahYY/4UQqVcZJpSpspKqSMslVllWJzDdMC6XVf8jJzkJHZoSTncF1evwOPSiHdWJhnKycRRAQKHSephWIR0y961lW6/3w7Q3aAcI8aKVJgqQjGTvSBKNBz+T3ywaaLwpdgSfnlwcOEno7aG+nsCcW6iP58ohX2phlru94xtKLf9iSB/5d2Ok9smC1Y3sCNxIezpq3M5toiAER9r/a6t1n6BJ/zg==', category: 'CompositeElement' }}Document { pageContent: '⚡️ Quick Install\n' + '\n' + 'You can use npm, yarn, or pnpm to install LangChain.js\n' + '\n' + 'npm install -S langchain or yarn add langchain or pnpm add langchain\n' + '\n' + 'typescript\n' + 'import { ChatOpenAI } from "langchain/chat_models/openai";\n' + '\n' + '🌐 Supported Environments\n' + '\n' + 'LangChain is written in TypeScript and can be used in:\n' + '\n' + 'Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x\n' + '\n' + 'Cloudflare Workers\n' + '\n' + 'Vercel / Next.js (Browser, Serverless and Edge functions)\n' + '\n' + 'Supabase Edge Functions\n' + '\n' + 'Browser\n' + '\n' + 'Deno', metadata: { filename: 'README.md', filetype: 'text/markdown', languages: [ 'eng' ], orig_elements: 
'eJzNlm1v2zYQx7/KQa9WwE1Iik/qXnWpB2RoM2wOOgx1URzJY6pVogyJTlME/e6j3KZIhgBzULjIG0Li3VH+/e/BfHNdUUc9pfyuDdUzqGzUjUUda1ZbL7R1UQetnNdMK9swVy2g6iljwIzF/7qKbUcJe5qD/1w+f/FqedSH2Ws25E+bnSHTVT5+n/tuNnSYLrZ4QVOxvKkoXVRvPy+++My+663QyNfbSCzCH9vWf4DTNGXsdsE3J563uaOqxP0XIDSxCdobSZIYd9w7JpQlLU3TaKf4YQDK7gbHB8h4m/jvYQseE2wngrTpF/AJx7SAYYRNeYU8QPtFAHhZvnzyHtt09M90W40zHEfM7SWdz0fep0otuUISLBqMjfNFjMYzI6SWFFWQj1CVGf2G++kK5uP9jD7rMgsEGMLd3Z1ad3YfpJHWsubSchGQeNRItUGPElF7wck2hy/9OWbyY7vJ69T2m2HMcA0l3/n3DaXnp/AZ4jj0sK6+AR6XNb/rh0DddDwUL2zX1c97NUpjVAEOxkh0tbOaN1qU1vG8VtYGe6CSuNvpwda+rJEzWG03MzAFWKbLdhzS/FOnvUhcdChlNC6iKBWuJVrCGMhxIaKMP6i4/1fP2+jfGhnaCT6Obc5UHhOcl4+vdhUAmMJuKjiaB0Mo1mcPKmdBvlFWK6ZMaXfNI2ojIvNORMsUHWiSf5cqZ6WOy2SDn5arVzv+k6Hvh/Tb6gk8BW6PrhbAm3kV7Ojqthgv2ymfZurvrQ4hvRLCSaUEj8YG77TzQTNriYv6B/0hPEiHk24oTdGVePhrGD/QOO0LyxRHKZivAxldS41akzXcxELPm/oxJv01jZ46OIazsrHL/i/j8HGicQErGi9p7GiadtWwDBcEcZt8boc0PdlXE9KlAoSkZh4PtUBZ5oRjTAbiSgd3oLn+XZqUYYgOy3Vgh/zrDfK+xA0rqY6GaQrGo5JM1azcgawzjeOa2CMk/przvXMayvXQEA8meEmCsxiDrkO54/iAVvtHSPiC0nA/3tt/AY+igwk=', category: 'CompositeElement' }} Note that in this case we recover just one distinct element type: const categories = new Set(data.map((document) => document.metadata.category));console.log(categories); Set(1) { 'CompositeElement' } * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load HTML ](/v0.2/docs/how_to/document_loader_html)[ Next How to load PDF files ](/v0.2/docs/how_to/document_loader_pdf) * [Installation](#installation) * [Setup](#setup) * [Retain Elements](#retain-elements)
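Once loaded, each document carries the element `category` Unstructured assigned to it, which you can use to slice the results. Below is a minimal sketch (not part of the original page) that groups the loaded documents by category so you can, for example, pull out just the titles as a rough outline; it assumes the same readme path and environment variables as the basic-usage example above:

```typescript
import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured";

const loader = new UnstructuredLoader("../../../../README.md", {
  apiKey: process.env.UNSTRUCTURED_API_KEY,
  apiUrl: process.env.UNSTRUCTURED_API_URL,
});
const data = await loader.load();

// Group element documents by the category Unstructured assigned them.
const byCategory = new Map<string, string[]>();
for (const doc of data) {
  const category = (doc.metadata.category as string) ?? "Unknown";
  byCategory.set(category, [
    ...(byCategory.get(category) ?? []),
    doc.pageContent,
  ]);
}

// e.g. list just the titles to get a rough outline of the file
console.log(byCategory.get("Title"));
```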
https://js.langchain.com/v0.2/docs/how_to/document_loader_pdf
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load PDF files On this page How to load PDF files ===================== > [Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. This covers how to load `PDF` documents into the Document format that we use downstream. By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install pdf-parse yarn add pdf-parse pnpm add pdf-parse Usage, one document per page[​](#usage-one-document-per-page "Direct link to Usage, one document per page") ----------------------------------------------------------------------------------------------------------- import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";// Or, in web environments:// import { WebPDFLoader } from "@langchain/community/document_loaders/web/pdf";// const blob = new Blob(); // e.g. from a file input// const loader = new WebPDFLoader(blob);const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");const docs = await loader.load(); Usage, one document per file[​](#usage-one-document-per-file "Direct link to Usage, one document per file") ----------------------------------------------------------------------------------------------------------- import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", { splitPages: false,});const docs = await loader.load(); Usage, custom `pdfjs` build[​](#usage-custom-pdfjs-build "Direct link to usage-custom-pdfjs-build") --------------------------------------------------------------------------------------------------- By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object. In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build. * npm * Yarn * pnpm npm install pdfjs-dist yarn add pdfjs-dist pnpm add pdfjs-dist import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", { // you may need to add `.then(m => m.default)` to the end of the import pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),}); Eliminating extra spaces[​](#eliminating-extra-spaces "Direct link to Eliminating extra spaces") ------------------------------------------------------------------------------------------------ PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. 
In that case, you can override the separator with an empty string like this: import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", { parsedItemSeparator: "",});const docs = await loader.load(); * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load Markdown ](/v0.2/docs/how_to/document_loader_markdown)[ Next How to load JSON data ](/v0.2/docs/how_to/document_loaders_json) * [Setup](#setup) * [Usage, one document per page](#usage-one-document-per-page) * [Usage, one document per file](#usage-one-document-per-file) * [Usage, custom `pdfjs` build](#usage-custom-pdfjs-build) * [Eliminating extra spaces](#eliminating-extra-spaces)
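If you keep the default of one document per page, the page number is recorded in each document's metadata. The sketch below is an illustrative addition, assuming the metadata shape currently emitted by `PDFLoader` (a `loc.pageNumber` field); double-check the field name against your installed version:

```typescript
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");
const docs = await loader.load();

// Print each page number alongside a short preview of its text.
for (const doc of docs) {
  console.log(doc.metadata.loc?.pageNumber, doc.pageContent.slice(0, 80));
}
```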
https://js.langchain.com/v0.2/docs/how_to/document_loaders_json
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to load JSON data On this page How to load JSON data ===================== > [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). > [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value. The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract. ### No JSON pointer example[​](#no-json-pointer-example "Direct link to No JSON pointer example") The simplest way to use it is to specify no JSON pointer. The loader will load all strings it finds in the JSON object. Example JSON file: { "texts": ["This is a sentence.", "This is another sentence."]} Example code: import { JSONLoader } from "langchain/document_loaders/fs/json";const loader = new JSONLoader("src/document_loaders/example_data/example.json");const docs = await loader.load();/*[ Document { "metadata": { "blobType": "application/json", "line": 1, "source": "blob", }, "pageContent": "This is a sentence.", }, Document { "metadata": { "blobType": "application/json", "line": 2, "source": "blob", }, "pageContent": "This is another sentence.", },]*/ ### Using JSON pointer example[​](#using-json-pointer-example "Direct link to Using JSON pointer example") You can handle more advanced scenarios by choosing which keys in your JSON object you want to extract strings from. In this example, we want to only extract information from "from" and "surname" entries. { "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } }} Example code: import { JSONLoader } from "langchain/document_loaders/fs/json";const loader = new JSONLoader( "src/document_loaders/example_data/example.json", ["/from", "/surname"]);const docs = await loader.load();/*[ Document { pageContent: 'LinkedIn Job', metadata: { source: './src/json/example.json', line: 1 } }, Document { pageContent: 'LinkedIn Job2', metadata: { source: './src/json/example.json', line: 2 } }, Document { pageContent: 'bob', metadata: { source: './src/json/example.json', line: 3 } }]*/ * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load PDF files ](/v0.2/docs/how_to/document_loader_pdf)[ Next How to combine results from multiple retrievers ](/v0.2/docs/how_to/ensemble_retriever) * [No JSON pointer example](#no-json-pointer-example) * [Using JSON pointer example](#using-json-pointer-example)
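For the JSON Lines format mentioned above, there is a companion `JSONLinesLoader` that applies a single JSON pointer to every line. A minimal sketch, assuming a hypothetical `example.jsonl` file where each line is an object with an `html` key:

```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

// example.jsonl (hypothetical), one JSON object per line:
// {"html": "This is a sentence."}
// {"html": "This is another sentence."}
const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);

const docs = await loader.load();
console.log(docs.map((doc) => doc.pageContent));
```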
https://js.langchain.com/v0.2/docs/how_to/ensemble_retriever
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to combine results from multiple retrievers On this page How to combine results from multiple retrievers =============================================== Prerequisites This guide assumes familiarity with the following concepts: * [Documents](/v0.2/docs/concepts#document) * [Retrievers](/v0.2/docs/concepts#retrievers) The [EnsembleRetriever](https://api.js.langchain.com/classes/langchain_retrievers_ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.js.langchain.com/classes/langchain_core_retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm. By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. One useful pattern is to combine a keyword matching retriever with a dense retriever (like embedding similarity), because their strengths are complementary. This can be considered a form of "hybrid search". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity. Below we demonstrate ensembling of a [simple custom retriever](/v0.2/docs/how_to/custom_retriever/) that simply returns documents that directly contain the input query with a retriever derived from a [demo, in-memory, vector store](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html). import { EnsembleRetriever } from "langchain/retrievers/ensemble";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";import { BaseRetriever, BaseRetrieverInput } from "@langchain/core/retrievers";import { Document } from "@langchain/core/documents";class SimpleCustomRetriever extends BaseRetriever { lc_namespace = []; documents: Document[]; constructor(fields: { documents: Document[] } & BaseRetrieverInput) { super(fields); this.documents = fields.documents; } async _getRelevantDocuments(query: string): Promise<Document[]> { return this.documents.filter((document) => document.pageContent.includes(query) ); }}const docs1 = [ new Document({ pageContent: "I like apples", metadata: { source: 1 } }), new Document({ pageContent: "I like oranges", metadata: { source: 1 } }), new Document({ pageContent: "apples and oranges are fruits", metadata: { source: 1 }, }),];const keywordRetriever = new SimpleCustomRetriever({ documents: docs1 });const docs2 = [ new Document({ pageContent: "You like apples", metadata: { source: 2 } }), new Document({ pageContent: "You like oranges", metadata: { source: 2 } }),];const vectorstore = await MemoryVectorStore.fromDocuments( docs2, new OpenAIEmbeddings());const vectorstoreRetriever = vectorstore.asRetriever();const retriever = new EnsembleRetriever({ retrievers: [vectorstoreRetriever, keywordRetriever], weights: [0.5, 0.5],});const query = "apples";const retrievedDocs = await retriever.invoke(query);console.log(retrievedDocs);/* [ Document { pageContent: 'You like apples', metadata: { source: 2 } }, Document { pageContent: 'I like apples', metadata: { source: 1 } }, Document { pageContent: 'You like oranges', metadata: { source: 2 } }, Document { pageContent: 'apples and oranges are fruits', metadata: { source: 1 } } ]*/ #### 
API Reference: * [EnsembleRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_ensemble.EnsembleRetriever.html) from `langchain/retrievers/ensemble` * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [BaseRetriever](https://v02.api.js.langchain.com/classes/langchain_core_retrievers.BaseRetriever.html) from `@langchain/core/retrievers` * [BaseRetrieverInput](https://v02.api.js.langchain.com/interfaces/langchain_core_retrievers.BaseRetrieverInput.html) from `@langchain/core/retrievers` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to combine results from multiple retrievers. Next, check out some other retrieval how-to guides, such as how to [improve results using multiple embeddings per document](/v0.2/docs/how_to/multi_vector) or how to [create your own custom retriever](/v0.2/docs/how_to/custom_retriever). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to load JSON data ](/v0.2/docs/how_to/document_loaders_json)[ Next How to select examples by length ](/v0.2/docs/how_to/example_selectors_length_based) * [Next steps](#next-steps)
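The `weights` array above gives both retrievers equal influence over the fused ranking. As a sketch of how you might tune this, reusing `vectorstoreRetriever` and `keywordRetriever` from the example above, weighting the keyword retriever more heavily pushes exact-match documents toward the top:

```typescript
import { EnsembleRetriever } from "langchain/retrievers/ensemble";

// Reuses vectorstoreRetriever and keywordRetriever from the example above.
// Weights need not be equal; here keyword matches count more heavily in
// the Reciprocal Rank Fusion scoring.
const keywordBiasedRetriever = new EnsembleRetriever({
  retrievers: [vectorstoreRetriever, keywordRetriever],
  weights: [0.3, 0.7],
});

const rerankedDocs = await keywordBiasedRetriever.invoke("apples");
console.log(rerankedDocs);
```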
https://js.langchain.com/v0.2/docs/how_to/example_selectors_similarity
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by similarity On this page How to select examples by similarity ==================================== Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Example selectors](/v0.2/docs/how_to/example_selectors) * [Vector stores](/v0.2/docs/concepts#vectorstores) This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. The fields of the examples object will be used as parameters to format the `examplePrompt` passed to the `FewShotPromptTemplate`. Each example should therefore contain all required fields for the example prompt you are using. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { OpenAIEmbeddings } from "@langchain/openai";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";// Create a prompt template that will be used to format the examples.const examplePrompt = PromptTemplate.fromTemplate( "Input: {input}\nOutput: {output}");// Create a SemanticSimilarityExampleSelector that will be used to select the examples.const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( [ { input: "happy", output: "sad" }, { input: "tall", output: "short" }, { input: "energetic", output: "lethargic" }, { input: "sunny", output: "gloomy" }, { input: "windy", output: "calm" }, ], new OpenAIEmbeddings(), HNSWLib, { k: 1 });// Create a FewShotPromptTemplate that will use the example selector.const dynamicPrompt = new FewShotPromptTemplate({ // We provide an ExampleSelector instead of examples. exampleSelector, examplePrompt, prefix: "Give the antonym of every input", suffix: "Input: {adjective}\nOutput:", inputVariables: ["adjective"],});// Input is about the weather, so should select eg. 
the sunny/gloomy exampleconsole.log(await dynamicPrompt.format({ adjective: "rainy" }));/* Give the antonym of every input Input: sunny Output: gloomy Input: rainy Output:*/// Input is a measurement, so should select the tall/short exampleconsole.log(await dynamicPrompt.format({ adjective: "large" }));/* Give the antonym of every input Input: tall Output: short Input: large Output:*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts` * [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors` By default, each field in the examples object is concatenated together, embedded, and stored in the vectorstore for later similarity search against user queries. If you only want to embed specific keys (e.g., you only want to search for examples that have a similar query to the one the user provides), you can pass an `inputKeys` array in the final `options` parameter. Loading from an existing vectorstore[​](#loading-from-an-existing-vectorstore "Direct link to Loading from an existing vectorstore") ------------------------------------------------------------------------------------------------------------------------------------ You can also use a pre-initialized vector store by passing an instance to the `SemanticSimilarityExampleSelector` constructor directly, as shown below. You can also add more examples via the `addExample` method: // Ephemeral, in-memory vector store for demo purposesimport { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";const embeddings = new OpenAIEmbeddings();const memoryVectorStore = new MemoryVectorStore(embeddings);const examples = [ { query: "healthy food", output: `galbi`, }, { query: "healthy food", output: `schnitzel`, }, { query: "foo", output: `bar`, },];const exampleSelector = new SemanticSimilarityExampleSelector({ vectorStore: memoryVectorStore, k: 2, // Only embed the "query" key of each example inputKeys: ["query"],});for (const example of examples) { // Format and add an example to the underlying vector store await exampleSelector.addExample(example);}// Create a prompt template that will be used to format the examples.const examplePrompt = PromptTemplate.fromTemplate(`<example> <user_input> {query} </user_input> <output> {output} </output></example>`);// Create a FewShotPromptTemplate that will use the example selector.const dynamicPrompt = new FewShotPromptTemplate({ // We provide an ExampleSelector instead of examples. 
exampleSelector, examplePrompt, prefix: `Answer the user's question, using the below examples as reference:`, suffix: "User question: {query}", inputVariables: ["query"],});const formattedValue = await dynamicPrompt.format({ query: "What is a healthy food?",});console.log(formattedValue);/*Answer the user's question, using the below examples as reference:<example> <user_input> healthy </user_input> <output> galbi </output></example><example> <user_input> healthy </user_input> <output> schnitzel </output></example>User question: What is a healthy food?*/const model = new ChatOpenAI({});const chain = dynamicPrompt.pipe(model);const result = await chain.invoke({ query: "What is a healthy food?" });console.log(result);/* AIMessage { content: 'A healthy food can be galbi or schnitzel.', additional_kwargs: { function_call: undefined } }*/ #### API Reference: * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts` * [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors` Metadata filtering[​](#metadata-filtering "Direct link to Metadata filtering") ------------------------------------------------------------------------------ When adding examples, each field is available as metadata in the produced document. If you would like further control over your search space, you can add extra fields to your examples and pass a `filter` parameter when initializing your selector: // Ephemeral, in-memory vector store for demo purposesimport { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";import { Document } from "@langchain/core/documents";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";const embeddings = new OpenAIEmbeddings();const memoryVectorStore = new MemoryVectorStore(embeddings);const examples = [ { query: "healthy food", output: `lettuce`, food_type: "vegetable", }, { query: "healthy food", output: `schnitzel`, food_type: "veal", }, { query: "foo", output: `bar`, food_type: "baz", },];const exampleSelector = new SemanticSimilarityExampleSelector({ vectorStore: memoryVectorStore, k: 2, // Only embed the "query" key of each example inputKeys: ["query"], // Filter type will depend on your specific vector store. // See the section of the docs for the specific vector store you are using. 
filter: (doc: Document) => doc.metadata.food_type === "vegetable",});for (const example of examples) { // Format and add an example to the underlying vector store await exampleSelector.addExample(example);}// Create a prompt template that will be used to format the examples.const examplePrompt = PromptTemplate.fromTemplate(`<example> <user_input> {query} </user_input> <output> {output} </output></example>`);// Create a FewShotPromptTemplate that will use the example selector.const dynamicPrompt = new FewShotPromptTemplate({ // We provide an ExampleSelector instead of examples. exampleSelector, examplePrompt, prefix: `Answer the user's question, using the below examples as reference:`, suffix: "User question:\n{query}", inputVariables: ["query"],});const model = new ChatOpenAI({});const chain = dynamicPrompt.pipe(model);const result = await chain.invoke({ query: "What is exactly one type of healthy food?",});console.log(result);/* AIMessage { content: 'One type of healthy food is lettuce.', additional_kwargs: { function_call: undefined } }*/ #### API Reference: * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors` Custom vectorstore retrievers[​](#custom-vectorstore-retrievers "Direct link to Custom vectorstore retrievers") --------------------------------------------------------------------------------------------------------------- You can also pass a vectorstore retriever instead of a vectorstore. 
One way this could be useful is if you want to use a retrieval method other than plain similarity search, such as maximal marginal relevance: /* eslint-disable @typescript-eslint/no-non-null-assertion */// Requires a vectorstore that supports maximal marginal relevance searchimport { Pinecone } from "@pinecone-database/pinecone";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { PineconeStore } from "@langchain/pinecone";import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";const pinecone = new Pinecone();const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);const pineconeVectorstore = await PineconeStore.fromExistingIndex( new OpenAIEmbeddings(), { pineconeIndex });const pineconeMmrRetriever = pineconeVectorstore.asRetriever({ searchType: "mmr", k: 2,});const examples = [ { query: "healthy food", output: `lettuce`, food_type: "vegetable", }, { query: "healthy food", output: `schnitzel`, food_type: "veal", }, { query: "foo", output: `bar`, food_type: "baz", },];const exampleSelector = new SemanticSimilarityExampleSelector({ vectorStoreRetriever: pineconeMmrRetriever, // Only embed the "query" key of each example inputKeys: ["query"],});for (const example of examples) { // Format and add an example to the underlying vector store await exampleSelector.addExample(example);}// Create a prompt template that will be used to format the examples.const examplePrompt = PromptTemplate.fromTemplate(`<example> <user_input> {query} </user_input> <output> {output} </output></example>`);// Create a FewShotPromptTemplate that will use the example selector.const dynamicPrompt = new FewShotPromptTemplate({ // We provide an ExampleSelector instead of examples. exampleSelector, examplePrompt, prefix: `Answer the user's question, using the below examples as reference:`, suffix: "User question:\n{query}", inputVariables: ["query"],});const model = new ChatOpenAI({});const chain = dynamicPrompt.pipe(model);const result = await chain.invoke({ query: "What is exactly one type of healthy food?",});console.log(result);/* AIMessage { content: 'lettuce.', additional_kwargs: { function_call: undefined } }*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts` * [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned a bit about using similarity in an example selector. Next, check out this guide on how to use a [length-based example selector](/v0.2/docs/how_to/example_selectors_length_based). * * * #### Was this page helpful? 
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to select examples by length ](/v0.2/docs/how_to/example_selectors_length_based)[ Next How to use reference examples ](/v0.2/docs/how_to/extraction_examples) * [Loading from an existing vectorstore](#loading-from-an-existing-vectorstore) * [Metadata filtering](#metadata-filtering) * [Custom vectorstore retrievers](#custom-vectorstore-retrievers) * [Next steps](#next-steps)
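To make the `inputKeys` option mentioned above concrete, here is a minimal sketch of passing it through `fromExamples` rather than the constructor; the example data and the choice of `MemoryVectorStore` are illustrative:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { query: "happy", output: "sad" },
    { query: "tall", output: "short" },
  ],
  new OpenAIEmbeddings(),
  MemoryVectorStore,
  {
    k: 1,
    // Only the "query" field is embedded and searched against;
    // "output" is still available when formatting the selected example.
    inputKeys: ["query"],
  }
);

console.log(await exampleSelector.selectExamples({ query: "joyful" }));
```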
https://js.langchain.com/v0.2/docs/how_to/example_selectors_length_based
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to select examples by length On this page How to select examples by length ================================ Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Example selectors](/v0.2/docs/how_to/example_selectors) This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";import { LengthBasedExampleSelector } from "@langchain/core/example_selectors";export async function run() { // Create a prompt template that will be used to format the examples. const examplePrompt = new PromptTemplate({ inputVariables: ["input", "output"], template: "Input: {input}\nOutput: {output}", }); // Create a LengthBasedExampleSelector that will be used to select the examples. const exampleSelector = await LengthBasedExampleSelector.fromExamples( [ { input: "happy", output: "sad" }, { input: "tall", output: "short" }, { input: "energetic", output: "lethargic" }, { input: "sunny", output: "gloomy" }, { input: "windy", output: "calm" }, ], { examplePrompt, maxLength: 25, } ); // Create a FewShotPromptTemplate that will use the example selector. const dynamicPrompt = new FewShotPromptTemplate({ // We provide an ExampleSelector instead of examples. exampleSelector, examplePrompt, prefix: "Give the antonym of every input", suffix: "Input: {adjective}\nOutput:", inputVariables: ["adjective"], }); // An example with small input, so it selects all examples. console.log(await dynamicPrompt.format({ adjective: "big" })); /* Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: */ // An example with long input, so it selects only one example. const longString = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"; console.log(await dynamicPrompt.format({ adjective: longString })); /* Give the antonym of every input Input: happy Output: sad Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else Output: */} #### API Reference: * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts` * [LengthBasedExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.LengthBasedExampleSelector.html) from `@langchain/core/example_selectors` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned a bit about using a length based example selector. Next, check out this guide on how to use a [similarity based example selector](/v0.2/docs/how_to/example_selectors_similarity). * * * #### Was this page helpful? 
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to combine results from multiple retrievers ](/v0.2/docs/how_to/ensemble_retriever)[ Next How to select examples by similarity ](/v0.2/docs/how_to/example_selectors_similarity) * [Next steps](#next-steps)
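When tuning `maxLength`, it can help to call the selector directly and inspect which examples survive, without formatting a full prompt. Here is a minimal sketch using the `selectExamples` method that example selectors expose; the tiny `maxLength` value is just for illustration:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { LengthBasedExampleSelector } from "@langchain/core/example_selectors";

const examplePrompt = PromptTemplate.fromTemplate(
  "Input: {input}\nOutput: {output}"
);

const exampleSelector = await LengthBasedExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
    { input: "energetic", output: "lethargic" },
  ],
  { examplePrompt, maxLength: 10 }
);

// Returns only as many examples as fit in the length budget,
// which makes it easy to sanity-check a maxLength setting.
console.log(await exampleSelector.selectExamples({ adjective: "big" }));
```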
https://js.langchain.com/v0.2/docs/how_to/extraction_examples
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use reference examples On this page How to use reference examples ============================= Prerequisites This guide assumes familiarity with the following: * [Extraction](/v0.2/docs/tutorials/extraction) The quality of extraction can often be improved by providing reference examples to the LLM. tip While this tutorial focuses on how to use examples with a tool calling model, this technique is generally applicable, and will also work with JSON mode or prompt-based techniques. We’ll use OpenAI’s GPT-4 this time for its robust support for `ToolMessages`: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai zod uuid yarn add @langchain/openai zod uuid pnpm add @langchain/openai zod uuid Let’s define a prompt: import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.Only extract relevant information from the text.If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;// Define a custom prompt to provide instructions and any additional context.// 1) You can add examples into the prompt template to improve extraction quality// 2) Introduce additional parameters to take context into account (e.g., include metadata// about the document from which the text was extracted.)const prompt = ChatPromptTemplate.fromMessages([ ["system", SYSTEM_PROMPT_TEMPLATE], // ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ new MessagesPlaceholder("examples"), // ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ ["human", "{text}"],]); Test out the template: import { HumanMessage } from "@langchain/core/messages";const promptValue = await prompt.invoke({ text: "this is some text", examples: [new HumanMessage("testing 1 2 3")],});promptValue.toChatMessages(); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "You are an expert extraction algorithm.\n" + "Only extract relevant information from the text.\n" + "If you do n"... 87 more characters, additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "You are an expert extraction algorithm.\n" + "Only extract relevant information from the text.\n" + "If you do n"... 87 more characters, name: undefined, additional_kwargs: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "testing 1 2 3", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "testing 1 2 3", name: undefined, additional_kwargs: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "this is some text", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "this is some text", name: undefined, additional_kwargs: {} }] Define the schema[​](#define-the-schema "Direct link to Define the schema") --------------------------------------------------------------------------- Let’s re-use the people schema from the quickstart. 
import { z } from "zod";const personSchema = z .object({ name: z.optional(z.string()).describe("The name of the person"), hair_color: z .optional(z.string()) .describe("The color of the person's hair, if known"), height_in_meters: z .optional(z.string()) .describe("Height measured in meters"), }) .describe("Information about a person.");const peopleSchema = z.object({ people: z.array(personSchema),}); Define reference examples[​](#define-reference-examples "Direct link to Define reference examples") --------------------------------------------------------------------------------------------------- Examples can be defined as a list of input-output pairs. Each example contains an example `input` text and an example `output` showing what should be extracted from the text. info The below example is a bit more advanced - the format of the example needs to match the API used (e.g., tool calling or JSON mode etc.). Here, the formatted examples will match the format expected for the OpenAI tool calling API since that’s what we’re using. To provide reference examples to the model, we will mock out a fake chat history containing successful usages of the given tool. Because the model can choose to call multiple tools at once (or the same tool multiple times), the example’s outputs are an array: import { AIMessage, type BaseMessage, HumanMessage, ToolMessage,} from "@langchain/core/messages";import { v4 as uuid } from "uuid";type OpenAIToolCall = { id: string; type: "function"; function: { name: string; arguments: string; };};type Example = { input: string; toolCallOutputs: Record<string, any>[];};/** * This function converts an example into a list of messages that can be fed into an LLM. * * This code serves as an adapter that transforms our example into a list of messages * that can be processed by a chat model. * * The list of messages for each example includes: * * 1) HumanMessage: This contains the content from which information should be extracted. * 2) AIMessage: This contains the information extracted by the model. * 3) ToolMessage: This provides confirmation to the model that the tool was requested correctly. * * The inclusion of ToolMessage is necessary because some chat models are highly optimized for agents, * making them less suitable for an extraction use case. */function toolExampleToMessages(example: Example): BaseMessage[] { const openAIToolCalls: OpenAIToolCall[] = example.toolCallOutputs.map( (output) => { return { id: uuid(), type: "function", function: { // The name of the function right now corresponds // to the passed name. name: "extract", arguments: JSON.stringify(output), }, }; } ); const messages: BaseMessage[] = [ new HumanMessage(example.input), new AIMessage({ content: "", additional_kwargs: { tool_calls: openAIToolCalls }, }), ]; const toolMessages = openAIToolCalls.map((toolCall, i) => { // Return the mocked successful result for a given tool call. return new ToolMessage({ content: "You have correctly called this tool.", tool_call_id: toolCall.id, }); }); return messages.concat(toolMessages);} Next let’s define our examples and then convert them into message format. const examples: Example[] = [ { input: "The ocean is vast and blue. It's more than 20,000 feet deep. 
There are many fish in it.", toolCallOutputs: [{}], }, { input: "Fiona traveled far from France to Spain.", toolCallOutputs: [ { name: "Fiona", }, ], },];const exampleMessages = [];for (const example of examples) { exampleMessages.push(...toolExampleToMessages(example));} 6 Let’s test out the prompt const promptValue = await prompt.invoke({ text: "this is some text", examples: exampleMessages,});promptValue.toChatMessages(); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "You are an expert extraction algorithm.\n" + "Only extract relevant information from the text.\n" + "If you do n"... 87 more characters, additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "You are an expert extraction algorithm.\n" + "Only extract relevant information from the text.\n" + "If you do n"... 87 more characters, name: undefined, additional_kwargs: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.", name: undefined, additional_kwargs: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: { tool_calls: [ { id: "8fa4d00d-801f-470e-8737-51ee9dc82259", type: "function", function: [Object] } ] } }, ToolMessage { lc_serializable: true, lc_kwargs: { content: "You have correctly called this tool.", tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "You have correctly called this tool.", name: undefined, additional_kwargs: {}, tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259" }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Fiona traveled far from France to Spain.", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Fiona traveled far from France to Spain.", name: undefined, additional_kwargs: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: { tool_calls: [ { id: "14ad6217-fcbd-47c7-9006-82f612e36c66", type: "function", function: [Object] } ] } }, ToolMessage { lc_serializable: true, lc_kwargs: { content: "You have correctly called this tool.", tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "You have correctly called this tool.", name: undefined, additional_kwargs: {}, tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66" }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "this is some text", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "this is some text", name: undefined, additional_kwargs: {} }] Create an extractor[​](#create-an-extractor "Direct link to Create an extractor") --------------------------------------------------------------------------------- Here, we’ll create an extractor using **gpt-4**. 
import { ChatOpenAI } from "@langchain/openai";// We will be using tool calling mode, which// requires a tool calling capable model.const llm = new ChatOpenAI({ // Consider benchmarking with the best model you can to get // a sense of the best possible quality. model: "gpt-4-0125-preview", temperature: 0,});// For function/tool calling, we can also supply a name for the schema// to give the LLM additional context about what it's extracting.const extractionRunnable = prompt.pipe( llm.withStructuredOutput(peopleSchema, { name: "people" })); Without examples 😿[​](#without-examples "Direct link to Without examples 😿") ------------------------------------------------------------------------------ Notice that even though we’re using `gpt-4`, it’s unreliable with a **very simple** test case! We run it 5 times below to emphasize this: const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, examples: [], }); console.log(result);} { people: [ { name: "earth", hair_color: "grey", height_in_meters: "1" } ]}{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "1 moon" } ] }{ people: [] } With examples 😻[​](#with-examples "Direct link to With examples 😻") --------------------------------------------------------------------- Reference examples help fix the failure! const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, // Example messages from above examples: exampleMessages, }); console.log(result);} { people: [] }{ people: [] }{ people: [] }{ people: [] }{ people: [] } await extractionRunnable.invoke({ text: "My name is Hair-ison. My hair is black. I am 3 meters tall.", examples: exampleMessages,}); { people: [ { name: "Hair-ison", hair_color: "black", height_in_meters: "3" } ]} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to improve extraction quality using few-shot examples. Next, check out some of the other guides in this section, such as [some tips on how to perform extraction on long text](/v0.2/docs/how_to/extraction_long_text). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to select examples by similarity ](/v0.2/docs/how_to/example_selectors_similarity)[ Next How to handle long text ](/v0.2/docs/how_to/extraction_long_text) * [Define the schema](#define-the-schema) * [Define reference examples](#define-reference-examples) * [Create an extractor](#create-an-extractor) * [Without examples 😿](#without-examples) * [With examples 😻](#with-examples) * [Next steps](#next-steps)
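If you want to quantify the improvement rather than eyeball five console logs, you can wrap the benchmark loop into a small tally. The sketch below is an illustrative addition that reuses `extractionRunnable` and `exampleMessages` from above:

```typescript
// Run the extractor several times and count how often it hallucinates
// a person for text that contains none.
const benchmarkText = "The solar system is large, but earth has only 1 moon.";

let spurious = 0;
const runs = 5;
for (let i = 0; i < runs; i++) {
  const result = await extractionRunnable.invoke({
    text: benchmarkText,
    examples: exampleMessages,
  });
  if (result.people.length > 0) {
    spurious += 1;
  }
}
console.log(`${spurious}/${runs} runs extracted a spurious person`);
```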
https://js.langchain.com/v0.2/docs/how_to/extraction_long_text
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle long text On this page How to handle long text ======================= Prerequisites This guide assumes familiarity with the following: * [Extraction](/v0.2/docs/tutorials/extraction) When working with files, like PDFs, you’re likely to encounter text that exceeds your language model’s context window. To process this text, consider these strategies: 1. **Change LLM** Choose a different LLM that supports a larger context window. 2. **Brute Force** Chunk the document, and extract content from each chunk. 3. **RAG** Chunk the document, index the chunks, and only extract content from a subset of chunks that look “relevant”. Keep in mind that these strategies have different trade offs and the best strategy likely depends on the application that you’re designing! Set up[​](#set-up "Direct link to Set up") ------------------------------------------ First, let’s install some required dependencies: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai zod cheerio yarn add @langchain/openai zod cheerio pnpm add @langchain/openai zod cheerio Next, we need some example data! Let’s download an article about [cars from Wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain `Document`. import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";// Only required in a Deno notebook environment to load the peer dep.import "cheerio";const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/Car");const docs = await loader.load();docs[0].pageContent.length; 97336 Define the schema[​](#define-the-schema "Direct link to Define the schema") --------------------------------------------------------------------------- Here, we’ll define schema to extract key developments from the text. import { z } from "zod";import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";const keyDevelopmentSchema = z .object({ year: z .number() .describe("The year when there was an important historic development."), description: z .string() .describe("What happened in this year? What was the development?"), evidence: z .string() .describe( "Repeat verbatim the sentence(s) from which the year and description information were extracted" ), }) .describe("Information about a development in the history of cars.");const extractionDataSchema = z .object({ key_developments: z.array(keyDevelopmentSchema), }) .describe( "Extracted information about key developments in the history of cars" );const SYSTEM_PROMPT_TEMPLATE = [ "You are an expert at identifying key historic development in text.", "Only extract important historic developments. 
Extract nothing if no important information can be found in the text.",].join("\n");// Define a custom prompt to provide instructions and any additional context.// 1) You can add examples into the prompt template to improve extraction quality// 2) Introduce additional parameters to take context into account (e.g., include metadata// about the document from which the text was extracted.)const prompt = ChatPromptTemplate.fromMessages([ ["system", SYSTEM_PROMPT_TEMPLATE], // Keep on reading through this use case to see how to use examples to improve performance // MessagesPlaceholder('examples'), ["human", "{text}"],]);// We will be using tool calling mode, which// requires a tool calling capable model.const llm = new ChatOpenAI({ model: "gpt-4-0125-preview", temperature: 0,});const extractionChain = prompt.pipe( llm.withStructuredOutput(extractionDataSchema)); Brute force approach[​](#brute-force-approach "Direct link to Brute force approach") ------------------------------------------------------------------------------------ Split the documents into chunks such that each chunk fits into the context window of the LLMs. import { TokenTextSplitter } from "langchain/text_splitter";const textSplitter = new TokenTextSplitter({ chunkSize: 2000, chunkOverlap: 20,});// Note that this method takes an array of docsconst splitDocs = await textSplitter.splitDocuments(docs); Use the `.batch` method present on all runnables to run the extraction in **parallel** across each chunk! tip You can often use `.batch()` to parallelize the extractions! If your model is exposed via an API, this will likely speed up your extraction flow. // Limit just to the first 3 chunks// so the code can be re-run quicklyconst firstFewTexts = splitDocs.slice(0, 3).map((doc) => doc.pageContent);const extractionChainParams = firstFewTexts.map((text) => { return { text };});const results = await extractionChain.batch(extractionChainParams, { maxConcurrency: 5,}); ### Merge results[​](#merge-results "Direct link to Merge results") After extracting data from across the chunks, we’ll want to merge the extractions together. const keyDevelopments = results.flatMap((result) => result.key_developments);keyDevelopments.slice(0, 20); [ { year: 0, description: "", evidence: "" }, { year: 1769, description: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle.", evidence: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769." }, { year: 1808, description: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 25 more characters, evidence: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 33 more characters }, { year: 1886, description: "German inventor Carl Benz patented his Benz Patent-Motorwagen, inventing the modern car—a practical,"... 40 more characters, evidence: "The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when German"... 56 more characters }, { year: 1908, description: "The 1908 Model T, an American car manufactured by the Ford Motor Company, became one of the first ca"... 28 more characters, evidence: "One of the first cars affordable by the masses was the 1908 Model T, an American car manufactured by"... 
24 more characters }] RAG based approach[​](#rag-based-approach "Direct link to RAG based approach") ------------------------------------------------------------------------------ Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the most relevant chunks. caution It can be difficult to identify which chunks are relevant. For example, in the `car` article we’re using here, most of the article contains key development information. So by using **RAG**, we’ll likely be throwing out a lot of relevant information. We suggest experimenting with your use case and determining whether this approach works or not. Here’s a simple example that relies on an in-memory demo `MemoryVectorStore`. import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";// Only load the first 10 docs for speed in this demo use-caseconst vectorstore = await MemoryVectorStore.fromDocuments( splitDocs.slice(0, 10), new OpenAIEmbeddings());// Only extract from top documentconst retriever = vectorstore.asRetriever({ k: 1 }); In this case the RAG extractor is only looking at the top document. import { RunnableSequence } from "@langchain/core/runnables";const ragExtractor = RunnableSequence.from([ { text: retriever.pipe((docs) => docs[0].pageContent), }, extractionChain,]); const results = await ragExtractor.invoke( "Key developments associated with cars"); results.key_developments; [ { year: 2020, description: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million km (1."... 33 more characters, evidence: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 millionkm (1.2"... 31 more characters }, { year: 2030, description: "All fossil fuel vehicles will be banned in Amsterdam from 2030.", evidence: "all fossil fuel vehicles will be banned in Amsterdam from 2030." }, { year: 2020, description: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year.", evidence: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year." }] Common issues[​](#common-issues "Direct link to Common issues") --------------------------------------------------------------- Different methods have their own pros and cons related to cost, speed, and accuracy. Watch out for these issues: * Chunking content means that the LLM can fail to extract information if the information is spread across multiple chunks. * Large chunk overlap may cause the same information to be extracted twice, so be prepared to de-duplicate (a short de-duplication sketch follows at the end of this page)! * LLMs can make up data. If you are looking for a single fact across a large text with a brute force approach, you may end up getting more made-up data. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned strategies for handling text that is longer than your model’s context window. Next, check out some of the other guides in this section, such as [some tips on how to improve extraction quality with examples](/v0.2/docs/how_to/extraction_examples). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
[ Previous How to use reference examples ](/v0.2/docs/how_to/extraction_examples)[ Next How to do extraction without using function calling ](/v0.2/docs/how_to/extraction_parse) * [Set up](#set-up) * [Define the schema](#define-the-schema) * [Brute force approach](#brute-force-approach) * [Merge results](#merge-results) * [RAG based approach](#rag-based-approach) * [Common issues](#common-issues) * [Next steps](#next-steps)
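As noted under common issues above, chunk overlap can cause the same development to be extracted twice. Here is a minimal de-duplication sketch, reusing `keyDevelopmentSchema` and the merged `keyDevelopments` array from this page; keying on year plus description is an illustrative heuristic, not the only option:

```typescript
import type { z } from "zod";

type KeyDevelopment = z.infer<typeof keyDevelopmentSchema>;

// Drop entries whose (year, description) pair has already been seen.
const deduplicate = (developments: KeyDevelopment[]): KeyDevelopment[] => {
  const seen = new Set<string>();
  return developments.filter((dev) => {
    const key = `${dev.year}:${dev.description}`;
    if (seen.has(key)) {
      return false;
    }
    seen.add(key);
    return true;
  });
};

console.log(deduplicate(keyDevelopments).length);
```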
https://js.langchain.com/v0.2/docs/how_to/extraction_parse
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to do extraction without using function calling On this page How to do extraction without using function calling =================================================== Prerequisites This guide assumes familiarity with the following: * [Extraction](/v0.2/docs/tutorials/extraction) LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format without using function calling. This approach relies on designing good prompts and then parsing the output of the LLMs to make them extract information well, though it lacks some of the guarantees provided by function calling or JSON mode. Here, we’ll use Claude which is great at following instructions! See [here for more about Anthropic models](/v0.2/docs/integrations/chat/anthropic). First, we’ll install the integration package: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic zod zod-to-json-schema yarn add @langchain/anthropic zod zod-to-json-schema pnpm add @langchain/anthropic zod zod-to-json-schema import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0,}); tip All the same considerations for extraction quality apply for parsing approach. This tutorial is meant to be simple, but generally should really include reference examples to squeeze out performance! Using StructuredOutputParser[​](#using-structuredoutputparser "Direct link to Using StructuredOutputParser") ------------------------------------------------------------------------------------------------------------ The following example uses the built-in [`StructuredOutputParser`](/v0.2/docs/how_to/output_parser_structured/) to parse the output of a chat model. We use the built-in prompt formatting instructions contained in the parser. import { z } from "zod";import { StructuredOutputParser } from "langchain/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";const personSchema = z .object({ name: z.optional(z.string()).describe("The name of the person"), hair_color: z .optional(z.string()) .describe("The color of the person's hair, if known"), height_in_meters: z .optional(z.string()) .describe("Height measured in meters"), }) .describe("Information about a person.");const parser = StructuredOutputParser.fromZodSchema(personSchema);const prompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user query. Wrap the output in `json` tags\n{format_instructions}", ], ["human", "{query}"],]);const partialedPrompt = await prompt.partial({ format_instructions: parser.getFormatInstructions(),}); Let’s take a look at what information is sent to the model const query = "Anna is 23 years old and she is 6 feet tall"; const promptValue = await partialedPrompt.invoke({ query });console.log(promptValue.toChatMessages()); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Answer the user query. Wrap the output in `json` tags\n" + "You must format your output as a JSON value th"... 1444 more characters, additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Answer the user query. Wrap the output in `json` tags\n" + "You must format your output as a JSON value th"... 
1444 more characters, name: undefined, additional_kwargs: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Anna is 23 years old and she is 6 feet tall", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Anna is 23 years old and she is 6 feet tall", name: undefined, additional_kwargs: {} }] const chain = partialedPrompt.pipe(model).pipe(parser);await chain.invoke({ query }); { name: "Anna", hair_color: "", height_in_meters: "1.83" } Custom Parsing[​](#custom-parsing "Direct link to Custom Parsing") ------------------------------------------------------------------ You can also create a custom prompt and parser with `LangChain` and `LCEL`. You can use a raw function to parse the output from the model. In the below example, we’ll pass the schema into the prompt as JSON schema. For convenience, we’ll declare our schema with Zod, then use the [`zod-to-json-schema`](https://github.com/StefanTerdell/zod-to-json-schema) utility to convert it to JSON schema. import { z } from "zod";import { zodToJsonSchema } from "zod-to-json-schema";const personSchema = z .object({ name: z.optional(z.string()).describe("The name of the person"), hair_color: z .optional(z.string()) .describe("The color of the person's hair, if known"), height_in_meters: z .optional(z.string()) .describe("Height measured in meters"), }) .describe("Information about a person.");const peopleSchema = z.object({ people: z.array(personSchema),});const SYSTEM_PROMPT_TEMPLATE = [ "Answer the user's query. You must return your answer as JSON that matches the given schema:", "```json\n{schema}\n```.", "Make sure to wrap the answer in ```json and ``` tags. Conform to the given schema exactly.",].join("\n");const prompt = ChatPromptTemplate.fromMessages([ ["system", SYSTEM_PROMPT_TEMPLATE], ["human", "{query}"],]);const extractJsonFromOutput = (message) => { const text = message.content; // Define the regular expression pattern to match JSON blocks const pattern = /```json\s*((.|\n)*?)\s*```/gs; // Find all non-overlapping matches of the pattern in the string const matches = pattern.exec(text); if (matches && matches[1]) { try { return JSON.parse(matches[1].trim()); } catch (error) { throw new Error(`Failed to parse: ${matches[1]}`); } } else { throw new Error(`No JSON found in: ${message}`); }}; const query = "Anna is 23 years old and she is 6 feet tall";const promptValue = await prompt.invoke({ schema: zodToJsonSchema(peopleSchema), query,});promptValue.toString(); "System: Answer the user's query. You must return your answer as JSON that matches the given schema:\n"... 170 more characters const chain = prompt.pipe(model).pipe(extractJsonFromOutput);await chain.invoke({ schema: zodToJsonSchema(peopleSchema), query,}); { name: "Anna", age: 23, height: { feet: 6, inches: 0 } } Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to perform extraction without using tool calling. Next, check out some of the other guides in this section, such as [some tips on how to improve extraction quality with examples](/v0.2/docs/how_to/extraction_examples). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
https://js.langchain.com/v0.2/docs/how_to/fallbacks
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Fallbacks On this page Fallbacks ========= Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) When working with language models, you may encounter issues from the underlying APIs, e.g. rate limits or downtime. As you move your LLM applications into production it becomes more and more important to have contingencies for errors. That's why we've introduced the concept of fallbacks. Crucially, fallbacks can be applied not only on the LLM level but on the whole runnable level. This is important because often different models require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic - you probably want to use, e.g., a different prompt template. Handling LLM API errors[​](#handling-llm-api-errors "Direct link to Handling LLM API errors") --------------------------------------------------------------------------------------------- This is maybe the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit a rate limit, or any number of things. **IMPORTANT:** By default, many of LangChain's LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying rather than failing. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/anthropic @langchain/openai yarn add @langchain/anthropic @langchain/openai pnpm add @langchain/anthropic @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { ChatAnthropic } from "@langchain/anthropic";// Use a fake model name that will always throw an errorconst fakeOpenAIModel = new ChatOpenAI({ model: "potato!", maxRetries: 0,});const anthropicModel = new ChatAnthropic({});const modelWithFallback = fakeOpenAIModel.withFallbacks({ fallbacks: [anthropicModel],});const result = await modelWithFallback.invoke("What is your name?");console.log(result);/* AIMessage { content: ' My name is Claude. I was created by Anthropic.', additional_kwargs: {} }*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Fallbacks for RunnableSequences[​](#fallbacks-for-runnablesequences "Direct link to Fallbacks for RunnableSequences") --------------------------------------------------------------------------------------------------------------------- We can also create fallbacks for sequences, where the fallbacks are themselves sequences. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). Because OpenAI is NOT a chat model, you likely want a different prompt.
import { ChatOpenAI, OpenAI } from "@langchain/openai";import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate, PromptTemplate } from "@langchain/core/prompts";const chatPrompt = ChatPromptTemplate.fromMessages<{ animal: string }>([ [ "system", "You're a nice assistant who always includes a compliment in your response", ], ["human", "Why did the {animal} cross the road?"],]);// Use a fake model name that will always throw an errorconst fakeOpenAIChatModel = new ChatOpenAI({ model: "potato!", maxRetries: 0,});const prompt = PromptTemplate.fromTemplate(`Instructions: You should always include a compliment in your response.Question: Why did the {animal} cross the road?Answer:`);const openAILLM = new OpenAI({});const outputParser = new StringOutputParser();const badChain = chatPrompt.pipe(fakeOpenAIChatModel).pipe(outputParser);const goodChain = prompt.pipe(openAILLM).pipe(outputParser);const chain = badChain.withFallbacks({ fallbacks: [goodChain],});const result = await chain.invoke({ animal: "dragon",});console.log(result);/* I don't know, but I'm sure it was an impressive sight. You must have a great imagination to come up with such an interesting question!*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` Handling long inputs[​](#handling-long-inputs "Direct link to Handling long inputs") ------------------------------------------------------------------------------------ One of the big limiting factors of LLMs is their context window. Sometimes you can count and track the length of prompts before sending them to an LLM, but in situations where that is hard/complicated you can fall back to a model with a longer context length. import { ChatOpenAI } from "@langchain/openai";// Use a model with a shorter context windowconst shorterLlm = new ChatOpenAI({ model: "gpt-3.5-turbo", maxRetries: 0,});const longerLlm = new ChatOpenAI({ model: "gpt-3.5-turbo-16k",});const modelWithFallback = shorterLlm.withFallbacks({ fallbacks: [longerLlm],});const input = `What is the next number: ${"one, two, ".repeat(3000)}`;try { await shorterLlm.invoke(input);} catch (e) { // Length error console.log(e);}const result = await modelWithFallback.invoke(input);console.log(result);/* AIMessage { content: 'The next number is one.', name: undefined, additional_kwargs: { function_call: undefined } }*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` Fallback to a better model[​](#fallback-to-a-better-model "Direct link to Fallback to a better model") ------------------------------------------------------------------------------------------------------ We often ask models to output in a specific format (like JSON). Models like GPT-3.5 can do this okay, but sometimes struggle.
This naturally points to fallbacks: we can try a faster and cheaper model first, and if parsing fails, fall back to GPT-4. import { z } from "zod";import { OpenAI, ChatOpenAI } from "@langchain/openai";import { PromptTemplate } from "@langchain/core/prompts";import { StructuredOutputParser } from "@langchain/core/output_parsers";const prompt = PromptTemplate.fromTemplate( `Return a JSON object containing the following value wrapped in an "input" key. Do not return anything else:\n{input}`);const badModel = new OpenAI({ maxRetries: 0, model: "gpt-3.5-turbo-instruct",});const normalModel = new ChatOpenAI({ model: "gpt-4",});const outputParser = StructuredOutputParser.fromZodSchema( z.object({ input: z.string(), }));const badChain = prompt.pipe(badModel).pipe(outputParser);const goodChain = prompt.pipe(normalModel).pipe(outputParser);try { const result = await badChain.invoke({ input: "testing0", });} catch (e) { console.log(e); /* OutputParserException [Error]: Failed to parse. Text: " { "name" : " Testing0 ", "lastname" : " testing ", "fullname" : " testing ", "role" : " test ", "telephone" : "+1-555-555-555 ", "email" : " [email protected] ", "role" : " test ", "text" : " testing0 is different than testing ", "role" : " test ", "immediate_affected_version" : " 0.0.1 ", "immediate_version" : " 1.0.0 ", "leading_version" : " 1.0.0 ", "version" : " 1.0.0 ", "finger prick" : " no ", "finger prick" : " s ", "text" : " testing0 is different than testing ", "role" : " test ", "immediate_affected_version" : " 0.0.1 ", "immediate_version" : " 1.0.0 ", "leading_version" : " 1.0.0 ", "version" : " 1.0.0 ", "finger prick" :". Error: SyntaxError: Unexpected end of JSON input*/}const chain = badChain.withFallbacks({ fallbacks: [goodChain],});const result = await chain.invoke({ input: "testing",});console.log(result);/* { input: 'testing' }*/ #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [StructuredOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) from `@langchain/core/output_parsers`
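As a final note on this page, `withFallbacks` accepts an array, so you can chain several fallbacks in priority order. A minimal sketch (the model choices here are placeholders, not a recommendation):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// Fallbacks are tried in order: the primary runnable first, then each
// fallback in turn. The model names below are placeholders.
const primary = new ChatOpenAI({ model: "gpt-4o", maxRetries: 0 });
const backup = new ChatOpenAI({ model: "gpt-3.5-turbo", maxRetries: 0 });
const lastResort = new ChatAnthropic({});

const resilientModel = primary.withFallbacks({
  fallbacks: [backup, lastResort],
});

// Invoked like any other runnable:
// const res = await resilientModel.invoke("What is your name?");
```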
https://js.langchain.com/v0.2/docs/how_to/trim_messages
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to trim messages On this page How to trim messages ==================== Prerequisites This guide assumes familiarity with the following concepts: * [Messages](/v0.2/docs/concepts/#messages) * [Chat models](/v0.2/docs/concepts/#chat-models) * [Chaining](/v0.2/docs/how_to/sequence/) * [Chat history](/v0.2/docs/concepts/#chat-history) The methods in this guide also require `@langchain/core>=0.2.8`. Please see here for a [guide on upgrading](/v0.2/docs/how_to/installation/#installing-integration-packages). All models have finite context windows, meaning there’s a limit to how many tokens they can take as input. If you have very long messages or a chain/agent that accumulates a long message history, you’ll need to manage the length of the messages you’re passing in to the model. The `trimMessages` util provides some basic strategies for trimming a list of messages to be of a certain token length. Getting the last `maxTokens` tokens[​](#getting-the-last-maxtokens-tokens "Direct link to getting-the-last-maxtokens-tokens") ----------------------------------------------------------------------------------------------------------------------------- To get the last `maxTokens` in the list of Messages we can set `strategy: "last"`. Notice that for our `tokenCounter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you’re trimming your messages to fit into the context window of that specific model: import { AIMessage, HumanMessage, SystemMessage, trimMessages,} from "@langchain/core/messages";import { ChatOpenAI } from "@langchain/openai";const messages = [ new SystemMessage("you're a good assistant, you always respond with a joke."), new HumanMessage("i wonder why it's called langchain"), new AIMessage( 'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!' ), new HumanMessage("and who is harrison chasing anyways"), new AIMessage( "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
), new HumanMessage("what do you call a speechless parrot"),];const trimmed = await trimMessages(messages, { maxTokens: 45, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }),});console.log( trimmed .map((x) => JSON.stringify( { role: x._getType(), content: x.content, }, null, 2 ) ) .join("\n\n")); { "role": "human", "content": "and who is harrison chasing anyways"}{ "role": "ai", "content": "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"}{ "role": "human", "content": "what do you call a speechless parrot"} If we want to always keep the initial system message we can specify `includeSystem: true`: await trimMessages(messages, { maxTokens: 45, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true,}); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant, you always respond with a joke.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant, you always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'what do you call a speechless parrot', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'what do you call a speechless parrot', name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }] If we want to allow splitting up the contents of a message we can specify `allowPartial: true`: await trimMessages(messages, { maxTokens: 50, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true, allowPartial: true,}); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant, you always respond with a joke.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant, you always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'what do you call a speechless parrot', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'what do you call a speechless parrot', name: undefined, additional_kwargs: {}, 
response_metadata: {}, id: undefined }] If we need to make sure that our first message (excluding the system message) is always of a specific type, we can specify `startOn`: await trimMessages(messages, { maxTokens: 60, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true, startOn: "human",}); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant, you always respond with a joke.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant, you always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'and who is harrison chasing anyways', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'and who is harrison chasing anyways', name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'what do you call a speechless parrot', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'what do you call a speechless parrot', name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }] Getting the first `maxTokens` tokens[​](#getting-the-first-maxtokens-tokens "Direct link to getting-the-first-maxtokens-tokens") -------------------------------------------------------------------------------------------------------------------------------- We can perform the flipped operation of getting the _first_ `maxTokens` by specifying `strategy: "first"`: await trimMessages(messages, { maxTokens: 45, strategy: "first", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }),}); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant, you always respond with a joke.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant, you always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "i wonder why it's called langchain", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "i wonder why it's called langchain", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }] Writing a custom token counter[​](#writing-a-custom-token-counter "Direct link to Writing a custom token counter") ------------------------------------------------------------------------------------------------------------------ We can write a custom token counter function that takes in a list of messages and returns an int. 
import { encodingForModel } from "@langchain/core/utils/tiktoken";import { BaseMessage, HumanMessage, AIMessage, ToolMessage, SystemMessage, MessageContent, MessageContentText,} from "@langchain/core/messages";async function strTokenCounter( messageContent: MessageContent): Promise<number> { if (typeof messageContent === "string") { return (await encodingForModel("gpt-4")).encode(messageContent).length; } else { if (messageContent.every((x) => x.type === "text" && x.text)) { return (await encodingForModel("gpt-4")).encode( (messageContent as MessageContentText[]) .map(({ text }) => text) .join("") ).length; } throw new Error( `Unsupported message content ${JSON.stringify(messageContent)}` ); }}async function tiktokenCounter(messages: BaseMessage[]): Promise<number> { let numTokens = 3; // every reply is primed with <|start|>assistant<|message|> const tokensPerMessage = 3; const tokensPerName = 1; for (const msg of messages) { let role: string; if (msg instanceof HumanMessage) { role = "user"; } else if (msg instanceof AIMessage) { role = "assistant"; } else if (msg instanceof ToolMessage) { role = "tool"; } else if (msg instanceof SystemMessage) { role = "system"; } else { throw new Error(`Unsupported message type ${msg.constructor.name}`); } numTokens += tokensPerMessage + (await strTokenCounter(role)) + (await strTokenCounter(msg.content)); if (msg.name) { numTokens += tokensPerName + (await strTokenCounter(msg.name)); } } return numTokens;}await trimMessages(messages, { maxTokens: 45, strategy: "last", tokenCounter: tiktokenCounter,}); [ AIMessage { lc_serializable: true, lc_kwargs: { content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'what do you call a speechless parrot', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'what do you call a speechless parrot', name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }] Chaining[​](#chaining "Direct link to Chaining") ------------------------------------------------ `trimMessages` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain: import { ChatOpenAI } from "@langchain/openai";import { trimMessages } from "@langchain/core/messages";const llm = new ChatOpenAI({ model: "gpt-4o" });// Notice we don't pass in messages. This creates// a RunnableLambda that takes messages as inputconst trimmer = trimMessages({ maxTokens: 45, strategy: "last", tokenCounter: llm, includeSystem: true,});const chain = trimmer.pipe(llm);await chain.invoke(messages); AIMessage { lc_serializable: true, lc_kwargs: { content: 'Thanks! I do try to keep things light. But for a more serious answer, "LangChain" is likely named to reflect its focus on language processing and the way it connects different components or models together—essentially forming a "chain" of linguistic operations.
The "Lang" part emphasizes its focus on language, while "Chain" highlights the interconnected workflows it aims to facilitate.', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Thanks! I do try to keep things light. But for a more serious answer, "LangChain" is likely named to reflect its focus on language processing and the way it connects different components or models together—essentially forming a "chain" of linguistic operations. The "Lang" part emphasizes its focus on language, while "Chain" highlights the interconnected workflows it aims to facilitate.', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 77, promptTokens: 59, totalTokens: 136 }, finish_reason: 'stop' }, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: { input_tokens: 59, output_tokens: 77, total_tokens: 136 }} Looking at [the LangSmith trace](https://smith.langchain.com/public/3793312c-a74b-4e77-92b4-f91b3d74ac5f/r) we can see that before the messages are passed to the model they are first trimmed. Looking at just the trimmer, we can see that it’s a Runnable object that can be invoked like all Runnables: await trimmer.invoke(messages); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant, you always respond with a joke.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant, you always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'Hmmm let me think.\n' + '\n' + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'what do you call a speechless parrot', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'what do you call a speechless parrot', name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }] Using with ChatMessageHistory[​](#using-with-chatmessagehistory "Direct link to Using with ChatMessageHistory") --------------------------------------------------------------------------------------------------------------- Trimming messages is especially useful when [working with chat histories](/v0.2/docs/how_to/message_history/), which can get arbitrarily long: import { InMemoryChatMessageHistory } from "@langchain/core/chat_history";import { RunnableWithMessageHistory } from "@langchain/core/runnables";import { HumanMessage, trimMessages } from "@langchain/core/messages";import { ChatOpenAI } from "@langchain/openai";const chatHistory = new InMemoryChatMessageHistory(messages.slice(0, -1));const dummyGetSessionHistory = async (sessionId: string) => { if (sessionId !== "1") { throw new Error("Session not found"); } return chatHistory;};const llm = new ChatOpenAI({ model: "gpt-4o" });const trimmer 
= trimMessages({ maxTokens: 45, strategy: "last", tokenCounter: llm, includeSystem: true,});const chain = trimmer.pipe(llm);const chainWithHistory = new RunnableWithMessageHistory({ runnable: chain, getMessageHistory: dummyGetSessionHistory,});await chainWithHistory.invoke( [new HumanMessage("what do you call a speechless parrot")], { configurable: { sessionId: "1" } }); AIMessage { lc_serializable: true, lc_kwargs: { content: 'A "polly-no-want-a-cracker"!', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'A "polly-no-want-a-cracker"!', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 11, promptTokens: 57, totalTokens: 68 }, finish_reason: 'stop' }, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: { input_tokens: 57, output_tokens: 11, total_tokens: 68 }} Looking at [the LangSmith trace](https://smith.langchain.com/public/cfc76880-5895-4852-b7d0-12916448bdb2/r) we can see that we retrieve all of our messages, but before they are passed to the model they are trimmed down to just the system message and the last human message. API reference[​](#api-reference "Direct link to API reference") --------------------------------------------------------------- For a complete description of all arguments head to the [API reference](https://api.js.langchain.com/functions/langchain_core_messages.trimMessages.html).
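As a closing example, because `tokenCounter` is just a function from a list of messages to a number, you can also use `trimMessages` to cap the number of messages rather than tokens. A minimal sketch (our own example, not from the guide above):

```typescript
import { trimMessages, type BaseMessage } from "@langchain/core/messages";

// Counting each message as one "token" turns `maxTokens` into a cap on the
// number of messages kept, plus the system message, which we always keep.
const messageCountTrimmer = trimMessages({
  maxTokens: 4,
  strategy: "last",
  tokenCounter: async (msgs: BaseMessage[]) => msgs.length,
  includeSystem: true,
});

// Usage: const recentMessages = await messageCountTrimmer.invoke(messages);
```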
https://js.langchain.com/v0.2/docs/how_to/few_shot
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * Few Shot Prompt Templates On this page Few Shot Prompt Templates ========================= Few shot prompting is a prompting technique which provides the Large Language Model (LLM) with a list of examples, and then asks the LLM to generate some text following the lead of the examples provided. An example of this is the following: Say you want your LLM to respond in a specific format. You can few shot prompt the LLM with a list of question-answer pairs so it knows what format to respond in. Respond to the user's question with the following format:Question: What is your name?Answer: My name is John.Question: What is your age?Answer: I am 25 years old.Question: What is your favorite color?Answer: Here we left the last `Answer:` undefined so the LLM can fill it in. The LLM will then generate the following: Answer: I don't have a favorite color; I don't have preferences. ### Use Case[​](#use-case "Direct link to Use Case") In the following example we're few shotting the LLM to rephrase questions into more general queries. We provide two examples pairing specific questions with rephrased, more general questions. The `FewShotChatMessagePromptTemplate` will use our examples, and when `.format` is called, we'll see those examples formatted into a string we can pass to the LLM. import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}AI: {output}`);const fewShotPrompt = new FewShotChatMessagePromptTemplate({ examplePrompt, examples, inputVariables: [], // no input variables}); const formattedPrompt = await fewShotPrompt.format({});console.log(formattedPrompt); [ HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'Human: Could the members of The Police perform lawful arrests?\n' + 'AI: what can the members of The Police do?', additional_kwargs: {} }, HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: "Human: Jan Sindel's was born in what country?\n" + "AI: what is Jan Sindel's personal history?", additional_kwargs: {} }] Then, if we use this with another question, the LLM will rephrase the question how we want. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai"; const model = new ChatOpenAI({});const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}AI: {output}`);const fewShotPrompt = new FewShotChatMessagePromptTemplate({ prefix: "Rephrase the users query to be more general, using the following examples", suffix: "Human: {input}", examplePrompt, examples, inputVariables: ["input"],});const formattedPrompt = await fewShotPrompt.format({ input: "What's France's main city?",});const response = await model.invoke(formattedPrompt);console.log(response); AIMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'What is the capital of France?', additional_kwargs: { function_call: undefined }} ### Few Shotting With Functions[​](#few-shotting-with-functions "Direct link to Few Shotting With Functions") You can also partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date. const getCurrentDate = () => { return new Date().toISOString();};const prompt = new FewShotChatMessagePromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective", "date"],});const partialPrompt = await prompt.partial({ date: getCurrentDate,});const formattedPrompt = await partialPrompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z ### Few Shot vs Chat Few Shot[​](#few-shot-vs-chat-few-shot "Direct link to Few Shot vs Chat Few Shot") The chat and non chat few shot prompt templates act in a similar way. The below example will demonstrate using chat and non chat, and the differences with their outputs. 
import { PromptTemplate, ChatPromptTemplate, FewShotPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const prompt = `Human: {input}AI: {output}`;const examplePromptTemplate = PromptTemplate.fromTemplate(prompt);const exampleChatPromptTemplate = ChatPromptTemplate.fromTemplate(prompt);const chatFewShotPrompt = new FewShotChatMessagePromptTemplate({ examplePrompt: exampleChatPromptTemplate, examples, inputVariables: [], // no input variables});const fewShotPrompt = new FewShotPromptTemplate({ examplePrompt: examplePromptTemplate, examples, inputVariables: [], // no input variables}); console.log("Chat Few Shot: ", await chatFewShotPrompt.formatMessages({}));/**Chat Few Shot: [ HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'Human: Could the members of The Police perform lawful arrests?\n' + 'AI: what can the members of The Police do?', additional_kwargs: {} }, HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: "Human: Jan Sindel's was born in what country?\n" + "AI: what is Jan Sindel's personal history?", additional_kwargs: {} }] */ console.log("Few Shot: ", await fewShotPrompt.formatPromptValue({}));/**Few Shot:Human: Could the members of The Police perform lawful arrests?AI: what can the members of The Police do?Human: Jan Sindel's was born in what country?AI: what is Jan Sindel's personal history? */ Here we can see the main distinctions between `FewShotChatMessagePromptTemplate` and `FewShotPromptTemplate`: input and output values. `FewShotChatMessagePromptTemplate` works by taking in a list of `ChatPromptTemplate` for examples, and its output is a list of instances of `BaseMessage`. On the other hand, `FewShotPromptTemplate` works by taking in a `PromptTemplate` for examples, and its output is a string. With Non Chat Models[​](#with-non-chat-models "Direct link to With Non Chat Models") ------------------------------------------------------------------------------------ LangChain also provides a class for few shot prompt formatting for non-chat models: `FewShotPromptTemplate`. The API is largely the same, but the output is formatted differently (chat messages vs strings).
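For instance, here is a minimal sketch of passing the formatted string to a non-chat model. It reuses the `fewShotPrompt` (a `FewShotPromptTemplate`) defined in the example above:

```typescript
import { OpenAI } from "@langchain/openai";

// `fewShotPrompt.format()` produces a plain string, which is exactly what a
// text-in/text-out (non-chat) model expects as input.
const llm = new OpenAI({});
const promptString = await fewShotPrompt.format({});
const completion = await llm.invoke(promptString);
console.log(completion);
```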
### Partials With Functions[​](#partials-with-functions "Direct link to Partials With Functions") import { PromptTemplate, FewShotPromptTemplate,} from "langchain/prompts"; const examplePrompt = PromptTemplate.fromTemplate("{foo}{bar}");const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted); boobaz\n ### With Functions and Example Selector[​](#with-functions-and-example-selector "Direct link to With Functions and Example Selector") import { PromptTemplate, FewShotPromptTemplate, LengthBasedExampleSelector,} from "langchain/prompts"; const examplePrompt = PromptTemplate.fromTemplate("An example about {x}");const exampleSelector = await LengthBasedExampleSelector.fromExamples( [{ x: "foo" }, { x: "bar" }], { examplePrompt, maxLength: 200 });const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", exampleSelector, examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted); boobazAn example about fooAn example about bar
https://js.langchain.com/v0.2/docs/how_to/vectorstore_retriever
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use a vector store to retrieve data On this page How to use a vector store to retrieve data =========================================== Prerequisites This guide assumes familiarity with the following concepts: * [Vector stores](/v0.2/docs/concepts/#vectorstores) * [Retrievers](/v0.2/docs/concepts/#retrievers) * [Text splitters](/v0.2/docs/concepts#text-splitters) * [Chaining runnables](/v0.2/docs/how_to/sequence/) Vector stores can be converted into retrievers using the [`.asRetriever()`](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStore.html#asRetriever) method, which allows you to more easily compose them in chains. Below, we show a retrieval-augmented generation (RAG) chain that performs question answering over documents using the following steps: 1. Initialize a vector store 2. Create a retriever from that vector store 3. Compose a question answering chain 4. Ask questions! Each of the steps has multiple sub-steps and potential configurations, but we'll go through one common flow. First, install the required dependency: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai You can download the `state_of_the_union.txt` file [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/state_of_the_union.txt). import * as fs from "node:fs";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import type { Document } from "@langchain/core/documents";const formatDocumentsAsString = (documents: Document[]) => { return documents.map((document) => document.pageContent).join("\n\n");};// Initialize the LLM to use to answer the question.const model = new ChatOpenAI({ model: "gpt-4o",});const text = fs.readFileSync("state_of_the_union.txt", "utf8");const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });const docs = await textSplitter.createDocuments([text]);// Create a vector store from the documents.const vectorStore = await MemoryVectorStore.fromDocuments( docs, new OpenAIEmbeddings());// Initialize a retriever wrapper around the vector storeconst vectorStoreRetriever = vectorStore.asRetriever();// Create a system & human prompt for the chat modelconst SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.If you don't know the answer, just say that you don't know, don't try to make up an answer.----------------{context}`;const prompt = ChatPromptTemplate.fromMessages([ ["system", SYSTEM_TEMPLATE], ["human", "{question}"],]);const chain = RunnableSequence.from([ { context: vectorStoreRetriever.pipe(formatDocumentsAsString), question: new RunnablePassthrough(), }, prompt, model, new StringOutputParser(),]);const answer = await chain.invoke( "What did the president say about Justice Breyer?");console.log({ answer });/* { answer: 'The president honored Justice Stephen Breyer by recognizing his dedication to serving the country as an Army veteran, Constitutional scholar, and retiring Justice of
the United States Supreme Court. He thanked Justice Breyer for his service.' }*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Let's walk through what's happening here. 1. We first load a long text and split it into smaller documents using a text splitter. We then load those documents (which also embeds the documents using the passed `OpenAIEmbeddings` instance) into `MemoryVectorStore`, our vector store, creating our index. 2. Though we can query the vector store directly, we convert the vector store into a retriever to return retrieved documents in the right format for the question answering chain. 3. We initialize a retrieval chain, which we'll call later in step 4. 4. We ask questions! Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to convert a vector store into a retriever. See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
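As a small extension of the flow above, `.asRetriever()` also accepts options such as `k` (the number of documents to return), as used in the long-text extraction guide earlier in this document. A minimal sketch, reusing the `vectorStore` from the example above:

```typescript
// Return the top 2 most similar chunks instead of the default number.
const topTwoRetriever = vectorStore.asRetriever({ k: 2 });

const retrievedDocs = await topTwoRetriever.invoke(
  "What did the president say about Justice Breyer?"
);
console.log(retrievedDocs.map((doc) => doc.pageContent.slice(0, 80)));
```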
https://js.langchain.com/v0.2/docs/concepts
* [](/v0.2/) * Conceptual guide On this page Conceptual guide ================ This section contains introductions to key parts of LangChain. Architecture[​](#architecture "Direct link to Architecture") ------------------------------------------------------------ LangChain as a framework consists of several pieces. The below diagram shows how they relate. ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack_dark.svg "LangChain Framework Overview") ### `@langchain/core`[​](#langchaincore "Direct link to langchaincore") This package contains base abstractions of different components and ways to compose them together. The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here. No third party integrations are defined here. The dependencies are kept purposefully very lightweight. ### `@langchain/community`[​](#langchaincommunity "Direct link to langchaincommunity") This package contains third party integrations that are maintained by the LangChain community. Key partner packages are separated out (see below). This contains all integrations for various components (LLMs, vectorstores, retrievers). All dependencies in this package are optional to keep the package as lightweight as possible. ### Partner packages[​](#partner-packages "Direct link to Partner packages") While the long tail of integrations is in `@langchain/community`, we split popular integrations into their own packages (e.g. `@langchain/openai`, `@langchain/anthropic`, etc.). This was done in order to improve support for these important integrations. ### `langchain`[​](#langchain "Direct link to langchain") The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. These are NOT third party integrations. All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations. ### [LangGraph.js](https://langchain-ai.github.io/langgraphjs/)[​](#langgraphjs "Direct link to langgraphjs") LangGraph.js is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows. ### [LangSmith](https://docs.smith.langchain.com)[​](#langsmith "Direct link to langsmith") A developer platform that lets you debug, test, evaluate, and monitor LLM applications. Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ If you want to work with high level abstractions, you should install the `langchain` package. * npm * Yarn * pnpm npm i langchain yarn add langchain pnpm add langchain If you want to work with specific integrations, you will need to install them separately. See [here](/v0.2/docs/integrations/platforms/) for a list of integrations and how to install them. For working with LangSmith, you will need to set up a LangSmith developer account [here](https://smith.langchain.com) and get an API key.
After that, you can enable it by setting environment variables: export LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=ls__... LangChain Expression Language[​](#langchain-expression-language "Direct link to LangChain Expression Language") --------------------------------------------------------------------------------------------------------------- LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL: **First-class streaming support** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. **Optimized parallel execution** Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it for the smallest possible latency. **Retries and fallbacks** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost. **Access intermediate results** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](https://www.langchain.com/langserve/) server. **Input and output schemas** Input and output schemas give every LCEL chain schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe. [**Seamless LangSmith tracing**](https://docs.smith.langchain.com) As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com) for maximum observability and debuggability. ### Interface[​](#interface "Direct link to Interface") To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below. This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. 
The **input type** and **output type** vary by component:

| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Object | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or object, depending on the tool | Depends on the tool |

Components[​](#components "Direct link to Components")
------------------------------------------------------

LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.

### Chat models[​](#chat-models "Direct link to Chat models")

Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are traditionally newer models (older models are generally `LLMs`, see below). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.

Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This gives them the same interface as LLMs (and makes them simpler to use). When a string is passed in as input, it will be converted to a `HumanMessage` under the hood before being passed to the underlying model.
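For example, the following sketch (assuming the `@langchain/anthropic` package is installed and configured) shows that passing a bare string and passing the equivalent `HumanMessage` array amount to the same call - the string form is converted to a `HumanMessage` under the hood:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

// A bare string input...
const res1 = await model.invoke("What is the capital of France?");

// ...is equivalent to passing a single HumanMessage.
const res2 = await model.invoke([
  new HumanMessage({ content: "What is the capital of France?" }),
]);
```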
LangChain does not host any Chat Models, rather we rely on third party integrations. We have some standardized parameters when constructing ChatModels:

* `model`: the name of the model

Chat Models also accept other parameters that are specific to that integration.

For specifics on how to use chat models, see the [relevant how-to guides here](/v0.2/docs/how_to/#chat-models).

### LLMs[​](#llms "Direct link to LLMs")

caution

Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/v0.2/docs/concepts/#chat-models), even for non-chat use cases. You are probably looking for [the section above instead](/v0.2/docs/concepts/#chat-models).

Language models that take a string as input and return a string. These are traditionally older models (newer models generally are [Chat Models](/v0.2/docs/concepts/#chat-models), see above).

Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as [Chat Models](/v0.2/docs/concepts/#chat-models). When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.

LangChain does not host any LLMs, rather we rely on third party integrations.

For specifics on how to use LLMs, see the [relevant how-to guides here](/v0.2/docs/how_to/#llms).

### Function/Tool Calling[​](#functiontool-calling "Direct link to Function/Tool Calling")

info

We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.

Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.

A tool call includes a name, an arguments object, and an optional identifier. The arguments object is structured `{ argumentName: argumentValue }`.

Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools).

There are two main use cases for function/tool calling:

* [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/)

### Message types[​](#message-types "Direct link to Message types")

Some language models take an array of messages as input and return a message. There are a few different types of messages. All messages have a `role`, `content`, and `response_metadata` property.

The `role` describes WHO is saying the message. LangChain has different message classes for different roles.

The `content` property describes the content of the message. This can be a few different things:

* A string (most models deal with this type of content)
* An array of objects (this is used for multi-modal input, where each object contains information about the input type and the input location)

#### HumanMessage[​](#humanmessage "Direct link to HumanMessage")

This represents a message from the user.

#### AIMessage[​](#aimessage "Direct link to AIMessage")

This represents a message from the model. In addition to the `content` property, these messages also have:

**`response_metadata`**

The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored.

**`tool_calls`**

These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output. They can be accessed from there with the `.tool_calls` property. This property returns an array of objects. Each object has the following keys:

* `name`: The name of the tool that should be called.
* `args`: The arguments to that tool.
* `id`: The id of that tool call.
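For instance, if a model decides to call a calculator-style tool, the `.tool_calls` property on the resulting `AIMessage` might look something like the following (the tool name, arguments, and id below are purely illustrative):

```typescript
// Hypothetical contents of aiMessage.tool_calls after a model
// chooses to invoke a "multiply" tool with two arguments:
[
  {
    name: "multiply",
    args: { a: 8, b: 9 },
    id: "call_abc123", // provider-generated identifier (illustrative)
  },
];
```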
#### SystemMessage[​](#systemmessage "Direct link to SystemMessage")

This represents a system message, which tells the model how to behave. Not every model provider supports this.

#### FunctionMessage[​](#functionmessage "Direct link to FunctionMessage")

This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.

#### ToolMessage[​](#toolmessage "Direct link to ToolMessage")

This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.

### Prompt templates[​](#prompt-templates "Direct link to Prompt templates")

Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.

Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in.

Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages.

There are a few different types of prompt templates:

#### String PromptTemplates[​](#string-prompttemplates "Direct link to String PromptTemplates")

These prompt templates are used to format a single string, and generally are used for simpler inputs. For example, a common way to construct and use a PromptTemplate is as follows:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

await promptTemplate.invoke({ topic: "cats" });
```

#### ChatPromptTemplates[​](#chatprompttemplates "Direct link to ChatPromptTemplates")

These prompt templates are used to format an array of messages. These "templates" consist of an array of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

await promptTemplate.invoke({ topic: "cats" });
```

In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.

#### MessagesPlaceholder[​](#messagesplaceholder "Direct link to MessagesPlaceholder")

This prompt template is responsible for adding an array of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in an array of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("msgs"),
]);

promptTemplate.invoke({ msgs: [new HumanMessage({ content: "hi!" })] });
```

This will produce an array of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting an array of messages be slotted into a particular spot.

An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:

```typescript
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{msgs}"], // <-- This is the changed part
]);
```

For specifics on how to use prompt templates, see the [relevant how-to guides here](/v0.2/docs/how_to/#prompt-templates).

### Example Selectors[​](#example-selectors "Direct link to Example Selectors")

One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts.

For specifics on how to use example selectors, see the [relevant how-to guides here](/v0.2/docs/how_to/#example-selectors).

### Output parsers[​](#output-parsers "Direct link to Output parsers")

note

The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. See documentation for that [here](/v0.2/docs/concepts/#function-tool-calling).

Output parsers are responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks. They are useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.

There are two main methods an output parser must implement:

* "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
* "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

* "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.

Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type.

LangChain has many different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information:

**Name**: The name of the output parser

**Supports Streaming**: Whether the output parser supports streaming.

**Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific arguments.

**Output Type**: The output type of the object returned by the parser.

**Description**: Our commentary on this output parser and when to use it.
| Name | Supports Streaming | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- |
| [JSON](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<T>` | Returns a JSON object as specified. You can specify a Zod schema and it will return JSON for that model. |
| [XML](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<XMLResult>` | Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.CommaSeparatedListOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Array[string]` | Returns an array of comma separated values. |
| [Structured](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) |  | `string` \| `BaseMessage` | `Promise<TypeOf<T>>` | Parses structured JSON from an LLM response. |
| [HTTP](https://v02.api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) | ✅ | `string` | `Promise<Uint8Array>` | Parses an LLM response to then send over HTTP(s). Useful when invoking the LLM on the server/edge, and then sending the content/stream back to the client. |
| [Bytes](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<Uint8Array>` | Parses an LLM response to then send over HTTP(s). Useful for streaming LLM responses from the server/edge to the client. |
| [Datetime](https://v02.api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) |  | `string` | `Promise<Date>` | Parses a response into a `Date`. |
| [Regex](https://v02.api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) |  | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |

For specifics on how to use output parsers, see the [relevant how-to guides here](/v0.2/docs/how_to/#output-parsers).

### Chat History[​](#chat-history "Direct link to Chat History")

Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly.

The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input.

### Document[​](#document "Direct link to Document")

A Document object in LangChain contains information about some data. It has two attributes:

* `pageContent: string`: The content of this document. Currently it is only a string.
* `metadata: Record<string, any>`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
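As a quick illustration, here is a minimal sketch of constructing a Document by hand (the metadata keys shown are arbitrary examples):

```typescript
import { Document } from "@langchain/core/documents";

const doc = new Document({
  pageContent: "LangChain is a framework for developing LLM applications.",
  // Arbitrary, illustrative metadata:
  metadata: { source: "intro.txt", id: 1 },
});
```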
### Document loaders[​](#document-loaders "Direct link to Document loaders")

These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.

Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. An example use case is as follows:

```typescript
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";

const loader = new CSVLoader("./example.csv"); // <-- Integration specific parameters here

const docs = await loader.load();
```

For specifics on how to use document loaders, see the [relevant how-to guides here](/v0.2/docs/how_to/#document-loaders).

### Text splitters[​](#text-splitters "Direct link to Text splitters")

Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.

When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. There are several ways to do that.

At a high level, text splitters work as follows:

1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

1. How the text is split
2. How the chunk size is measured

For specifics on how to use text splitters, see the [relevant how-to guides here](/v0.2/docs/how_to/#text-splitters).

### Embedding models[​](#embedding-models "Direct link to Embedding models")

Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text. By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning. These natural language search capabilities underpin many types of [context retrieval](/v0.2/docs/concepts/#retrieval), where we provide an LLM with the relevant data it needs to effectively respond to a query.

![](/v0.2/assets/images/embeddings-9c2616450a3b4f497a2d95a696b5f1a7.png)

The `Embeddings` class is a class designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc) and local models, and this class is designed to provide a standard interface for all of them.

The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
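As a minimal sketch of these two methods (assuming the `@langchain/openai` package and an `OPENAI_API_KEY`; any embedding integration follows the same interface):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// embedDocuments: multiple texts in, one vector per text out
const vectors = await embeddings.embedDocuments([
  "Hello world",
  "Goodbye world",
]);

// embedQuery: a single search query in, a single vector out
const queryVector = await embeddings.embedQuery("Hi there");
```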
For specifics on how to use embedding models, see the [relevant how-to guides here](/v0.2/docs/how_to/#embedding-models).

### Vectorstores[​](#vectorstores "Direct link to Vectorstores")

One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.

Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before similarity search, allowing you more control over returned documents.

Vectorstores can be converted to the retriever interface by doing:

```typescript
const vectorstore = new MyVectorStore();
const retriever = vectorstore.asRetriever();
```

For specifics on how to use vector stores, see the [relevant how-to guides here](/v0.2/docs/how_to/#vectorstores).

### Retrievers[​](#retrievers "Direct link to Retrievers")

A retriever is an interface that returns relevant documents given an unstructured query. Retrievers are more general than vector stores: a retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Exa search](/v0.2/docs/integrations/retrievers/exa/) (web search) and [Amazon Kendra](/v0.2/docs/integrations/retrievers/kendra-retriever/).

Retrievers accept a string query as input and return an array of `Document`s as output.
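For example, here is a minimal sketch (the `MemoryVectorStore`, embedded texts, and metadata are illustrative) of building a retriever from an in-memory vector store and invoking it with a string query:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorstore = await MemoryVectorStore.fromTexts(
  ["Mitochondria are the powerhouse of the cell", "Cats purr when content"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever();

// A string query in, an array of Documents out
const docs = await retriever.invoke("What is the powerhouse of the cell?");
```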
For specifics on how to use retrievers, see the [relevant how-to guides here](/v0.2/docs/how_to/#retrievers).

### Tools[​](#tools "Direct link to Tools")

Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:

1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user

It is useful to have all this information because this information can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.

The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input.

Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.

For specifics on how to use tools, see the [relevant how-to guides here](/v0.2/docs/how_to/#tools).

### Toolkits[​](#toolkits "Direct link to Toolkits")

Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.

All Toolkits expose a `getTools` method which returns an array of tools. You can therefore do:

```typescript
// Initialize a toolkit
const toolkit = new ExampleToolkit(...);

// Get list of tools
const tools = toolkit.getTools();
```

### Agents[​](#agents "Direct link to Agents")

By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it can determine whether more actions are needed, or whether it is okay to finish.

[LangGraph](https://github.com/langchain-ai/langgraphjs) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. Please check out that [documentation](https://langchain-ai.github.io/langgraphjs/) for a more in depth overview of agent concepts.

There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`. AgentExecutor was essentially a runtime for agents. It was a great place to get started, however, it was not flexible enough as you started to have more customized agents. In order to solve that we built LangGraph to be this flexible, highly-controllable runtime.

If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/v0.2/docs/how_to/agent_executor). It is recommended, however, that you start to transition to [LangGraph](https://github.com/langchain-ai/langgraphjs). In order to assist in this, we have put together a [transition guide on how to do so](/v0.2/docs/how_to/migrate_agent).

### Multimodal[​](#multimodal "Direct link to Multimodal")

Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.

In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
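As a rough sketch of the content blocks format (the model choice and image URL are illustrative; actual support varies by provider):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o" });

const message = new HumanMessage({
  content: [
    { type: "text", text: "What is in this image?" },
    // OpenAI-style image content block (URL is illustrative)
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
  ],
});

const response = await model.invoke([message]);
```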
For specifics on how to use multimodal models, see the [relevant how-to guides here](/v0.2/docs/how_to/#multimodal).

### Callbacks[​](#callbacks "Direct link to Callbacks")

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.

#### Callback Events[​](#callback-events "Direct link to Callback Events")

| Event | Event Trigger | Associated Method |
| --- | --- | --- |
| Chat model start | When a chat model starts | `handleChatModelStart` |
| LLM start | When an LLM starts | `handleLLMStart` |
| LLM new token | When an LLM or chat model emits a new token | `handleLLMNewToken` |
| LLM ends | When an LLM or chat model ends | `handleLLMEnd` |
| LLM errors | When an LLM or chat model errors | `handleLLMError` |
| Chain start | When a chain starts running | `handleChainStart` |
| Chain end | When a chain ends | `handleChainEnd` |
| Chain error | When a chain errors | `handleChainError` |
| Tool start | When a tool starts running | `handleToolStart` |
| Tool end | When a tool ends | `handleToolEnd` |
| Tool error | When a tool errors | `handleToolError` |
| Agent action | When an agent takes an action | `handleAgentAction` |
| Agent finish | When an agent ends | `handleAgentEnd` |
| Retriever start | When a retriever starts | `handleRetrieverStart` |
| Retriever end | When a retriever ends | `handleRetrieverEnd` |
| Retriever error | When a retriever errors | `handleRetrieverError` |
| Text | When arbitrary text is run | `handleText` |

#### Callback handlers[​](#callback-handlers "Direct link to Callback handlers")

`CallbackHandlers` are objects that implement the [`CallbackHandler`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html) interface, which has a method for each event that can be subscribed to. The `CallbackManager` will call the appropriate method on each handler when the event is triggered.

#### Passing callbacks[​](#passing-callbacks "Direct link to Passing callbacks")

The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:

* **Request callbacks**: Passed at the time of the request in addition to the input data. Available on all standard `Runnable` objects. These callbacks are INHERITED by all children of the object they are defined on. For example, `chain.invoke({foo: "bar"}, {callbacks: [handler]})`.
* **Constructor callbacks**: Defined in the constructor, e.g. `new ChatAnthropic({ callbacks: [handler], tags: ["a-tag"] })`. In this case, the callbacks will be used for all calls made on that object, and will be scoped to that object only. For example, if you initialize a chat model with constructor callbacks, then use it within a chain, the callbacks will only be invoked for calls to that model.

danger

Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object.

If you're creating a custom chain or runnable, you need to remember to propagate request time callbacks to any child objects.
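To make the request-callback pattern concrete, here is a minimal sketch of passing an inline handler object at invocation time (`chain` is assumed to be any `Runnable`, e.g. the prompt-plus-model chains shown earlier):

```typescript
// `chain` can be any Runnable, e.g. prompt.pipe(model)
const result = await chain.invoke(
  { topic: "cats" },
  {
    callbacks: [
      {
        handleChainStart() {
          console.log("Chain started");
        },
        handleChainEnd(outputs) {
          console.log("Chain finished with", outputs);
        },
      },
    ],
  }
);
```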
For specifics on how to use callbacks, see the [relevant how-to guides here](/v0.2/docs/how_to/#callbacks).

Techniques[​](#techniques "Direct link to Techniques")
------------------------------------------------------

### Streaming[​](#streaming "Direct link to Streaming")

Individual LLM calls often run for much longer than traditional resource requests. This compounds when you build more complex chains or agents that require multiple reasoning steps.

Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming.

Below, we'll discuss some concepts and considerations around streaming in LangChain.

#### `.stream()`[​](#stream "Direct link to stream")

Most modules in LangChain include the `.stream()` method as an ergonomic streaming interface. `.stream()` returns an iterator, which you can consume with a [`for await...of`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of) loop. Here's an example with a chat model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { concat } from "@langchain/core/utils/stream";
import type { AIMessageChunk } from "@langchain/core/messages";

const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

const stream = await model.stream("what color is the sky?");

let gathered: AIMessageChunk | undefined = undefined;

for await (const chunk of stream) {
  console.log(chunk);
  if (gathered === undefined) {
    gathered = chunk;
  } else {
    gathered = concat(gathered, chunk);
  }
}

console.log(gathered);
```

For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode without the need to provide additional config.

The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html). Because this method is part of [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language), you can handle formatting differences from different outputs using an [output parser](/v0.2/docs/concepts/#output-parsers) to transform each yielded chunk.

You can check out [this guide](/v0.2/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`.

#### `.streamEvents()`[​](#streamevents "Direct link to streamevents")

While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output - for example, returning sources alongside the final generation when building a chat over documents app.

There are ways to do this [using callbacks](/v0.2/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate values to the end with something like chained [`.assign()`](/v0.2/docs/how_to/passthrough/) calls, but LangChain also includes a `.streamEvents()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator which yields [various types of events](/v0.2/docs/how_to/streaming/#event-reference) that you can filter and process according to the needs of your project.
Here's one small example that prints just events containing streamed chat model output:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

const prompt = ChatPromptTemplate.fromTemplate("tell me a joke about {topic}");
const parser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(parser);

const eventStream = await chain.streamEvents(
  { topic: "parrot" },
  { version: "v2" }
);

for await (const event of eventStream) {
  const kind = event.event;
  if (kind === "on_chat_model_stream") {
    console.log(event);
  }
}
```

You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!

See [this guide](/v0.2/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.streamEvents()`.

#### Tokens[​](#tokens "Direct link to Tokens")

The unit that most model providers use to measure input and output is called a **token**. Tokens are the basic units that language models read and generate when processing or producing text. The exact definition of a token can vary depending on the specific way the model was trained - for instance, in English, a token could be a single word like "apple", or a part of a word like "app".

When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**. The model then streams back generated output tokens, which the tokenizer decodes into human-readable text. The below example shows how OpenAI models tokenize `LangChain is cool!`:

![](/v0.2/assets/images/tokenization-10f566ab6774724e63dd99646f69655c.png)

You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.

The reason language models use tokens rather than something more immediately intuitive like "characters" has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on the initial input and their previous generations. Training the model using tokens allows language models to handle linguistic units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model to learn and understand the structure of the language, including grammar and context. Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.

#### Callbacks[​](#callbacks-1 "Direct link to Callbacks")

The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/v0.2/docs/concepts/#callbacks) system. You can pass a callback handler that handles the [`handleLLMNewToken`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleLLMNewToken) event into LangChain components. When that component is invoked, any [LLM](/v0.2/docs/concepts/#llms) or [chat model](/v0.2/docs/concepts/#chat-models) contained in the component calls the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. an HTTP response.
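For example, here is a minimal sketch of collecting streamed tokens via a `handleLLMNewToken` handler (the model choice is illustrative):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

const tokens: string[] = [];

await model.invoke("what color is the sky?", {
  callbacks: [
    {
      handleLLMNewToken(token) {
        // Each generated token arrives here; pipe it anywhere you like.
        tokens.push(token);
      },
    },
  ],
});
```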
You can also handle the [`handleLLMEnd`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleLLMEnd) event to perform any necessary cleanup.

You can see [this how-to section](/v0.2/docs/how_to/#callbacks) for more specifics on using callbacks.

Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable, they can be unwieldy for developers. For example:

* You need to explicitly initialize and manage some aggregator or other stream to collect results.
* The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
* Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
* You would often ignore the result of the actual model call in favor of callback results.

### Structured output[​](#structured-output "Direct link to Structured output")

LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide range of inputs, but for some use-cases, it can be useful to constrain the LLM's output to a specific format or structure. This is referred to as **structured output**.

For example, if the output is to be stored in a relational database, it is much easier if the model generates output that adheres to a defined schema or format. [Extracting specific information](/v0.2/docs/tutorials/extraction/) from unstructured text is another case where this is particularly useful. Most commonly, the output format will be JSON, though other formats such as [XML](/v0.2/docs/how_to/output_parser_xml/) can be useful too. Below, we'll discuss a few ways to get structured output from models in LangChain.

#### `.withStructuredOutput()`[​](#withstructuredoutput "Direct link to withstructuredoutput")

For convenience, some LangChain chat models support a `.withStructuredOutput()` method. This method only requires a schema as input, and returns an object matching the requested schema. Generally, this method is only present on models that support one of the more advanced methods described below, and will use one of them under the hood. It takes care of importing a suitable output parser and formatting the schema in the right format for the model.
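As a rough sketch (the Zod schema and model choice are illustrative), usage looks like this:

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const jokeSchema = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-4o" });
const structuredModel = model.withStructuredOutput(jokeSchema);

// Returns an object matching the schema, e.g. { setup: "...", punchline: "..." }
const joke = await structuredModel.invoke("Tell me a joke about cats");
```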
For more information, check out this [how-to guide](/v0.2/docs/how_to/structured_output/#the-.withstructuredoutput-method).

#### Raw prompting[​](#raw-prompting "Direct link to Raw prompting")

The most intuitive way to get a model to structure output is to ask nicely. In addition to your query, you can give instructions describing what kind of output you'd like, then parse the output using an [output parser](/v0.2/docs/concepts/#output-parsers) to convert the raw model message or string output into something more easily manipulated.

The biggest benefit to raw prompting is its flexibility:

* Raw prompting does not require any special model features, only sufficient reasoning capability to understand the passed schema.
* You can prompt for any format you'd like, not just JSON. This can be useful if the model you are using is more heavily trained on a certain type of data, such as XML or YAML.

However, there are some drawbacks too:

* LLMs are non-deterministic, and prompting an LLM to consistently output data in exactly the correct format for smooth parsing can be surprisingly difficult and model-specific.
* Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult. Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions, and still others may prefer XML.

While we'll next go over some ways that you can take advantage of features offered by model providers to increase reliability, prompting techniques remain important for tuning your results no matter what method you choose.

#### JSON mode[​](#json-mode "Direct link to JSON mode")

Some models, such as [Mistral](/v0.2/docs/integrations/chat/mistral/), [OpenAI](/v0.2/docs/integrations/chat/openai/), [Together AI](/v0.2/docs/integrations/chat/togetherai/) and [Ollama](/v0.2/docs/integrations/chat/ollama/), support a feature called **JSON mode**, usually enabled via config. When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON. Often these models require some custom prompting, but it's usually much less burdensome - along the lines of `"you must always return JSON"` - and the [output is easier to parse](/v0.2/docs/how_to/output_parser_json/). It's also generally simpler and more commonly available than tool calling. Here's an example:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  modelKwargs: {
    response_format: { type: "json_object" },
  },
});

const TEMPLATE = `Answer the user's question to the best of your ability.
You must always output a JSON object with an "answer" key and a "followup_question" key.

{question}`;

const prompt = ChatPromptTemplate.fromTemplate(TEMPLATE);

const chain = prompt.pipe(model).pipe(new JsonOutputParser());

await chain.invoke({ question: "What is the powerhouse of the cell?" });
```

```
{
  answer: "The powerhouse of the cell is the mitochondrion.",
  followup_question: "Would you like to learn more about the functions of mitochondria?"
}
```

For a full list of model providers that support JSON mode, see [this table](/v0.2/docs/integrations/chat/).

#### Function/tool calling[​](#functiontool-calling-1 "Direct link to Function/tool calling")

info

We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.

Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.

For models that support it, tool calling can be very convenient. It removes the guesswork around how best to prompt schemas in favor of a built-in model feature. It can also more naturally support agentic flows, since you can just pass multiple tool schemas instead of fiddling with enums or unions.

Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools).

LangChain provides a standardized interface for tool calling that is consistent across different models. The standard interface consists of:

* `ChatModel.bindTools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/v0.2/docs/concepts/#tools).
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
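As a minimal sketch of this standardized interface (assuming `@langchain/openai` is installed; the adder tool, schema, and model choice are illustrative):

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";

const adderTool = tool(
  async (input: { a: number; b: number }) => `${input.a + input.b}`,
  {
    name: "adder",
    description: "Adds two numbers together",
    schema: z.object({ a: z.number(), b: z.number() }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o" });

// Make the tool's schema available to the model
const modelWithTools = model.bindTools([adderTool]);

const aiMessage = await modelWithTools.invoke("What is 3 plus 4?");

// The model's requested calls, e.g. [{ name: "adder", args: { a: 3, b: 4 }, id: "..." }]
console.log(aiMessage.tool_calls);
```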
The following how-to guides are good practical resources for using function/tool calling:

* [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/)

For a full list of model providers that support tool calling, [see this table](/v0.2/docs/integrations/chat/).

### Retrieval[​](#retrieval "Direct link to Retrieval")

LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise). Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information.

tip

* See our RAG from Scratch [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared). The code examples are in Python, but the series is useful as a general overview of RAG concepts for visual learners.
* For a high-level guide on retrieval, see this [tutorial on RAG](/v0.2/docs/tutorials/rag/).

RAG is only as good as the retrieved documents’ relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see below figure) and will share some high-level strategic guidance in the following sections. You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app.

![](/v0.2/assets/images/rag_landscape-627f1d0fd46b92bc2db0af8f99ec3724.png)

#### Query Translation[​](#query-translation "Direct link to Query Translation")

First, consider the user input(s) to your RAG system.
Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries. **Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system. For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query.

| Name | When to use | Description |
| --- | --- | --- |
| [Multi-query](/v0.2/docs/how_to/multiple_queries/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and return the unique documents for all queries. |
| [Decomposition (Python cookbook)](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from the first + retrieval to answer the second) or in parallel (consolidate each answer into a final answer). |
| [Step-back (Python cookbook)](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. |
| [HyDE (Python cookbook)](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents, with the premise that doc-doc similarity search can produce more relevant matches. |

tip

See our Python RAG from Scratch videos for a few different specific approaches:

* [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
* [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
* [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
* [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)

#### Routing[​](#routing "Direct link to Routing")

Second, consider the data sources available to your RAG system. You may want to query across more than one database, or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.**

| Name | When to use | Description |
| --- | --- | --- |
| [Logical routing](/v0.2/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
| [Semantic routing](/v0.2/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |

tip

See our Python RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared).

#### Query Construction[​](#query-construction "Direct link to Query Construction")

Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata.
**Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.** In particular, [text-to-SQL](/v0.2/docs/tutorials/sql_qa/), [text-to-Cypher](/v0.2/docs/tutorials/graph/), and [query analysis for metadata filters](/v0.2/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively.

| Name | When to Use | Description |
| --- | --- | --- |
| [Text to SQL](/v0.2/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
| [Text-to-Cypher](/v0.2/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |
| [Self Query](/v0.2/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |

tip

See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL, where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries.

#### Indexing[​](#indexing "Direct link to Indexing")

Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/v0.2/docs/concepts/#embedding-models).

Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.

Two approaches can address this tension: (1) the [Multi Vector](/v0.2/docs/how_to/multi_vector/) retriever uses an LLM to translate documents into any form (e.g., often into a summary) that is well-suited for indexing, but returns full documents to the LLM for generation. (2) The [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation.

| Name | Index Type | Uses an LLM | When to Use | Description |
| --- | --- | --- | --- | --- |
| [Vector store](/v0.2/docs/how_to/vectorstore_retriever/) | Vector store | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/v0.2/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Time-Weighted Vector store](/v0.2/docs/how_to/time_weighted_vectorstore/) | Vector store | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones. | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents). |

tip

* See our Python RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared)
* See our Python RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared)

Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.

There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/v0.2/docs/integrations/retrievers/supabase-hybrid/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://api.js.langchain.com/interfaces/langchain_core_vectorstores.VectorStoreInterface.html#maxMarginalRelevanceSearch), which attempts to diversify the results of a search to avoid returning similar and redundant documents.

| Name | When to use | Description |
| --- | --- | --- |
| [Hybrid search](/v0.2/docs/integrations/retrievers/supabase-hybrid/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
| [Maximal Marginal Relevance (MMR)](/v0.2/docs/integrations/vectorstores/mongodb_atlas/#maximal-marginal-relevance) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
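For vector stores that implement it, MMR can often be enabled when converting the store to a retriever - a rough sketch (store support and the exact option values are illustrative):

```typescript
// `vectorstore` is any vector store instance that implements MMR search
const retriever = vectorstore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 20, // fetch 20 candidates by similarity...
    lambda: 0.5, // ...then trade off relevance vs. diversity
  },
  k: 5, // return the 5 final documents
});
```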
#### Post-processing[​](#post-processing "Direct link to Post-processing")

Sixth, consider ways to filter or rank retrieved documents. This is very useful if you are [combining documents returned from multiple sources](/v0.2/docs/how_to/ensemble_retriever), since it can down-rank less relevant documents and / or [compress similar documents](/v0.2/docs/how_to/contextual_compression/#more-built-in-compressors-filters).

| Name | Index Type | Uses an LLM | When to Use | Description |
| --- | --- | --- | --- | --- |
| [Contextual Compression](/v0.2/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Ensemble](/v0.2/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Re-ranking](/v0.2/docs/integrations/document_compressors/cohere_rerank/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |

tip See our Python RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach for post-processing across multiple queries: Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).

#### Generation[​](#generation "Direct link to Generation")

**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct from these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow (a minimal relevance-grading sketch follows the table below).

We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraphjs/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):

* **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above.
* **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search if docs are not relevant to the query.
* **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers with hallucinations or that don't address the question.

![](/v0.2/assets/images/langgraph_rag-f039b41ef268bf46783706e58726fd9c.png)

| Name | When to use | Description |
| --- | --- | --- |
| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval. |
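To make the self-correction checks concrete, here is a minimal sketch of a document-relevance grader in the spirit of Corrective-RAG, built with structured output. The schema and prompt wording are assumptions for illustration, not the paper's exact prompts:

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Constrain the grader to a boolean verdict we can route on.
const gradeSchema = z.object({
  relevant: z.boolean().describe("Whether the document helps answer the question"),
});

const prompt = ChatPromptTemplate.fromTemplate(
  `You are grading retrieved context.
Question: {question}
Document: {document}
Is the document relevant to the question?`
);

const grader = prompt.pipe(
  new ChatOpenAI({ model: "gpt-4o", temperature: 0 }).withStructuredOutput(gradeSchema)
);

const grade = await grader.invoke({
  question: "Who won the 2011 Nobel Prize in Physics?",
  document: "The 2011 physics prize went to Perlmutter, Schmidt, and Riess.",
});
// In a LangGraph flow, a conditional edge would route on this verdict,
// e.g. falling back to web search when `grade.relevant` is false.
```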
tip See several videos and cookbooks showcasing RAG with LangGraph:

* [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
* [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
* [Cookbooks for RAG using LangGraph.js](https://github.com/langchain-ai/langgraphjs/tree/main/examples/rag)

### Text splitting[​](#text-splitting "Direct link to Text splitting")

LangChain offers many different types of `text splitters`. These are available in the main `langchain` package, but can also be used separately via the [`@langchain/textsplitters`](https://www.npmjs.com/package/@langchain/textsplitters) package. A short usage sketch of the recommended recursive splitter follows the table below.

Table columns:

* **Name**: Name of the text splitter
* **Classes**: Classes that implement this text splitter
* **Splits On**: How this text splitter splits text
* **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from.
* **Description**: Description of the splitter, including recommendation on when to use it.

| Name | Classes | Splits On | Adds Metadata | Description |
| --- | --- | --- | --- | --- |
| Recursive | [RecursiveCharacterTextSplitter](/v0.2/docs/how_to/recursive_text_splitter/) | A list of user defined characters | | Recursively splits text. This splitting is trying to keep related pieces of text next to each other. This is the `recommended way` to start splitting text. |
| Code | [many languages](/v0.2/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | [many classes](/v0.2/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | [CharacterTextSplitter](/v0.2/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
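Here is a brief usage sketch of the recommended recursive splitter; the chunk sizes and sample text are illustrative:

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 100, // maximum characters per chunk (illustrative)
  chunkOverlap: 20, // overlap preserves context across chunk boundaries
});

const docs = await splitter.createDocuments([
  "LangChain offers many splitters. The recursive splitter tries double newlines first, then newlines, then spaces, keeping related text together.",
]);
console.log(docs.map((doc) => doc.pageContent));
```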
### Evaluation[​](#evaluation "Direct link to Evaluation")

Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications. It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications.

![](/v0.2/assets/images/langsmith_evaluate-7d48643f3e4c50d77234e13feb95144d.png)

[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:

* It makes it easier to create and curate datasets via its tracing and annotation features
* It provides an evaluation framework that helps you define metrics and run your app against your dataset
* It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD

To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).

### Generative UI[​](#generative-ui "Direct link to Generative UI")

LangChain.js provides a few templates and examples showing off generative UI, and other ways of streaming data from the server to the client, specifically in React/Next.js. You can find the template for generative UI in the official [LangChain.js Next.js template](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/generative_ui/README.md). For streaming agentic responses and intermediate steps, you can find the [template and documentation here](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/README.md). And finally, streaming tool calls and structured output can be found [here](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/README.md).

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous How to create and query vector stores ](/v0.2/docs/how_to/vectorstores)[ Next Overview ](/v0.2/docs/versions/overview)

* [Architecture](#architecture) * [`@langchain/core`](#langchaincore) * [`@langchain/community`](#langchaincommunity) * [Partner packages](#partner-packages) * [`langchain`](#langchain) * [LangGraph.js](#langgraphjs) * [LangSmith](#langsmith) * [Installation](#installation) * [LangChain Expression Language](#langchain-expression-language) * [Interface](#interface) * [Components](#components) * [Chat models](#chat-models) * [LLMs](#llms) * [Function/Tool Calling](#functiontool-calling) * [Message types](#message-types) * [Prompt templates](#prompt-templates) * [Example Selectors](#example-selectors) * [Output parsers](#output-parsers) * [Chat History](#chat-history) * [Document](#document) * [Document loaders](#document-loaders) * [Text splitters](#text-splitters) * [Embedding models](#embedding-models) * [Vectorstores](#vectorstores) * [Retrievers](#retrievers) * [Tools](#tools) * [Toolkits](#toolkits) * [Agents](#agents) * [Multimodal](#multimodal) * [Callbacks](#callbacks) * [Techniques](#techniques) * [Streaming](#streaming) * [Structured output](#structured-output) * [Retrieval](#retrieval) * [Text splitting](#text-splitting) * [Evaluation](#evaluation) * [Generative UI](#generative-ui)
null
https://js.langchain.com/v0.2/docs/how_to/vectorstores
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create and query vector stores On this page

How to create and query vector stores
=====================================

info Head to [Integrations](/v0.2/docs/integrations/vectorstores) for documentation on built-in integrations with vectorstore providers.

Prerequisites This guide assumes familiarity with the following concepts:

* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
* [Document loaders](/v0.2/docs/concepts#document-loaders)

One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.

This walkthrough uses a basic, unoptimized implementation called [`MemoryVectorStore`](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. LangChain contains many built-in integrations - see [this section](/v0.2/docs/how_to/vectorstores/#which-one-to-pick) for more, or the [full list of integrations](/v0.2/docs/integrations/vectorstores/).

Creating a new index[​](#creating-a-new-index "Direct link to Creating a new index")
------------------------------------------------------------------------------------

Most of the time, you'll need to load and prepare the data you want to search over. Here's an example that loads a recent speech from a file:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```

#### API Reference:

* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`

Most of the time, you'll need to split the loaded text as a preparation step. See [this section](/v0.2/docs/concepts/#text-splitters) to learn more about text splitters.

Creating a new index from texts[​](#creating-a-new-index-from-texts "Direct link to Creating a new index from texts")
---------------------------------------------------------------------------------------------------------------------

If you have already prepared the data you want to search over, you can initialize a vector store directly from text chunks:

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm

npm install @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```

#### API Reference:

* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

Which one to pick?[​](#which-one-to-pick "Direct link to Which one to pick?")
-----------------------------------------------------------------------------

Here's a quick guide to help you pick the right vector store for your use case:

* If you're after something that can just run inside your Node.js application, in-memory, without any other servers to stand up, then go for [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib), [Faiss](/v0.2/docs/integrations/vectorstores/faiss), [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you're looking for something that can run in-memory in browser-like environments, then go for [MemoryVectorStore](/v0.2/docs/integrations/vectorstores/memory) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you come from Python and you were looking for something similar to FAISS, try [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib) or [Faiss](/v0.2/docs/integrations/vectorstores/faiss)
* If you're looking for an open-source full-featured vector database that you can run locally in a docker container, then go for [Chroma](/v0.2/docs/integrations/vectorstores/chroma)
* If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for [Zep](/v0.2/docs/integrations/vectorstores/zep)
* If you're looking for an open-source production-ready vector database that you can run locally (in a docker container) or hosted in the cloud, then go for [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate).
* If you're using Supabase already then look at the [Supabase](/v0.2/docs/integrations/vectorstores/supabase) vector store to use the same Postgres database for your embeddings too
* If you're looking for a production-ready vector store you don't have to worry about hosting yourself, then go for [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone)
* If you are already utilizing SingleStore, or if you find yourself in need of a distributed, high-performance database, you might want to consider the [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore) vector store.
* If you are looking for an online MPP (Massively Parallel Processing) data warehousing service, you might want to consider the [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb) vector store.
* If you're in search of a cost-effective vector database that allows you to run vector search with SQL, look no further than [MyScale](/v0.2/docs/integrations/vectorstores/myscale).
* If you're in search of a vector database that you can load from both the browser and server side, check out [CloseVector](/v0.2/docs/integrations/vectorstores/closevector). It's a vector database that aims to be cross-platform. * If you're looking for a scalable, open-source columnar database with excellent performance for analytical queries, then consider [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse). Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to load data into a vectorstore. Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How use a vector store to retrieve data ](/v0.2/docs/how_to/vectorstore_retriever)[ Next Conceptual guide ](/v0.2/docs/concepts) * [Creating a new index](#creating-a-new-index) * [Creating a new index from texts](#creating-a-new-index-from-texts) * [Which one to pick?](#which-one-to-pick) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/versions/overview
* [](/v0.2/) * Versions * Overview On this page

LangChain Over Time
===================

Due to the rapidly evolving field, LangChain has also evolved rapidly. This document serves to outline at a high level what has changed and why.

0.1[​](#01 "Direct link to 0.1")
--------------------------------

The 0.1 release marked a few key changes for LangChain. By this point, the LangChain ecosystem had become large both in the breadth of what it enabled as well as the community behind it.

**Split of packages**

LangChain was split up into several packages to increase modularity and decrease bloat. First, `@langchain/core` was created as a lightweight core library containing the base abstractions, some core implementations of those abstractions, and the generic runtime for creating chains. Next, all third party integrations were split into `@langchain/community` or their own individual partner packages. Higher level chains and agents remain in `langchain`.

**`Runnables`**

Having a specific class for each chain was proving not very scalable or flexible. Although these classes were left alone (without deprecation warnings) for this release, in the documentation much more space was given to generic runnables.

< 0.1[​](#-01 "Direct link to < 0.1")
-------------------------------------

There are several key characteristics of LangChain pre-0.1.

**Singular Package**

LangChain was largely a singular package. This meant that ALL integrations lived inside `langchain`.

**Chains as classes**

Most high level chains were largely their own classes. There was a base `Chain` class from which all chains inherited. This meant that in order to change the logic inside a chain you basically had to modify the source code. There were a few chains that were meant to be more generic (`SequentialChain`, `RouterChain`).

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous Conceptual guide ](/v0.2/docs/concepts)[ Next v0.2 ](/v0.2/docs/versions/v0_2/)

* [0.1](#01) * [< 0.1](#-01)
null
https://js.langchain.com/v0.2/docs/versions/v0_2/
* [](/v0.2/) * Versions * v0.2 On this page

LangChain v0.2
==============

LangChain v0.2 was released in May 2024. This release includes a number of breaking changes and deprecations. This document contains a guide on upgrading to 0.2.x, as well as a list of deprecations and breaking changes.

Reference

* [Migrating to Astream Events v2](/v0.2/docs/versions/v0_2/migrating_astream_events)

Migration[​](#migration "Direct link to Migration")
---------------------------------------------------

This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:

1. Install the 0.2.x versions of `@langchain/core` and `langchain`, and upgrade to recent versions of other packages that you may be using (e.g. `@langchain/langgraph`, `@langchain/community`, `@langchain/openai`, etc.)
2. Verify that your code runs properly with the new packages (e.g., unit tests pass)
3. Install a recent version of the `@langchain/scripts` package, and use the migration tool to replace old imports used by your code with the new imports. (See instructions below.)
4. Manually resolve any remaining deprecation warnings
5. Re-run unit tests

### Upgrade to new imports[​](#upgrade-to-new-imports "Direct link to Upgrade to new imports")

We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly. The migration script has the following limitations:

1. It's limited to helping users move from old imports to new imports. It doesn't help address other deprecations.
2. It can't handle imports that involve `as`.
3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., function body).
4. It will likely miss some deprecated imports.

Here is an example of the import changes that the migration script can help apply automatically:

| From Package | To Package | Deprecated Import | New Import |
| --- | --- | --- | --- |
| `langchain` | `@langchain/community` | `import { UpstashVectorStore } from "langchain/vectorstores/upstash"` | `import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash"` |
| `@langchain/community` | `@langchain/openai` | `import { ChatOpenAI } from "@langchain/community/chat_models/openai"` | `import { ChatOpenAI } from "@langchain/openai"` |
| `langchain` | `@langchain/core` | `import { Document } from "langchain/schema/document"` | `import { Document } from "@langchain/core/documents"` |
| `langchain` | `@langchain/textsplitters` | `import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"` | `import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"` |

#### Deprecation timeline[​](#deprecation-timeline "Direct link to Deprecation timeline")

We have two main types of deprecations:

1. Code that was moved from `langchain` into another package (e.g., `@langchain/community`). If you try to import it from `langchain`, it will fail since the entrypoint has been removed.
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things. (e.g., `predictMessages` method in ChatModels has been deprecated in favor of `invoke`).

Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.

#### Installation[​](#installation "Direct link to Installation")

note The 0.2.X migration script is only available in version `0.0.14-rc.1` or later.
* npm
* Yarn
* pnpm

npm i @langchain/scripts@0.0.14-rc.1

yarn add @langchain/scripts@0.0.14-rc.1

pnpm add @langchain/scripts@0.0.14-rc.1

#### Usage[​](#usage "Direct link to Usage")

Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).

For example, say your code still uses `import ChatOpenAI from "@langchain/community/chat_models/openai";`: Invoking the migration script will replace this import with `import ChatOpenAI from "@langchain/openai";`.

```typescript
import { updateEntrypointsFrom0_x_xTo0_2_x } from "@langchain/scripts/migrations";

// This path is used in the following glob pattern: `${projectPath}/**/*.{ts,tsx,js,jsx}`.
const pathToMyProject = "...";

updateEntrypointsFrom0_x_xTo0_2_x({
  projectPath: pathToMyProject,
  shouldLog: true,
});
```

#### Other options[​](#other-options "Direct link to Other options")

```typescript
updateEntrypointsFrom0_x_xTo0_2_x({
  projectPath: pathToMyProject,
  tsConfigPath: "tsconfig.json", // Path to the tsConfig file. This will be used to load all the project files into the script.
  testRun: true, // If true, the script will not save any changes, but will log the changes that would be made.
  files: ["..."], // A list of .ts file paths to check. If this is provided, the script will only check these files.
});
```

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous Overview ](/v0.2/docs/versions/overview)[ Next streamEvents v2 ](/v0.2/docs/versions/v0_2/migrating_astream_events)

* [Migration](#migration) * [Upgrade to new imports](#upgrade-to-new-imports)
null
https://js.langchain.com/v0.2/docs/versions/release_policy
* [](/v0.2/) * Versions * Release Policy On this page

LangChain releases
==================

The LangChain ecosystem is composed of different component packages (e.g., `@langchain/core`, `langchain`, `@langchain/community`, `@langchain/langgraph`, partner packages, etc.).

Versioning[​](#versioning "Direct link to Versioning")
------------------------------------------------------

### `langchain` and `@langchain/core`[​](#langchain-and-langchaincore "Direct link to langchain-and-langchaincore")

`langchain` and `@langchain/core` follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, and so are currently versioning the packages with a major version of 0.

Minor version increases will occur for:

* Breaking changes for any public interfaces marked as `beta`.

Patch version increases will occur for:

* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features

When upgrading between minor versions, users should review the list of breaking changes and deprecations.

From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**-rc.**N**. For example, `0.2.0-rc.1`. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., `0.2.0-rc.2`).

### Other packages in the langchain ecosystem[​](#other-packages-in-the-langchain-ecosystem "Direct link to Other packages in the langchain ecosystem")

Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `@langchain/core`.

Release cadence[​](#release-cadence "Direct link to Release cadence")
---------------------------------------------------------------------

We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `@langchain/core` by at least 2-3 months, as such releases may contain breaking changes. Patch versions are released frequently as they contain bug fixes and new features.

API stability[​](#api-stability "Direct link to API stability")
---------------------------------------------------------------

The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `@langchain/core` will continue to evolve to better serve the needs of our users. Even though both `langchain` and `@langchain/core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.

* Breaking changes to the public API will result in a minor version bump (the second digit)
* Any bug fixes or new features will result in a patch version bump (the third digit)

We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.

### Stability of other packages[​](#stability-of-other-packages "Direct link to Stability of other packages")

The stability of other packages in the LangChain ecosystem may vary:

* `@langchain/community` is a community maintained package that contains 3rd party integrations.
While we do our best to review and test changes in `@langchain/community`, `@langchain/community` is expected to experience more breaking changes than `langchain` and `@langchain/core` as it contains many community contributions.
* Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.

### What is "API stability"?[​](#what-is-a-api-stability "Direct link to What is \"API stability\"?")

API stability means:

* All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
* If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
* If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called.

### **APIs marked as internal**[​](#apis-marked-as-internal "Direct link to apis-marked-as-internal")

Certain APIs are explicitly marked as “internal” in a couple of ways:

* Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
* Functions, methods, and other objects prefixed by a leading underscore (**`_`**). If any method starts with a single **`_`**, it’s an internal API.
* **Exception:** Certain methods are prefixed with `_`, but do not contain an implementation. These methods are _meant_ to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.

Deprecation policy[​](#deprecation-policy "Direct link to Deprecation policy")
------------------------------------------------------------------------------

We will generally avoid deprecating features until a better alternative is available. When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `@langchain/core`. After that, the feature will be removed.

Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.

In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous streamEvents v2 ](/v0.2/docs/versions/v0_2/migrating_astream_events)[ Next Packages ](/v0.2/docs/versions/packages)

* [Versioning](#versioning) * [`langchain` and `@langchain/core`](#langchain-and-langchaincore) * [Other packages in the langchain ecosystem](#other-packages-in-the-langchain-ecosystem) * [Release cadence](#release-cadence) * [API stability](#api-stability) * [Stability of other packages](#stability-of-other-packages) * [What is "API stability"?](#what-is-a-api-stability) * [**APIs marked as internal**](#apis-marked-as-internal) * [Deprecation policy](#deprecation-policy)
null
https://js.langchain.com/v0.2/docs/versions/packages
* [](/v0.2/) * Versions * Packages On this page 📕 Package versioning ===================== As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a maintainer and published to [NPM](https://npm.org/). The different packages are versioned slightly differently. `@langchain/core`[​](#langchaincore "Direct link to langchaincore") ------------------------------------------------------------------- `@langchain/core` is currently on version `0.1.x`. As `@langchain/core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything marked as `beta` (you can see this in the API reference and will see warnings when using such functionality). The reason for beta features is that given the rate of change of the field, being able to move quickly is still a priority. Minor version increases will occur for: * Breaking changes for any public interfaces marked as `beta`. Patch version increases will occur for: * Bug fixes * New features * Any changes to private interfaces * Any changes to `beta` features `langchain`[​](#langchain "Direct link to langchain") ----------------------------------------------------- `langchain` is currently on version `0.2.x` Minor version increases will occur for: * Breaking changes for any public interfaces NOT marked as `beta`. Patch version increases will occur for: * Bug fixes * New features * Any changes to private interfaces * Any changes to `beta` features. `@langchain/community`[​](#langchaincommunity "Direct link to langchaincommunity") ---------------------------------------------------------------------------------- `@langchain/community` is currently on version `0.2.x` All changes will be accompanied by the same type of version increase as changes in `langchain`. Partner Packages[​](#partner-packages "Direct link to Partner Packages") ------------------------------------------------------------------------ Partner packages are versioned independently. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Release Policy ](/v0.2/docs/versions/release_policy)[ Next Security ](/v0.2/docs/security) * [`@langchain/core`](#langchaincore) * [`langchain`](#langchain) * [`@langchain/community`](#langchaincommunity) * [Partner Packages](#partner-packages)
null
https://js.langchain.com/v0.2/docs/security
* [](/v0.2/) * Security On this page Security ======== LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources. Best Practices[​](#best-practices "Direct link to Best Practices") ------------------------------------------------------------------ When building such applications developers should remember to follow good security practices: * [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application. * **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data. * [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_\(computing\)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use. Risks of not doing so include, but are not limited to: * Data corruption or loss. * Unauthorized access to confidential information. * Compromised performance or availability of critical resources. Example scenarios with mitigation strategies: * A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container. * A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse. * A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials. If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications. 
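As a hedged illustration of the least-privilege advice above (a sketch, not an official pattern), here is a file-reading tool scoped to a single workspace directory, built with the `tool` helper from `@langchain/core`. The directory path and tool name are assumptions for the example:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { readFile } from "node:fs/promises";
import path from "node:path";

// Illustrative safe root; the agent can never read outside it.
const SAFE_DIR = path.resolve("./agent_workspace");

const readFileTool = tool(
  async ({ relativePath }) => {
    const resolved = path.resolve(SAFE_DIR, relativePath);
    // Reject path traversal (e.g. "../../etc/passwd") before touching the filesystem.
    if (!resolved.startsWith(SAFE_DIR + path.sep)) {
      return "Error: access outside the workspace is not permitted.";
    }
    return await readFile(resolved, "utf-8");
  },
  {
    name: "read_workspace_file",
    description: "Read a text file from the agent's workspace directory (read-only).",
    schema: z.object({ relativePath: z.string() }),
  }
);
```

Note that this is one layer of defense; per the advice above, you would typically combine it with read-only credentials and sandboxing (e.g., a container) rather than rely on path checks alone.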
Reporting a Vulnerability[​](#reporting-a-vulnerability "Direct link to Reporting a Vulnerability")
---------------------------------------------------------------------------------------------------

Please report security vulnerabilities by email to [[email protected]](mailto:[email protected]). This will ensure the issue is promptly triaged and acted upon as needed.

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous Packages ](/v0.2/docs/versions/packages)

* [Best Practices](#best-practices) * [Reporting a Vulnerability](#reporting-a-vulnerability)
null
https://js.langchain.com/v0.2/docs/how_to/graph_constructing
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to construct knowledge graphs On this page

How to construct knowledge graphs
=================================

In this guide we’ll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructed graph can then be used as a knowledge base in a RAG application. At a high level, the steps of constructing a knowledge graph from text are:

1. Extracting structured information from text: A model is used to extract structured graph information from text.
2. Storing into graph database: Storing the extracted structured graph information into a graph database enables downstream RAG applications.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

#### Install dependencies[​](#install-dependencies "Direct link to Install dependencies")

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i langchain @langchain/community @langchain/openai neo4j-driver zod

yarn add langchain @langchain/community @langchain/openai neo4j-driver zod

pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod

#### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables")

We’ll use OpenAI in this example:

```
OPENAI_API_KEY=your-api-key
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```

Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.

```
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```

The below example will create a connection with a Neo4j database.

```typescript
import "neo4j-driver";
import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";

const url = Deno.env.get("NEO4J_URI");
const username = Deno.env.get("NEO4J_USERNAME");
const password = Deno.env.get("NEO4J_PASSWORD");

const graph = await Neo4jGraph.initialize({ url, username, password });
```

LLM Graph Transformer[​](#llm-graph-transformer "Direct link to LLM Graph Transformer")
---------------------------------------------------------------------------------------

Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The LLMGraphTransformer converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The selection of the LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm";

const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-4-turbo-preview",
});

const llmGraphTransformer = new LLMGraphTransformer({
  llm: model,
});
```

Now we can pass in example text and examine the results.
```typescript
import { Document } from "@langchain/core/documents";

let text = `Marie Curie, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.
She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.
Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.
She was, in 1906, the first woman to become a professor at the University of Paris.`;

const result = await llmGraphTransformer.convertToGraphDocuments([
  new Document({ pageContent: text }),
]);
console.log(`Nodes: ${result[0].nodes.length}`);
console.log(`Relationships:${result[0].relationships.length}`);
```

Nodes: 8
Relationships:7

Note that the graph construction process is non-deterministic since we are using an LLM. Therefore, you might get slightly different results on each execution. Examine the following image to better grasp the structure of the generated knowledge graph.

![graph_construction1.png](/v0.2/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png)

Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements.

```typescript
const llmGraphTransformerFiltered = new LLMGraphTransformer({
  llm: model,
  allowedNodes: ["PERSON", "COUNTRY", "ORGANIZATION"],
  allowedRelationships: ["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],
  strictMode: false,
});

const result_filtered = await llmGraphTransformerFiltered.convertToGraphDocuments([
  new Document({ pageContent: text }),
]);
console.log(`Nodes: ${result_filtered[0].nodes.length}`);
console.log(`Relationships:${result_filtered[0].relationships.length}`);
```

Nodes: 6
Relationships:4

For a better understanding of the generated graph, we can again visualize it.

![graph_construction1.png](/v0.2/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png)

Storing to graph database[​](#storing-to-graph-database "Direct link to Storing to graph database")
---------------------------------------------------------------------------------------------------

The generated graph documents can be stored to a graph database using the `addGraphDocuments` method.

```typescript
await graph.addGraphDocuments(result_filtered);
```

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous How to build an LLM generated UI ](/v0.2/docs/how_to/generative_ui)[ Next How to map values to a database ](/v0.2/docs/how_to/graph_mapping)

* [Setup](#setup) * [LLM Graph Transformer](#llm-graph-transformer) * [Storing to graph database](#storing-to-graph-database)
null
https://js.langchain.com/v0.2/docs/how_to/functions
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to run custom functions On this page

How to run custom functions
===========================

Prerequisites This guide assumes familiarity with the following concepts:

* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)

You can use arbitrary functions as [Runnables](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html). This is useful for formatting or when you need functionality not provided by other LangChain components, and custom functions used as Runnables are called [`RunnableLambdas`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableLambda.html).

Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single dict input and unpacks it into multiple arguments.

This guide will cover:

* How to explicitly create a runnable from a custom function using the `RunnableLambda` constructor
* Coercion of custom functions into runnables when used in chains
* How to accept and use run metadata in your custom function
* How to stream with custom functions by having them return generators

Using the constructor[​](#using-the-constructor "Direct link to Using the constructor")
---------------------------------------------------------------------------------------

Below, we explicitly wrap our custom logic using a `RunnableLambda` method:

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";

const lengthFunction = (input: { foo: string }): { length: string } => {
  return {
    length: input.foo.length.toString(),
  };
};

const model = new ChatOpenAI({ model: "gpt-4o" });
const prompt = ChatPromptTemplate.fromTemplate("What is {length} squared?");

const chain = RunnableLambda.from(lengthFunction)
  .pipe(prompt)
  .pipe(model)
  .pipe(new StringOutputParser());

await chain.invoke({ foo: "bar" });
```

"3 squared is \\(3^2\\), which means multiplying 3 by itself. \n" + "\n" + "\\[3^2 = 3 \\times 3 = 9\\]\n" + "\n" + "So, 3 squared"... 6 more characters

Automatic coercion in chains[​](#automatic-coercion-in-chains "Direct link to Automatic coercion in chains")
------------------------------------------------------------------------------------------------------------

When using custom functions in chains with the [`RunnableSequence.from`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html#from) static method, you can omit the explicit `RunnableLambda` creation and rely on coercion.
Here’s a simple example with a function that takes the output from the model and returns the first five letters of it:

```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short story about {topic}"
);
const model = new ChatOpenAI({ model: "gpt-4o" });

const chainWithCoercedFunction = RunnableSequence.from([
  prompt,
  model,
  (input) => input.content.slice(0, 5),
]);

await chainWithCoercedFunction.invoke({ topic: "bears" });
```

"Once "

Note that we didn’t need to wrap the custom function `(input) => input.content.slice(0, 5)` in a `RunnableLambda` method. The custom function is **coerced** into a runnable. See [this section](/v0.2/docs/how_to/sequence/#coercion) for more information.

Passing run metadata[​](#passing-run-metadata "Direct link to Passing run metadata")
------------------------------------------------------------------------------------

Runnable lambdas can optionally accept a [RunnableConfig](https://v02.api.js.langchain.com/interfaces/langchain_core_runnables.RunnableConfig.html) parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs.

```typescript
import { type RunnableConfig } from "@langchain/core/runnables";

const echo = (text: string, config: RunnableConfig) => {
  const prompt = ChatPromptTemplate.fromTemplate(
    "Reverse the following text: {text}"
  );
  const model = new ChatOpenAI({ model: "gpt-4o" });
  const chain = prompt.pipe(model).pipe(new StringOutputParser());
  return chain.invoke({ text }, config);
};

const output = await RunnableLambda.from(echo).invoke("foo", {
  tags: ["my-tag"],
  callbacks: [
    {
      handleLLMEnd: (output) => console.log(output),
    },
  ],
});
```

{ generations: [ [ { text: "oof", message: AIMessage { lc_serializable: true, lc_kwargs: [Object], lc_namespace: [Array], content: "oof", name: undefined, additional_kwargs: [Object], response_metadata: [Object], tool_calls: [], invalid_tool_calls: [] }, generationInfo: { finish_reason: "stop" } } ] ], llmOutput: { tokenUsage: { completionTokens: 2, promptTokens: 13, totalTokens: 15 } }}

Streaming
=========

You can use generator functions (i.e. functions that use the `yield` keyword and behave like iterators) in a chain. The signature of these generators should be `AsyncGenerator<Input> -> AsyncGenerator<Output>`. These are useful for:

* implementing a custom output parser
* modifying the output of a previous step, while preserving streaming capabilities

Here’s an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:
Do not include numbers");const strChain = prompt.pipe(model).pipe(new StringOutputParser());const stream = await strChain.stream({ animal: "bear" });for await (const chunk of stream) { console.log(chunk);} Lion, wolf, tiger, cougar, leopard Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list: // This is a custom parser that splits an iterator of llm tokens// into a list of strings separated by commasasync function* splitIntoList(input) { // hold partial input until we get a comma let buffer = ""; for await (const chunk of input) { // add current chunk to buffer buffer += chunk; // while there are commas in the buffer while (buffer.includes(",")) { // split buffer on comma const commaIndex = buffer.indexOf(","); // yield everything before the comma yield [buffer.slice(0, commaIndex).trim()]; // save the rest for the next iteration buffer = buffer.slice(commaIndex + 1); } } // yield the last chunk yield [buffer.trim()];}const listChain = strChain.pipe(splitIntoList);const stream = await listChain.stream({ animal: "bear" });for await (const chunk of stream) { console.log(chunk);} [ "wolf" ][ "lion" ][ "tiger" ][ "cougar" ][ "cheetah" ] Invoking it gives a full array of values: await listChain.invoke({ animal: "bear" }); [ "lion", "tiger", "wolf", "cougar", "jaguar" ] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you’ve learned a few different ways to use custom logic within your chains, and how to implement streaming. To learn more, see the other how-to guides on runnables in this section. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to filter messages ](/v0.2/docs/how_to/filter_messages)[ Next How to build an LLM generated UI ](/v0.2/docs/how_to/generative_ui) * [Using the constructor](#using-the-constructor) * [Automatic coercion in chains](#automatic-coercion-in-chains) * [Passing run metadata](#passing-run-metadata) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/how_to/embed_text
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to embed text data On this page

How to embed text data
======================

info Head to [Integrations](/v0.2/docs/integrations/text_embedding) for documentation on built-in integrations with text embedding providers.

Prerequisites This guide assumes familiarity with the following concepts:

* [Embeddings](/v0.2/docs/concepts/#embedding-models)

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.

The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).

Get started[​](#get-started "Direct link to Get started")
---------------------------------------------------------

Below is an example of how to use the OpenAI embeddings. Embeddings occasionally have different embedding methods for queries versus documents, so the embedding class exposes `embedQuery` and `embedDocuments` methods.

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

import { OpenAIEmbeddings } from "@langchain/openai";const embeddings = new OpenAIEmbeddings();

Embed queries[​](#embed-queries "Direct link to Embed queries")
---------------------------------------------------------------

const res = await embeddings.embedQuery("Hello world");/*[ -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806, 0.012571548, -0.019156644, 0.009036391, -0.010227379, -0.026945334, 0.022861943, 0.010321903, -0.023479493, -0.0066544134, 0.007977734, 0.0026371893, 0.025206111, -0.012048521, 0.012943339, 0.013094575, -0.010580265, -0.003509951, 0.004070787, 0.008639394, -0.020631202, ... 1511 more items]*/

Embed documents[​](#embed-documents "Direct link to Embed documents")
---------------------------------------------------------------------

const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);/*[ [ -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806, 0.012571548, -0.019156644, 0.009036391, -0.010227379, -0.026945334, 0.022861943, 0.010321903, -0.023479493, -0.0066544134, 0.007977734, 0.0026371893, 0.025206111, -0.012048521, 0.012943339, 0.013094575, -0.010580265, -0.003509951, 0.004070787, 0.008639394, -0.020631202, ... 1511 more items ] [ -0.009446913, -0.013253193, 0.013174579, 0.0057552797, -0.038993083, 0.0077763423, -0.0260478, -0.0114384955, -0.0022683728, -0.016509168, 0.041797023, 0.01787183, 0.00552271, -0.0049789557, 0.018146982, -0.01542166, 0.033752076, 0.006112323, 0.023872782, -0.016535373, -0.006623321, 0.016116094, -0.0061090477, -0.0044155475, -0.016627092, ... 1511 more items ]]*/

A short sketch comparing these vectors with cosine similarity follows below.
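As a supplementary illustration, here is a minimal sketch of scoring documents against a query with cosine similarity; the hand-rolled `cosine` helper is an assumption for the example, not a LangChain API:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

// Hand-rolled cosine similarity, for illustration only.
const cosine = (a: number[], b: number[]) => {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

const embeddings = new OpenAIEmbeddings();
const queryVec = await embeddings.embedQuery("Hello world");
const docVecs = await embeddings.embedDocuments(["Hello world", "Bye bye"]);

// Higher scores indicate documents closer to the query in vector space.
docVecs.forEach((vec, i) => console.log(`doc ${i}: ${cosine(queryVec, vec).toFixed(3)}`));
```

In practice a vector store performs this comparison for you, as covered in the vector store guides.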
Next steps[​](#next-steps "Direct link to Next steps")
------------------------------------------------------

You've now learned how to use embedding models with queries and text. Next, check out how to [avoid excessively recomputing embeddings with caching](/v0.2/docs/how_to/caching_embeddings), or the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).

* * *

#### Was this page helpful?

#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

[ Previous How to stream chat model responses ](/v0.2/docs/how_to/chat_streaming)[ Next How to use few shot examples in chat models ](/v0.2/docs/how_to/few_shot_examples_chat)

* [Get started](#get-started) * [Embed queries](#embed-queries) * [Embed documents](#embed-documents) * [Next steps](#next-steps)
https://js.langchain.com/v0.2/docs/how_to/generative_ui
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to build an LLM generated UI How to build an LLM generated UI ================================ This guide will walk through some high level concepts and code snippets for building generative UIs using LangChain.js. To see the full code for generative UI, [click here to visit our official LangChain Next.js template](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/generative_ui/README.md). The sample implements a tool calling agent, which outputs an interactive UI element when streaming intermediate outputs of tool calls to the client. We introduce two utilities that wrap the AI SDK to make it easier to yield React elements inside runnables and tool calls: [`createRunnableUI`](https://github.com/langchain-ai/langchain-nextjs-template/blob/7f764d558682214d50b064f4293667123a31e6fe/app/generative_ui/utils/server.tsx#L89) and [`streamRunnableUI`](https://github.com/langchain-ai/langchain-nextjs-template/blob/7f764d558682214d50b064f4293667123a31e6fe/app/generative_ui/utils/server.tsx#L126). * The `streamRunnableUI` executes the provided Runnable with the `streamEvents` method and sends every `stream` event to the client via the React Server Components stream. * The `createRunnableUI` wraps the `createStreamableUI` function from the AI SDK to properly hook into the Runnable event stream. The usage is then as follows: "use server";const tool = new DynamicStructuredTool({ // ... func: async (input, config) => { // create a new streamable UI and wire it up to the streamEvents const stream = createRunnableUI(config); stream.update(<div>Searching...</div>); const result = await images(input); // update the UI element with the rendered results stream.done( <Images images={result.images_results .map((image) => image.thumbnail) .slice(0, input.limit)} /> ); return `[Returned ${result.images_results.length} images]`; },});// add LLM, prompt, etc...const tools = [tool];export const agentExecutor = new AgentExecutor({ agent: createToolCallingAgent({ llm, tools, prompt }), tools,}); async function agent(inputs: { input: string }) { "use server"; return streamRunnableUI(agentExecutor, inputs);}export const EndpointsContext = exposeEndpoints({ agent }); In order to ensure all of the client components are included in the bundle, we need to wrap all of the Server Actions into the `exposeEndpoints` method. These endpoints will be accessible from the client via the Context API, seen in the `useActions` hook. "use client";import type { EndpointsContext } from "./agent";export default function Page() { const actions = useActions<typeof EndpointsContext>(); const [node, setNode] = useState(); return ( <div> {node} <button onClick={async () => { setNode(await actions.agent({ input: "cats" })); }} > Get images of cats </button> </div> );}
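To make the server-side flow more concrete, below is a heavily simplified sketch of what a utility like `streamRunnableUI` could look like. This is an illustrative approximation under stated assumptions (it only relays LLM token events into a single streamable UI and assumes string chunk content), not the template's actual implementation; see the linked source above for the real version:

```tsx
import { createStreamableUI } from "ai/rsc";
import type { Runnable } from "@langchain/core/runnables";

// Simplified sketch: run the Runnable with streamEvents and surface progress
// through a single streamable UI. The real utility also dispatches events so
// that individual tool calls can render their own nested UI elements.
async function streamRunnableUISketch(runnable: Runnable, inputs: unknown) {
  const ui = createStreamableUI(<div>Thinking...</div>);

  (async () => {
    for await (const event of runnable.streamEvents(inputs, { version: "v1" })) {
      // Append each streamed LLM token (assuming string content) to the UI.
      if (event.event === "on_llm_stream") {
        ui.append(<span>{String(event.data.chunk?.content ?? "")}</span>);
      }
    }
    ui.done(<div>Done</div>);
  })();

  return ui.value;
}
```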
https://js.langchain.com/v0.2/docs/how_to/filter_messages
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to filter messages On this page How to filter messages ====================== The `filterMessages` function is available in `@langchain/core` version `0.2.8` and above. In more complex chains and agents we might track state with a list of messages. This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent. The `filterMessages` utility makes it easy to filter messages by type, id, or name. Basic usage[​](#basic-usage "Direct link to Basic usage") --------------------------------------------------------- import { HumanMessage, SystemMessage, AIMessage, filterMessages,} from "@langchain/core/messages";const messages = [ new SystemMessage({ content: "you are a good assistant", id: "1" }), new HumanMessage({ content: "example input", id: "2", name: "example_user" }), new AIMessage({ content: "example output", id: "3", name: "example_assistant", }), new HumanMessage({ content: "real input", id: "4", name: "bob" }), new AIMessage({ content: "real output", id: "5", name: "alice" }),];filterMessages(messages, { includeTypes: ["human"] }); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: 'example input', id: '2', name: 'example_user', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'example input', name: 'example_user', additional_kwargs: {}, response_metadata: {}, id: '2' }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'real input', id: '4', name: 'bob', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real input', name: 'bob', additional_kwargs: {}, response_metadata: {}, id: '4' }] filterMessages(messages, { excludeNames: ["example_user", "example_assistant"],}); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: 'you are a good assistant', id: '1', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'you are a good assistant', name: undefined, additional_kwargs: {}, response_metadata: {}, id: '1' }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'real input', id: '4', name: 'bob', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real input', name: 'bob', additional_kwargs: {}, response_metadata: {}, id: '4' }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'real output', id: '5', name: 'alice', tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real output', name: 'alice', additional_kwargs: {}, response_metadata: {}, id: '5', tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }] filterMessages(messages, { includeTypes: [HumanMessage, AIMessage], excludeIds: ["3"],}); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: 'example input', id: '2', name: 'example_user', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'example input', name: 'example_user', additional_kwargs: {}, response_metadata: {}, id: '2' }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'real input', id: '4', name: 'bob', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real input', name: 'bob', 
additional_kwargs: {}, response_metadata: {}, id: '4' }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'real output', id: '5', name: 'alice', tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real output', name: 'alice', additional_kwargs: {}, response_metadata: {}, id: '5', tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }] Chaining[​](#chaining "Direct link to Chaining") ------------------------------------------------ `filterMessages` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain: import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0,});// Notice we don't pass in messages. This creates// a RunnableLambda that takes messages as inputconst filter_ = filterMessages({ excludeNames: ["example_user", "example_assistant"],});const chain = filter_.pipe(llm);await chain.invoke(messages); AIMessage { lc_serializable: true, lc_kwargs: { content: [], additional_kwargs: { id: 'msg_01S2LQc1NLhtPHurW3jNRsCK', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: [Object] }, tool_calls: [], usage_metadata: { input_tokens: 16, output_tokens: 3, total_tokens: 19 }, invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: [], name: undefined, additional_kwargs: { id: 'msg_01S2LQc1NLhtPHurW3jNRsCK', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: { input_tokens: 16, output_tokens: 3 } }, response_metadata: { id: 'msg_01S2LQc1NLhtPHurW3jNRsCK', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: { input_tokens: 16, output_tokens: 3 } }, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: { input_tokens: 16, output_tokens: 3, total_tokens: 19 }} Looking at [the LangSmith trace](https://smith.langchain.com/public/a48c7935-04a8-4e87-9893-b14064ddbfc4/r) we can see that before the messages are passed to the model they are filtered.
Looking at just `filter_`, we can see that it's a Runnable object that can be invoked like all Runnables: await filter_.invoke(messages); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: 'you are a good assistant', id: '1', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'you are a good assistant', name: undefined, additional_kwargs: {}, response_metadata: {}, id: '1' }, HumanMessage { lc_serializable: true, lc_kwargs: { content: 'real input', id: '4', name: 'bob', additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real input', name: 'bob', additional_kwargs: {}, response_metadata: {}, id: '4' }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'real output', id: '5', name: 'alice', tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: 'real output', name: 'alice', additional_kwargs: {}, response_metadata: {}, id: '5', tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }] API reference[​](#api-reference "Direct link to API reference") --------------------------------------------------------------- For a complete description of all arguments head to the [API reference](https://api.js.langchain.com/functions/langchain_core_messages.filterMessages.html).
https://js.langchain.com/v0.2/docs/how_to/chat_streaming
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to stream chat model responses On this page How to stream chat model responses ================================== All [chat models](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html) implement the [Runnable interface](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html), which comes with **default** implementations of standard runnable methods (i.e. `invoke`, `batch`, `stream`, `streamEvents`). The **default** streaming implementation provides an `AsyncGenerator` that yields a single value: the final output from the underlying chat model provider. tip The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface. The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support. See which [integrations support token-by-token streaming here](/v0.2/docs/integrations/chat/). Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- Below, we use a `---` to help visualize the delimiter between tokens. ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); for await (const chunk of await model.stream( "Write me a 1 verse song about goldfish on the moon")) { console.log(`${chunk.content}---`);} ---Here--- is--- a------1------verse--- song--- about--- gol---dfish--- on--- the--- moon---:---Gol---dfish--- on--- the--- moon---,--- swimming--- through--- the--- sk---ies---,---Floating--- in--- the--- darkness---,--- beneath--- the--- lunar--- eyes---.---Weight---less--- as--- they--- drift---,--- through--- the--- endless--- voi---d,---D---rif---ting---,--- swimming---,--- exploring---,--- this--- new--- worl---d unexp---lo---ye---d.--------- Stream events[​](#stream-events "Direct link to Stream events") --------------------------------------------------------------- Chat models also support the standard [streamEvents()](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#streamEvents) method. This method is useful if you’re streaming output from a larger LLM application that contains multiple steps (e.g., a chain composed of a prompt, chat model and parser). 
let idx = 0;for await (const event of model.streamEvents( "Write me a 1 verse song about goldfish on the moon", { version: "v1", })) { idx += 1; if (idx >= 5) { console.log("...Truncated"); break; } console.log(event);} { run_id: "a84e1294-d281-4757-8f3f-dc4440612949", event: "on_llm_start", name: "ChatAnthropic", tags: [], metadata: {}, data: { input: "Write me a 1 verse song about goldfish on the moon" }}{ event: "on_llm_stream", run_id: "a84e1294-d281-4757-8f3f-dc4440612949", tags: [], metadata: {}, name: "ChatAnthropic", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: { id: "msg_01DqDQ9in33ZhmrCzdZaRNMZ", type: "message", role: "assistant", model: "claude-3-haiku-20240307" }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: { id: "msg_01DqDQ9in33ZhmrCzdZaRNMZ", type: "message", role: "assistant", model: "claude-3-haiku-20240307" }, response_metadata: {}, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } }}{ event: "on_llm_stream", run_id: "a84e1294-d281-4757-8f3f-dc4440612949", tags: [], metadata: {}, name: "ChatAnthropic", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "Here", additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Here", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } }}{ event: "on_llm_stream", run_id: "a84e1294-d281-4757-8f3f-dc4440612949", tags: [], metadata: {}, name: "ChatAnthropic", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " is", additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " is", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } }}...Truncated
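Each chunk yielded by `.stream()` (and carried in the `on_llm_stream` events above) is an `AIMessageChunk`. If you also need the fully aggregated message once streaming finishes, chunks can be merged with their `concat` method. A minimal sketch, assuming the Anthropic model from above:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import type { AIMessageChunk } from "@langchain/core/messages";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

let finalChunk: AIMessageChunk | undefined;
for await (const chunk of await model.stream("Tell me a joke.")) {
  // Merge each incoming chunk into the running aggregate.
  finalChunk = finalChunk === undefined ? chunk : finalChunk.concat(chunk);
}

// The aggregated content of the whole streamed response.
console.log(finalChunk?.content);
```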
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now seen a few ways you can stream chat model responses. Next, check out this guide for more on [streaming with other LangChain modules](/v0.2/docs/how_to/streaming).
https://js.langchain.com/v0.2/docs/how_to/graph_semantic
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add a semantic layer over the database On this page How to add a semantic layer over the database ============================================= You can use database queries to retrieve information from a graph database like Neo4j. One option is to use LLMs to generate Cypher statements. While that option provides excellent flexibility, the solution can be brittle and may not consistently generate precise Cypher statements. Instead of generating Cypher statements, we can implement Cypher templates as tools in a semantic layer that an LLM agent can interact with. ![graph_semantic.png](/v0.2/assets/images/graph_semantic-365248d76b7862193c33f44eaa6ecaeb.png) Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i langchain @langchain/community @langchain/openai neo4j-driver zod yarn add langchain @langchain/community @langchain/openai neo4j-driver zod pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We’ll use OpenAI in this example: OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database. NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password" The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors. import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery); Schema refreshed successfully. [] Custom tools with Cypher templates[​](#custom-tools-with-cypher-templates "Direct link to Custom tools with Cypher templates") ------------------------------------------------------------------------------------------------------------------------------ A semantic layer consists of various tools exposed to an LLM that it can use to interact with a knowledge graph. They can vary in complexity. You can think of each tool in a semantic layer as a function. The function we will implement is to retrieve information about movies or their cast.
const descriptionQuery = `MATCH (m:Movie|Person)WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidateMATCH (m)-[r:ACTED_IN|HAS_GENRE]-(t)WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as namesWITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as typesWITH m, collect(types) as contextsWITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name) + "\nyear: "+coalesce(m.released,"") +"\n" + reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as contextRETURN context LIMIT 1`;const getInformation = async (entity: string) => { try { const data = await graph.query(descriptionQuery, { candidate: entity }); return data[0]["context"]; } catch (error) { return "No information was found"; }}; You can observe that we have defined the Cypher statement used to retrieve information. Therefore, we can avoid generating Cypher statements and use the LLM agent only to populate the input parameters. To provide additional information to an LLM agent about when to use the tool and its input parameters, we wrap the function as a tool. import { StructuredTool } from "@langchain/core/tools";import { z } from "zod";const informationInput = z.object({ entity: z.string().describe("movie or a person mentioned in the question"),});class InformationTool extends StructuredTool { schema = informationInput; name = "Information"; description = "useful for when you need to answer questions about various actors or movies"; async _call(input: z.infer<typeof informationInput>): Promise<string> { return getInformation(input.entity); }} OpenAI Agent[​](#openai-agent "Direct link to OpenAI Agent") ------------------------------------------------------------ LangChain expression language makes it very convenient to define an agent to interact with a graph database over the semantic layer. import { ChatOpenAI } from "@langchain/openai";import { AgentExecutor } from "langchain/agents";import { formatToOpenAIFunctionMessages } from "langchain/agents/format_scratchpad";import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser";import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";import { RunnableSequence } from "@langchain/core/runnables";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const tools = [new InformationTool()];const llmWithTools = llm.bind({ functions: tools.map(convertToOpenAIFunction),});const prompt = ChatPromptTemplate.fromMessages([ [ "system", "You are a helpful assistant that finds information about movies and recommends them. If tools require follow up questions, make sure to ask the user for clarification.
Make sure to include any available options that need to be clarified in the follow up questions. Do only the things the user specifically requested.", ], new MessagesPlaceholder("chat_history"), ["human", "{input}"], new MessagesPlaceholder("agent_scratchpad"),]);const _formatChatHistory = (chatHistory) => { const buffer: Array<BaseMessage> = []; for (const [human, ai] of chatHistory) { buffer.push(new HumanMessage({ content: human })); buffer.push(new AIMessage({ content: ai })); } return buffer;};const agent = RunnableSequence.from([ { input: (x) => x.input, chat_history: (x) => { if ("chat_history" in x) { return _formatChatHistory(x.chat_history); } return []; }, agent_scratchpad: (x) => { if ("steps" in x) { return formatToOpenAIFunctionMessages(x.steps); } return []; }, }, prompt, llmWithTools, new OpenAIFunctionsAgentOutputParser(),]);const agentExecutor = new AgentExecutor({ agent, tools }); await agentExecutor.invoke({ input: "Who played in Casino?" }); { input: "Who played in Casino?", output: 'The movie "Casino" starred James Woods, Joe Pesci, Robert De Niro, and Sharon Stone.'}
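Because the agent's prompt includes a `chat_history` placeholder, follow-up questions can be asked by passing previous turns as `[human, ai]` pairs, matching the `_formatChatHistory` helper above. A minimal sketch (the exact answer will depend on the model and your data):

```typescript
await agentExecutor.invoke({
  input: "Which of those actors also directed a movie?",
  chat_history: [
    [
      "Who played in Casino?",
      'The movie "Casino" starred James Woods, Joe Pesci, Robert De Niro, and Sharon Stone.',
    ],
  ],
});
```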
https://js.langchain.com/v0.2/docs/introduction/#__docusaurus_skipToContent_fallback
* [](/v0.2/) * Introduction On this page Introduction ============ **LangChain** is a framework for developing applications powered by large language models (LLMs). LangChain simplifies every stage of the LLM application lifecycle: * **Development**: Build your applications using LangChain's open-source [building blocks](/v0.2/docs/how_to/#langchain-expression-language-lcel) and [components](/v0.2/docs/how_to/). Hit the ground running using [third-party integrations](/v0.2/docs/integrations/platforms/). * **Productionization**: Use [LangSmith](https://docs.smith.langchain.com) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. * **Deployment**: Turn any chain into an API with [LangServe](https://www.langchain.com/langserve). ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack_dark.svg "LangChain Framework Overview") Concretely, the framework consists of the following open-source libraries: * **`@langchain/core`**: Base abstractions and LangChain Expression Language. * **`@langchain/community`**: Third party integrations. * Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**. * **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture. * **[langgraph](https://langchain-ai.github.io/langgraphjs/)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. * **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications. note These docs focus on the JavaScript LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library. [Tutorials](/v0.2/docs/tutorials)[​](#tutorials "Direct link to tutorials") --------------------------------------------------------------------------- If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials](/v0.2/docs/tutorials). This is the best place to get started. These are good first tutorials: * [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain) * [Build a Chatbot](/v0.2/docs/tutorials/chatbot) * [Build an Agent](/v0.2/docs/tutorials/agents) Explore the full list of tutorials [here](/v0.2/docs/tutorials). [How-To Guides](/v0.2/docs/how_to/)[​](#how-to-guides "Direct link to how-to-guides") ------------------------------------------------------------------------------------- [Here](/v0.2/docs/how_to/) you'll find short answers to “How do I….?” types of questions. These how-to guides don't cover topics in depth - you'll find that material in the [Tutorials](/v0.2/docs/tutorials) and the [API Reference](https://v02.api.js.langchain.com). However, these guides will help you quickly accomplish common tasks. [Conceptual Guide](/v0.2/docs/concepts)[​](#conceptual-guide "Direct link to conceptual-guide") ----------------------------------------------------------------------------------------------- Introductions to all the key parts of LangChain you'll need to know!
[Here](/v0.2/docs/concepts) you'll find high level explanations of all LangChain concepts. [API reference](https://api.js.langchain.com)[​](#api-reference "Direct link to api-reference") ----------------------------------------------------------------------------------------------- Head to the reference section for full documentation of all classes and methods in the LangChain JavaScript packages. Ecosystem[​](#ecosystem "Direct link to Ecosystem") --------------------------------------------------- ### [🦜🛠️ LangSmith](https://docs.smith.langchain.com)[​](#️-langsmith "Direct link to ️-langsmith") Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production. ### [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraphjs/)[​](#️-langgraph "Direct link to ️-langgraph") Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives. Additional resources[​](#additional-resources "Direct link to Additional resources") ------------------------------------------------------------------------------------ ### [Security](/v0.2/docs/security)[​](#security "Direct link to security") Read up on our [Security](/v0.2/docs/security) best practices to make sure you're developing safely with LangChain. ### [Integrations](/v0.2/docs/integrations/platforms/)[​](#integrations "Direct link to integrations") LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.2/docs/integrations/platforms/). ### [Contributing](/v0.2/docs/contributing)[​](#contributing "Direct link to contributing") Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
https://js.langchain.com/v0.2/docs/how_to/graph_mapping
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to map values to a database On this page How to map values to a database =============================== In this guide we’ll go over strategies to improve graph database query generation by mapping values from user inputs to the database. When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database. Therefore, we can introduce a new step in the graph database QA system to accurately map values. Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i langchain @langchain/community @langchain/openai neo4j-driver zod yarn add langchain @langchain/community @langchain/openai neo4j-driver zod pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We’ll use OpenAI in this example: OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database. NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password" The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors. import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery); Schema refreshed successfully. [] Detecting entities in the user input[​](#detecting-entities-in-the-user-input "Direct link to Detecting entities in the user input") ------------------------------------------------------------------------------------------------------------------------------------ We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database.
import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";import { z } from "zod";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const entitiesSchema = z .object({ names: z .array(z.string()) .describe("All the person or movies appearing in the text"), }) .describe("Identifying information about entities.");const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are extracting person and movies from the text."], [ "human", "Use the given format to extract information from the following\ninput: {question}", ],]);const entityChain = prompt.pipe(llm.withStructuredOutput(entitiesSchema)); We can test the entity extraction chain. const entities = await entityChain.invoke({ question: "Who played in Casino movie?",});entities; { names: [ "Casino" ] } We will utilize a simple `CONTAINS` clause to match entities to the database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings. const matchQuery = `MATCH (p:Person|Movie)WHERE p.name CONTAINS $value OR p.title CONTAINS $valueRETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS typeLIMIT 1`;const matchToDatabase = async (values) => { let result = ""; for (const entity of values.names) { const response = await graph.query(matchQuery, { value: entity, }); if (response.length > 0) { result += `${entity} maps to ${response[0]["result"]} ${response[0]["type"]} in database\n`; } } return result;};await matchToDatabase(entities); "Casino maps to Casino Movie in database\n" Custom Cypher generating chain[​](#custom-cypher-generating-chain "Direct link to Custom Cypher generating chain") ------------------------------------------------------------------------------------------------------------------ We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement. We will be using the LangChain expression language to accomplish that. import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";// Generate Cypher statement based on natural language inputconst cypherTemplate = `Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:{schema}Entities in the question map to the following database values:{entities_list}Question: {question}Cypher query:`;const cypherPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Given an input question, convert it to a Cypher query. No pre-amble.", ], ["human", cypherTemplate],]);const llmWithStop = llm.bind({ stop: ["\nCypherResult:"] });const cypherResponse = RunnableSequence.from([ RunnablePassthrough.assign({ names: entityChain }), RunnablePassthrough.assign({ entities_list: async (x) => matchToDatabase(x.names), schema: async (_) => graph.getSchema(), }), cypherPrompt, llmWithStop, new StringOutputParser(),]); const cypher = await cypherResponse.invoke({ question: "Who played in Casino movie?",});cypher; 'MATCH (:Movie {title: "Casino"})<-[:ACTED_IN]-(actor)\nRETURN actor.name'
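To close the loop, the generated statement can be executed against the same graph using the `graph.query` method we used earlier to load the data. A minimal sketch (the rows returned will depend on your data):

```typescript
// Execute the generated Cypher statement against the same Neo4j graph.
const answer = await graph.query(cypher);

// Each row contains the columns returned by the Cypher query,
// here the actor.name values for the movie Casino.
console.log(answer);
```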
https://js.langchain.com/v0.2/docs/how_to/graph_prompting
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to improve results with prompting On this page How to improve results with prompting ===================================== In this guide we’ll go over prompting strategies to improve graph database query generation. We’ll largely focus on methods for getting relevant database-specific information in your prompt. Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i langchain @langchain/community @langchain/openai neo4j-driver yarn add langchain @langchain/community @langchain/openai neo4j-driver pnpm add langchain @langchain/community @langchain/openai neo4j-driver #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We’ll use OpenAI in this example: OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database. NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password" The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors. const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD"); import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery); Schema refreshed successfully. [] Filtering graph schema ====================== At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements. 
Let’s say we are dealing with the following graph schema: await graph.refreshSchema();console.log(graph.getSchema()); Node properties are the following:Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}, Chunk {embedding: LIST, id: STRING, text: STRING}Relationship properties are the following:The relationships are the following:(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie) Few-shot examples[​](#few-shot-examples "Direct link to Few-shot examples") --------------------------------------------------------------------------- Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries. Let’s say we have the following examples: const examples = [ { question: "How many artists are there?", query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)", }, { question: "Which actors played in the movie Casino?", query: "MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name", }, { question: "How many movies has Tom Hanks acted in?", query: "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)", }, { question: "List all the genres of the movie Schindler's List", query: "MATCH (m:Movie {{title: 'Schindler\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name", }, { question: "Which actors have worked in movies from both the comedy and action genres?", query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name", }, { question: "Which directors have made movies with at least three different actors named 'John'?", query: "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name", }, { question: "Identify movies where directors also played a role in the film.", query: "MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name", }, { question: "Find the actor with the highest number of movies in the database.", query: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1", },]; We can create a few-shot prompt with them like so: import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";const examplePrompt = PromptTemplate.fromTemplate( "User input: {question}\nCypher query: {query}");const prompt = new FewShotPromptTemplate({ examples: examples.slice(0, 5), examplePrompt, prefix: "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.", suffix: "User input: {question}\nCypher query: ", inputVariables: ["question", "schema"],}); console.log( await prompt.format({ question: "How many artists are there?", schema: "foo", })); You are a Neo4j expert. 
Given an input question, create a syntactically correct Cypher query to run.Here is the schema informationfoo.Below are a number of examples of questions and their corresponding Cypher queries.User input: How many artists are there?Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)User input: Which actors played in the movie Casino?Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.nameUser input: How many movies has Tom Hanks acted in?Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)User input: List all the genres of the movie Schindler's ListCypher query: MATCH (m:Movie {title: 'Schindler\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.nameUser input: Which actors have worked in movies from both the comedy and action genres?Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.nameUser input: How many artists are there?Cypher query: Dynamic few-shot examples[​](#dynamic-few-shot-examples "Direct link to Dynamic few-shot examples") --------------------------------------------------------------------------------------------------- If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don’t fit in the model’s context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input. We can do just this using an ExampleSelector. In this case we’ll use a [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones: import { OpenAIEmbeddings } from "@langchain/openai";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( examples, new OpenAIEmbeddings(), Neo4jVectorStore, { k: 5, inputKeys: ["question"], preDeleteCollection: true, url, username, password, }); await exampleSelector.selectExamples({ question: "how many artists are there?",}); [ { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)", question: "How many artists are there?" }, { query: "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)", question: "How many movies has Tom Hanks acted in?" }, { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE"... 84 more characters, question: "Which actors have worked in movies from both the comedy and action genres?" }, { query: "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH"... 71 more characters, question: "Which directors have made movies with at least three different actors named 'John'?" }, { query: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DES"... 9 more characters, question: "Find the actor with the highest number of movies in the database." 
}] To use it, we can pass the ExampleSelector directly into our FewShotPromptTemplate: const prompt = new FewShotPromptTemplate({ exampleSelector, examplePrompt, prefix: "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.", suffix: "User input: {question}\nCypher query: ", inputVariables: ["question", "schema"],}); console.log( await prompt.format({ question: "how many artists are there?", schema: "foo", })); You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.Here is the schema informationfoo.Below are a number of examples of questions and their corresponding Cypher queries.User input: How many artists are there?Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)User input: How many movies has Tom Hanks acted in?Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)User input: Which actors have worked in movies from both the comedy and action genres?Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.nameUser input: Which directors have made movies with at least three different actors named 'John'?Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.nameUser input: Find the actor with the highest number of movies in the database.Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1User input: how many artists are there?Cypher query: import { ChatOpenAI } from "@langchain/openai";import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = GraphCypherQAChain.fromLLM({ graph, llm, cypherPrompt: prompt,}); await chain.invoke({ query: "How many actors are in the graph?",}); { result: "There are 967 actors in the graph." }
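Because the selector is backed by a vector store, you can also grow the example set at runtime with its `addExample` method; new examples become candidates for future selections. A small sketch (the question/Cypher pair below is our own illustration, not one of the examples above):

```typescript
// Add a new example to the underlying vector store.
await exampleSelector.addExample({
  question: "Which movies were released in 1995?",
  query: "MATCH (m:Movie) WHERE m.released.year = 1995 RETURN m.title",
});

// Subsequent selections will consider the new example as well.
await exampleSelector.selectExamples({
  question: "What movies came out in 1995?",
});
```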
https://js.langchain.com/v0.2/docs/how_to/streaming_llm
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to stream responses from an LLM On this page How to stream responses from an LLM =================================== All [`LLM`s](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html) implement the [Runnable interface](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html), which comes with **default** implementations of standard runnable methods (i.e. `invoke`, `batch`, `stream`, `streamEvents`). The **default** streaming implementations provide an `AsyncGenerator` that yields a single value: the final output from the underlying chat model provider. The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support. See which [integrations support token-by-token streaming here](/v0.2/docs/integrations/llms/). note The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface. Using `.stream()`[​](#using-stream "Direct link to using-stream") ----------------------------------------------------------------- The easiest way to stream is to use the `.stream()` method. This returns a readable stream that you can also iterate over: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";const model = new OpenAI({ maxTokens: 25,});const stream = await model.stream("Tell me a joke.");for await (const chunk of stream) { console.log(chunk);}/*Q: What did the fish say when it hit the wall?A: Dam!*/ #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` For models that do not support streaming, the entire response will be returned as a single chunk. Using a callback handler[​](#using-a-callback-handler "Direct link to Using a callback handler") ------------------------------------------------------------------------------------------------ You can also use a [`CallbackHandler`](https://v02.api.js.langchain.com/classes/langchain_core_callbacks_base.BaseCallbackHandler.html) like so: import { OpenAI } from "@langchain/openai";// To enable streaming, we pass in `streaming: true` to the LLM constructor.// Additionally, we pass in a handler for the `handleLLMNewToken` event.const model = new OpenAI({ maxTokens: 25, streaming: true,});const response = await model.invoke("Tell me a joke.", { callbacks: [ { handleLLMNewToken(token: string) { console.log({ token }); }, }, ],});console.log(response);/*{ token: '\n' }{ token: '\n' }{ token: 'Q' }{ token: ':' }{ token: ' Why' }{ token: ' did' }{ token: ' the' }{ token: ' chicken' }{ token: ' cross' }{ token: ' the' }{ token: ' playground' }{ token: '?' }{ token: '\n' }{ token: 'A' }{ token: ':' }{ token: ' To' }{ token: ' get' }{ token: ' to' }{ token: ' the' }{ token: ' other' }{ token: ' slide' }{ token: '.' }Q: Why did the chicken cross the playground?A: To get to the other slide.*/ #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` We still have access to the end `LLMResult` if using `generate`.
However, `tokenUsage` may not currently be supported for all model providers when streaming.
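As a minimal sketch of what that looks like, you can call `generate` with streaming enabled and still read the aggregated `LLMResult` afterwards (whether `llmOutput.tokenUsage` is populated depends on the provider):

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxTokens: 25, streaming: true });

const result = await model.generate(["Tell me a joke."], {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        // Each token is still streamed through the callback.
        console.log({ token });
      },
    },
  ],
});

// The full generations remain available once streaming completes.
console.log(result.generations[0][0].text);
console.log(result.llmOutput);
```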
null
https://js.langchain.com/v0.2/docs/how_to/lcel_cheatsheet
LangChain Expression Language Cheatsheet ======================================== This is a quick reference for all the most important LCEL primitives. For more advanced usage see the [LCEL how-to guides](/v0.2/docs/how_to/#langchain-expression-language-lcel) and the [full API reference](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html). ### Invoke a runnable[​](#invoke-a-runnable "Direct link to Invoke a runnable") #### [runnable.invoke()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#invoke)[​](#runnable.invoke "Direct link to runnable.invoke") import { RunnableLambda } from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: number) => x.toString());await runnable.invoke(5); "5" ### Batch a runnable[​](#batch-a-runnable "Direct link to Batch a runnable") #### [runnable.batch()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#batch)[​](#runnable.batch "Direct link to runnable.batch") import { RunnableLambda } from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: number) => x.toString());await runnable.batch([7, 8, 9]); [ "7", "8", "9" ] ### Stream a runnable[​](#stream-a-runnable "Direct link to Stream a runnable") #### [runnable.stream()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#stream)[​](#runnable.stream "Direct link to runnable.stream") import { RunnableLambda } from "@langchain/core/runnables";async function* generatorFn(x: number[]) { for (const i of x) { yield i.toString(); }}const runnable = RunnableLambda.from(generatorFn);const stream = await runnable.stream([0, 1, 2, 3, 4]);for await (const chunk of stream) { console.log(chunk); console.log("---");} 0---1---2---3---4--- ### Compose runnables[​](#compose-runnables "Direct link to Compose runnables") #### [runnable.pipe()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#pipe)[​](#runnable.pipe "Direct link to runnable.pipe") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: any) => { return { foo: x };});const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));const chain = runnable1.pipe(runnable2);await chain.invoke(2); [ { foo: 2 }, { foo: 2 } ] #### [RunnableSequence.from()](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html#from)[​](#runnablesequence.from "Direct link to runnablesequence.from") import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: any) => { return { foo: x };});const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));const chain = RunnableSequence.from([runnable1, runnable2]);await chain.invoke(2); [ { foo: 2 }, { foo: 2 } ] ### Invoke runnables in parallel[​](#invoke-runnables-in-parallel "Direct link to Invoke runnables in parallel") #### [RunnableParallel](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableParallel.html)[​](#runnableparallel "Direct link to runnableparallel") import { RunnableLambda, RunnableParallel } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: any) => { return { foo: x };});const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));const chain = RunnableParallel.from({ first: runnable1, second: runnable2,});await chain.invoke(2); { first: { foo: 2 }, second: [ 2, 2 ] } ### 
Turn a function into a runnable[​](#turn-a-function-into-a-runnable "Direct link to Turn a function into a runnable") #### [RunnableLambda](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableLambda.html)[​](#runnablelambda "Direct link to runnablelambda") import { RunnableLambda } from "@langchain/core/runnables";const adder = (x: number) => { return x + 5;};const runnable = RunnableLambda.from(adder);await runnable.invoke(5); 10 ### Merge input and output dicts[​](#merge-input-and-output-dicts "Direct link to Merge input and output dicts") #### [RunnablePassthrough.assign()](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html#assign)[​](#runnablepassthrough.assign "Direct link to runnablepassthrough.assign") import { RunnableLambda, RunnablePassthrough } from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: { foo: number }) => { return x.foo + 7;});const chain = RunnablePassthrough.assign({ bar: runnable,});await chain.invoke({ foo: 10 }); { foo: 10, bar: 17 } ### Include input dict in output dict[​](#include-input-dict-in-output-dict "Direct link to Include input dict in output dict") #### [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html)[​](#runnablepassthrough "Direct link to runnablepassthrough") import { RunnableLambda, RunnableParallel, RunnablePassthrough,} from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: { foo: number }) => { return x.foo + 7;});const chain = RunnableParallel.from({ bar: runnable, baz: new RunnablePassthrough(),});await chain.invoke({ foo: 10 }); { baz: { foo: 10 }, bar: 17 } ### Add default invocation args[​](#add-default-invocation-args "Direct link to Add default invocation args") #### [runnable.bind()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#bind)[​](#runnable.bind "Direct link to runnable.bind") import { type RunnableConfig, RunnableLambda } from "@langchain/core/runnables";const branchedFn = (mainArg: Record<string, any>, config?: RunnableConfig) => { if (config?.configurable?.boundKey !== undefined) { return { ...mainArg, boundKey: config?.configurable?.boundKey }; } return mainArg;};const runnable = RunnableLambda.from(branchedFn);const boundRunnable = runnable.bind({ configurable: { boundKey: "goodbye!" } });await boundRunnable.invoke({ bar: "hello" }); { bar: "hello", boundKey: "goodbye!" 
} ### Add fallbacks[​](#add-fallbacks "Direct link to Add fallbacks") #### [runnable.withFallbacks()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#withFallbacks)[​](#runnable.withfallbacks "Direct link to runnable.withfallbacks") import { RunnableLambda } from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: any) => { throw new Error("Error case");});const fallback = RunnableLambda.from((x: any) => x + x);const chain = runnable.withFallbacks({ fallbacks: [fallback] });await chain.invoke("foo"); "foofoo" ### Add retries[​](#add-retries "Direct link to Add retries") #### [runnable.withRetry()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#withRetry)[​](#runnable.withretry "Direct link to runnable.withretry") import { RunnableLambda } from "@langchain/core/runnables";let counter = 0;const retryFn = (_: any) => { counter++; console.log(`attempt with counter ${counter}`); throw new Error("Expected error");};const chain = RunnableLambda.from(retryFn).withRetry({ stopAfterAttempt: 2,});await chain.invoke(2); attempt with counter 1attempt with counter 2 Error: Expected error ### Configure runnable execution[​](#configure-runnable-execution "Direct link to Configure runnable execution") #### [RunnableConfig](https://api.js.langchain.com/interfaces/langchain_core_runnables.RunnableConfig.html)[​](#runnableconfig "Direct link to runnableconfig") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from(async (x: any) => { await new Promise((resolve) => setTimeout(resolve, 2000)); return { foo: x };});// Takes 4 secondsawait runnable1.batch([1, 2, 3], { maxConcurrency: 2 }); [ { foo: 1 }, { foo: 2 }, { foo: 3 } ] ### Add default config to runnable[​](#add-default-config-to-runnable "Direct link to Add default config to runnable") #### [runnable.withConfig()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#withConfig)[​](#runnable.withconfig "Direct link to runnable.withconfig") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from(async (x: any) => { await new Promise((resolve) => setTimeout(resolve, 2000)); return { foo: x };}).withConfig({ maxConcurrency: 2,});// Takes 4 secondsawait runnable1.batch([1, 2, 3]); [ { foo: 1 }, { foo: 2 }, { foo: 3 } ] ### Build a chain dynamically based on input[​](#build-a-chain-dynamically-based-on-input "Direct link to Build a chain dynamically based on input") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: any) => { return { foo: x };});const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));const chain = RunnableLambda.from((x: number): any => { if (x > 6) { return runnable1; } return runnable2;});await chain.invoke(7); { foo: 7 } await chain.invoke(5); [ 5, 5 ] ### Generate a stream of internal events[​](#generate-a-stream-of-internal-events "Direct link to Generate a stream of internal events") #### [runnable.streamEvents()](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#streamEvents)[​](#runnable.streamevents "Direct link to runnable.streamevents") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: number) => { return { foo: x, };}).withConfig({ runName: "first",});async function* generatorFn(x: { foo: number }) { for (let i = 0; i < x.foo; i++) { yield i.toString(); }}const runnable2 = 
RunnableLambda.from(generatorFn).withConfig({ runName: "second",});const chain = runnable1.pipe(runnable2);for await (const event of chain.streamEvents(2, { version: "v1" })) { console.log( `event=${event.event} | name=${event.name} | data=${JSON.stringify( event.data )}` );} event=on_chain_start | name=RunnableSequence | data={"input":2}event=on_chain_start | name=first | data={}event=on_chain_stream | name=first | data={"chunk":{"foo":2}}event=on_chain_start | name=second | data={}event=on_chain_end | name=first | data={"input":2,"output":{"foo":2}}event=on_chain_stream | name=second | data={"chunk":"0"}event=on_chain_stream | name=RunnableSequence | data={"chunk":"0"}event=on_chain_stream | name=second | data={"chunk":"1"}event=on_chain_stream | name=RunnableSequence | data={"chunk":"1"}event=on_chain_end | name=second | data={"output":"01"}event=on_chain_end | name=RunnableSequence | data={"output":"01"} ### Return a subset of keys from output object[​](#return-a-subset-of-keys-from-output-object "Direct link to Return a subset of keys from output object") #### [runnable.pick()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#pick)[​](#runnable.pick "Direct link to runnable.pick") import { RunnableLambda, RunnablePassthrough } from "@langchain/core/runnables";const runnable = RunnableLambda.from((x: { baz: number }) => { return x.baz + 5;});const chain = RunnablePassthrough.assign({ foo: runnable,}).pick(["foo", "bar"]);await chain.invoke({ bar: "hi", baz: 2 }); { foo: 7, bar: "hi" } ### Declaratively make a batched version of a runnable[​](#declaratively-make-a-batched-version-of-a-runnable "Direct link to Declaratively make a batched version of a runnable") #### [`runnable.map()`](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#map)[​](#runnable.map "Direct link to runnable.map") import { RunnableLambda } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: number) => [...Array(x).keys()]);const runnable2 = RunnableLambda.from((x: number) => x + 5);const chain = runnable1.pipe(runnable2.map());await chain.invoke(3); [ 5, 6, 7 ] ### Get a graph representation of a runnable[​](#get-a-graph-representation-of-a-runnable "Direct link to Get a graph representation of a runnable") #### [runnable.getGraph()](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#getGraph)[​](#runnable.getgraph "Direct link to runnable.getgraph") import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";const runnable1 = RunnableLambda.from((x: any) => { return { foo: x };});const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));const runnable3 = RunnableLambda.from((x: any) => x.toString());const chain = RunnableSequence.from([ runnable1, { second: runnable2, third: runnable3, },]);await chain.getGraph(); Graph { nodes: { "935c67df-7ae3-4853-9d26-579003c08407": { id: "935c67df-7ae3-4853-9d26-579003c08407", data: { name: "RunnableLambdaInput", schema: ZodAny { spa: [Function: bound safeParseAsync] AsyncFunction, _def: [Object], parse: [Function: bound parse], safeParse: [Function: bound safeParse], parseAsync: [Function: bound parseAsync] AsyncFunction, safeParseAsync: [Function: bound safeParseAsync] AsyncFunction, refine: [Function: bound refine], refinement: [Function: bound refinement], superRefine: [Function: bound superRefine], optional: [Function: bound optional], nullable: [Function: bound nullable], nullish: [Function: bound nullish], array: [Function: bound array], 
promise: [Function: bound promise], or: [Function: bound or], and: [Function: bound and], transform: [Function: bound transform], brand: [Function: bound brand], default: [Function: bound default], catch: [Function: bound catch], describe: [Function: bound describe], pipe: [Function: bound pipe], readonly: [Function: bound readonly], isNullable: [Function: bound isNullable], isOptional: [Function: bound isOptional], _any: true } } }, "a73d7b3e-0ed7-46cf-b141-de64ea1e12de": { id: "a73d7b3e-0ed7-46cf-b141-de64ea1e12de", data: RunnableLambda { lc_serializable: false, lc_kwargs: { func: [Function (anonymous)] }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "runnables" ], func: [Function (anonymous)] } }, "ff104b34-c13b-4677-8b82-af70d3548e12": { id: "ff104b34-c13b-4677-8b82-af70d3548e12", data: RunnableMap { lc_serializable: true, lc_kwargs: { steps: [Object] }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "runnables" ], steps: { second: [RunnableLambda], third: [RunnableLambda] } } }, "2dc627dc-1c06-45b1-b14f-bb1f6e689f83": { id: "2dc627dc-1c06-45b1-b14f-bb1f6e689f83", data: { name: "RunnableMapOutput", schema: ZodAny { spa: [Function: bound safeParseAsync] AsyncFunction, _def: [Object], parse: [Function: bound parse], safeParse: [Function: bound safeParse], parseAsync: [Function: bound parseAsync] AsyncFunction, safeParseAsync: [Function: bound safeParseAsync] AsyncFunction, refine: [Function: bound refine], refinement: [Function: bound refinement], superRefine: [Function: bound superRefine], optional: [Function: bound optional], nullable: [Function: bound nullable], nullish: [Function: bound nullish], array: [Function: bound array], promise: [Function: bound promise], or: [Function: bound or], and: [Function: bound and], transform: [Function: bound transform], brand: [Function: bound brand], default: [Function: bound default], catch: [Function: bound catch], describe: [Function: bound describe], pipe: [Function: bound pipe], readonly: [Function: bound readonly], isNullable: [Function: bound isNullable], isOptional: [Function: bound isOptional], _any: true } } } }, edges: [ { source: "935c67df-7ae3-4853-9d26-579003c08407", target: "a73d7b3e-0ed7-46cf-b141-de64ea1e12de", data: undefined }, { source: "ff104b34-c13b-4677-8b82-af70d3548e12", target: "2dc627dc-1c06-45b1-b14f-bb1f6e689f83", data: undefined }, { source: "a73d7b3e-0ed7-46cf-b141-de64ea1e12de", target: "ff104b34-c13b-4677-8b82-af70d3548e12", data: undefined } ]}
null
https://js.langchain.com/v0.2/docs/how_to/chat_model_caching
How to cache chat model responses ================================= Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) * [LLMs](/v0.2/docs/concepts/#llms) LangChain provides an optional caching layer for chat models. This is useful for two reasons: It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. It can speed up your application by reducing the number of API calls you make to the LLM provider. import { ChatOpenAI } from "@langchain/openai";// To make the caching really obvious, let's use a slower model.const model = new ChatOpenAI({ model: "gpt-4", cache: true,}); In Memory Cache[​](#in-memory-cache "Direct link to In Memory Cache") --------------------------------------------------------------------- The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared. console.time();// The first time, it is not yet in cache, so it should take longerconst res = await model.invoke("Tell me a joke!");console.log(res);console.timeEnd();/* AIMessage { lc_serializable: true, lc_kwargs: { content: "Why don't scientists trust atoms?\n\nBecause they make up everything!", additional_kwargs: { function_call: undefined, tool_calls: undefined } }, lc_namespace: [ 'langchain_core', 'messages' ], content: "Why don't scientists trust atoms?\n\nBecause they make up everything!", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined } } default: 2.224s*/ console.time();// The second time it is, so it goes fasterconst res2 = await model.invoke("Tell me a joke!");console.log(res2);console.timeEnd();/* AIMessage { lc_serializable: true, lc_kwargs: { content: "Why don't scientists trust atoms?\n\nBecause they make up everything!", additional_kwargs: { function_call: undefined, tool_calls: undefined } }, lc_namespace: [ 'langchain_core', 'messages' ], content: "Why don't scientists trust atoms?\n\nBecause they make up everything!", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined } } default: 181.98ms*/ Caching with Redis[​](#caching-with-redis "Direct link to Caching with Redis") ------------------------------------------------------------------------------ LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package along with `@langchain/community`: * npm * Yarn * pnpm npm install ioredis @langchain/community yarn add ioredis @langchain/community pnpm add ioredis @langchain/community Then, you can pass a `cache` option when you instantiate the LLM. For example: import { ChatOpenAI } from "@langchain/openai";import { Redis } from "ioredis";import { RedisCache } from "@langchain/community/caches/ioredis";const client = new Redis("redis://localhost:6379");const cache = new RedisCache(client, { ttl: 60, // Optional key expiration value});const model = new ChatOpenAI({ cache });const response1 = await model.invoke("Do something random!");console.log(response1);/* AIMessage { content: "Sure! I'll generate a random number for you: 37", additional_kwargs: {} }*/const response2 = await model.invoke("Do something random!");console.log(response2);/* AIMessage { content: "Sure! 
I'll generate a random number for you: 37", additional_kwargs: {} }*/await client.disconnect(); #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [RedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_ioredis.RedisCache.html) from `@langchain/community/caches/ioredis` Caching on the File System[​](#caching-on-the-file-system "Direct link to Caching on the File System") ------------------------------------------------------------------------------------------------------ danger This cache is not recommended for production use. It is only intended for local development. LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want (a fuller sketch follows below). import { LocalFileCache } from "langchain/cache/file_system";const cache = await LocalFileCache.create(); Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to cache model responses to save time and money. Next, check out the other how-to guides on chat models, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to create your own custom chat model](/v0.2/docs/how_to/custom_chat).
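Here is that fuller sketch: a hedged example of wiring the file system cache into a chat model with a custom directory. The directory path is an illustrative assumption, not a required value:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { LocalFileCache } from "langchain/cache/file_system";

// Assumption: `create` accepts an optional directory path and creates
// the directory if it does not already exist.
const cache = await LocalFileCache.create("./.langchain-cache");

// Pass the cache instance to the model just like the in-memory cache.
const model = new ChatOpenAI({ model: "gpt-4", cache });

const response = await model.invoke("Tell me a joke!");
console.log(response.content);
```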
null
https://js.langchain.com/v0.2/docs/how_to/custom_llm
How to create a custom LLM class ================================ Prerequisites This guide assumes familiarity with the following concepts: * [LLMs](/v0.2/docs/concepts/#llms) This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is directly supported in LangChain. There are a few required things that a custom LLM needs to implement after extending the [`LLM` class](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.LLM.html): * A `_call` method that takes in a string and call options (which includes things like `stop` sequences), and returns a string. * A `_llmType` method that returns a string. Used for logging purposes only. You can also implement the following optional method: * A `_streamResponseChunks` method that returns an `AsyncIterator` and yields [`GenerationChunks`](https://v02.api.js.langchain.com/classes/langchain_core_outputs.GenerationChunk.html). This allows the LLM to support streaming outputs. Let’s implement a very simple custom LLM that just echoes back the first `n` characters of the input. import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";import { GenerationChunk } from "@langchain/core/outputs";export interface CustomLLMInput extends BaseLLMParams { n: number;}export class CustomLLM extends LLM { n: number; constructor(fields: CustomLLMInput) { super(fields); this.n = fields.n; } _llmType() { return "custom"; } async _call( prompt: string, options: this["ParsedCallOptions"], runManager: CallbackManagerForLLMRun ): Promise<string> { // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); return prompt.slice(0, this.n); } async *_streamResponseChunks( prompt: string, options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): AsyncGenerator<GenerationChunk> { // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); for (const letter of prompt.slice(0, this.n)) { yield new GenerationChunk({ text: letter, }); // Trigger the appropriate callback await runManager?.handleLLMNewToken(letter); } }} We can now use this as any other LLM: const llm = new CustomLLM({ n: 4 });await llm.invoke("I am an LLM"); I am And it supports streaming: const stream = await llm.stream("I am an LLM");for await (const chunk of stream) { console.log(chunk);} Iam Richer outputs[​](#richer-outputs "Direct link to Richer outputs") ------------------------------------------------------------------ If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseLLM`](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html) class and implement the lower level `_generate` method. Rather than taking a single string as input and returning a single string output, it can take multiple input strings and map each to multiple string outputs. Additionally, it returns a `Generation` output with fields for additional metadata rather than just a string. 
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";import { LLMResult } from "@langchain/core/outputs";import { BaseLLM, BaseLLMCallOptions, BaseLLMParams,} from "@langchain/core/language_models/llms";export interface AdvancedCustomLLMCallOptions extends BaseLLMCallOptions {}export interface AdvancedCustomLLMParams extends BaseLLMParams { n: number;}export class AdvancedCustomLLM extends BaseLLM<AdvancedCustomLLMCallOptions> { n: number; constructor(fields: AdvancedCustomLLMParams) { super(fields); this.n = fields.n; } _llmType() { return "advanced_custom_llm"; } async _generate( inputs: string[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): Promise<LLMResult> { const outputs = inputs.map((input) => input.slice(0, this.n)); // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); // One input could generate multiple outputs. const generations = outputs.map((output) => [ { text: output, // Optional additional metadata for the generation generationInfo: { outputCount: 1 }, }, ]); const tokenUsage = { usedTokens: this.n, }; return { generations, llmOutput: { tokenUsage }, }; }} This will pass the additional returned information in callback events and in the `streamEvents` method: const llm = new AdvancedCustomLLM({ n: 4 });const eventStream = await llm.streamEvents("I am an LLM", { version: "v1",});for await (const event of eventStream) { if (event.event === "on_llm_end") { console.log(JSON.stringify(event, null, 2)); }} { "event": "on_llm_end", "name": "AdvancedCustomLLM", "run_id": "a883a705-c651-4236-8095-cb515e2d4885", "tags": [], "metadata": {}, "data": { "output": { "generations": [ [ { "text": "I am", "generationInfo": { "outputCount": 1 } } ] ], "llmOutput": { "tokenUsage": { "usedTokens": 4 } } } }}
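As a supplementary illustration, you could also surface the richer metadata by calling `generate` on the class defined above and inspecting the returned `LLMResult` directly. A minimal sketch (the input strings are arbitrary):

```typescript
const llm = new AdvancedCustomLLM({ n: 4 });

// `generate` maps each input string to its own list of generations.
const result = await llm.generate(["I am an LLM", "You are an LLM"]);

console.log(result.generations.map((gens) => gens[0].text));
// [ "I am", "You " ]

// The `llmOutput` metadata returned by `_generate` is surfaced here too.
console.log(result.llmOutput);
// { tokenUsage: { usedTokens: 4 } }
```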
null
https://js.langchain.com/v0.2/docs/how_to/logprobs
How to get log probabilities ============================ Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) Certain chat models can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain. OpenAI[​](#openai "Direct link to OpenAI") ------------------------------------------ Install the `@langchain/openai` package and set your API key: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai For the OpenAI API to return log probabilities, we need to set the `logprobs` param to `true`. Then, the logprobs are included on each output [`AIMessage`](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) as part of the `response_metadata`: import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4o", logprobs: true,});const responseMessage = await model.invoke("how are you today?");responseMessage.response_metadata.logprobs.content.slice(0, 5); [ { token: "Thank", logprob: -0.70174205, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }, { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }, { token: " asking", logprob: -0.0000013856493, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] }, { token: "!", logprob: -0.00030102333, bytes: [ 33 ], top_logprobs: [] }] And are part of streamed Message chunks as well: let count = 0;const stream = await model.stream("How are you today?");let aggregateResponse;for await (const chunk of stream) { if (count > 5) { break; } if (aggregateResponse === undefined) { aggregateResponse = chunk; } else { aggregateResponse = aggregateResponse.concat(chunk); } console.log(aggregateResponse.response_metadata.logprobs?.content); count++;} [][ { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }][ { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }][ { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }, { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }][ { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }, { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }, { token: " asking", logprob: -0.0000029352968, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] }][ { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }, { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }, { token: " asking", logprob: -0.0000029352968, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] }, { 
token: "!", logprob: -0.00039694557, bytes: [ 33 ], top_logprobs: [] }] `topLogprobs`[​](#toplogprobs "Direct link to toplogprobs") ----------------------------------------------------------- To see alternate potential generations at each step, you can use the `topLogprobs` parameter: const model = new ChatOpenAI({ model: "gpt-4o", logprobs: true, topLogprobs: 3,});const responseMessage = await model.invoke("how are you today?");responseMessage.response_metadata.logprobs.content.slice(0, 5); [ { token: "I'm", logprob: -2.2864406, bytes: [ 73, 39, 109 ], top_logprobs: [ { token: "Thank", logprob: -0.28644064, bytes: [ 84, 104, 97, 110, 107 ] }, { token: "Hello", logprob: -2.0364406, bytes: [ 72, 101, 108, 108, 111 ] }, { token: "I'm", logprob: -2.2864406, bytes: [ 73, 39, 109 ] } ] }, { token: " just", logprob: -0.14442946, bytes: [ 32, 106, 117, 115, 116 ], top_logprobs: [ { token: " just", logprob: -0.14442946, bytes: [ 32, 106, 117, 115, 116 ] }, { token: " an", logprob: -2.2694294, bytes: [ 32, 97, 110 ] }, { token: " here", logprob: -4.0194297, bytes: [ 32, 104, 101, 114, 101 ] } ] }, { token: " a", logprob: -0.00066632946, bytes: [ 32, 97 ], top_logprobs: [ { token: " a", logprob: -0.00066632946, bytes: [ 32, 97 ] }, { token: " lines", logprob: -7.750666, bytes: [ 32, 108, 105, 110, 101, 115 ] }, { token: " an", logprob: -9.250667, bytes: [ 32, 97, 110 ] } ] }, { token: " computer", logprob: -0.015423919, bytes: [ 32, 99, 111, 109, 112, 117, 116, 101, 114 ], top_logprobs: [ { token: " computer", logprob: -0.015423919, bytes: [ 32, 99, 111, 109, 112, 117, 116, 101, 114 ] }, { token: " program", logprob: -5.265424, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ] }, { token: " machine", logprob: -5.390424, bytes: [ 32, 109, 97, 99, 104, 105, 110, 101 ] } ] }, { token: " program", logprob: -0.0010724656, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ], top_logprobs: [ { token: " program", logprob: -0.0010724656, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ] }, { token: "-based", logprob: -6.8760724, bytes: [ 45, 98, 97, 115, 101, 100 ] }, { token: " algorithm", logprob: -10.626073, bytes: [ 32, 97, 108, 103, 111, 114, 105, 116, 104, 109 ] } ] }] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to get logprobs from OpenAI models in LangChain. Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to track token usage](/v0.2/docs/how_to/chat_token_usage_tracking). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous LangChain Expression Language Cheatsheet ](/v0.2/docs/how_to/lcel_cheatsheet)[ Next How to merge consecutive messages of the same type ](/v0.2/docs/how_to/merge_message_runs) * [OpenAI](#openai) * [`topLogprobs`](#toplogprobs) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/how_to/llm_caching
How to cache model responses ============================ LangChain provides an optional caching layer for LLMs. This is useful for two reasons: It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. It can speed up your application by reducing the number of API calls you make to the LLM provider. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";const model = new OpenAI({ model: "gpt-3.5-turbo-instruct", cache: true,}); In Memory Cache[​](#in-memory-cache "Direct link to In Memory Cache") --------------------------------------------------------------------- The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared. console.time();// The first time, it is not yet in cache, so it should take longerconst res = await model.invoke("Tell me a long joke");console.log(res);console.timeEnd();/* A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it. The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar." Intrigued, the man asks what the tasks are. The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one." The man thinks for a moment and then confidently says, "I'll do it." He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand. The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight. The bartender hands the man the jar of money and asks, "How default: 4.187s*/ console.time();// The second time it is, so it goes fasterconst res2 = await model.invoke("Tell me a long joke");console.log(res2);console.timeEnd();/* A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it. The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar." Intrigued, the man asks what the tasks are. The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one." The man thinks for a moment and then confidently says, "I'll do it." He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand. The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight. 
The bartender hands the man the jar of money and asks, "How default: 175.74ms*/ Caching with Momento[​](#caching-with-momento "Direct link to Caching with Momento") ------------------------------------------------------------------------------------ LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package. To install for **Node.js**: * npm * Yarn * pnpm npm install @gomomento/sdk yarn add @gomomento/sdk pnpm add @gomomento/sdk To install for **browser/edge workers**: * npm * Yarn * pnpm npm install @gomomento/sdk-web yarn add @gomomento/sdk-web pnpm add @gomomento/sdk-web Next you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM like this: import { OpenAI } from "@langchain/openai";import { CacheClient, Configurations, CredentialProvider,} from "@gomomento/sdk";import { MomentoCache } from "@langchain/community/caches/momento";// See https://github.com/momentohq/client-sdk-javascript for connection optionsconst client = new CacheClient({ configuration: Configurations.Laptop.v1(), credentialProvider: CredentialProvider.fromEnvironmentVariable({ environmentVariableName: "MOMENTO_API_KEY", }), defaultTtlSeconds: 60 * 60 * 24,});const cache = await MomentoCache.fromProps({ client, cacheName: "langchain",});const model = new OpenAI({ cache }); #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [MomentoCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_momento.MomentoCache.html) from `@langchain/community/caches/momento` Caching with Redis[​](#caching-with-redis "Direct link to Caching with Redis") ------------------------------------------------------------------------------ LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package: * npm * Yarn * pnpm npm install ioredis yarn add ioredis pnpm add ioredis Then, you can pass a `cache` option when you instantiate the LLM. For example: import { OpenAI } from "@langchain/openai";import { RedisCache } from "@langchain/community/caches/ioredis";import { Redis } from "ioredis";// See https://github.com/redis/ioredis for connection optionsconst client = new Redis({});const cache = new RedisCache(client);const model = new OpenAI({ cache }); Caching with Upstash Redis[​](#caching-with-upstash-redis "Direct link to Caching with Upstash Redis") ------------------------------------------------------------------------------------------------------ LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the `@upstash/redis` package: * npm * Yarn * pnpm npm install @upstash/redis yarn add @upstash/redis pnpm add @upstash/redis You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token. Then, you can pass a `cache` option when you instantiate the LLM. 
For example: import { OpenAI } from "@langchain/openai";import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection optionsconst cache = new UpstashRedisCache({ config: { url: "UPSTASH_REDIS_REST_URL", token: "UPSTASH_REDIS_REST_TOKEN", },});const model = new OpenAI({ cache }); #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [UpstashRedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis` You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance: import { Redis } from "@upstash/redis";import https from "https";import { OpenAI } from "@langchain/openai";import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";// const client = new Redis({// url: process.env.UPSTASH_REDIS_REST_URL!,// token: process.env.UPSTASH_REDIS_REST_TOKEN!,// agent: new https.Agent({ keepAlive: true }),// });// Or simply call Redis.fromEnv() to automatically load the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables.const client = Redis.fromEnv({ agent: new https.Agent({ keepAlive: true }),});const cache = new UpstashRedisCache({ client });const model = new OpenAI({ cache }); #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [UpstashRedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis` Caching with Cloudflare KV[​](#caching-with-cloudflare-kv "Direct link to Caching with Cloudflare KV") ------------------------------------------------------------------------------------------------------ info This integration is only supported in Cloudflare Workers. If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache. For information on how to set up KV in Cloudflare, see [the official documentation](https://developers.cloudflare.com/kv/). 
**Note:** If you are using TypeScript, you may need to install types if they aren't already present: * npm * Yarn * pnpm npm install -S @cloudflare/workers-types yarn add @cloudflare/workers-types pnpm add @cloudflare/workers-types import type { KVNamespace } from "@cloudflare/workers-types";import { OpenAI } from "@langchain/openai";import { CloudflareKVCache } from "@langchain/cloudflare";export interface Env { KV_NAMESPACE: KVNamespace; OPENAI_API_KEY: string;}export default { async fetch(_request: Request, env: Env) { try { const cache = new CloudflareKVCache(env.KV_NAMESPACE); const model = new OpenAI({ cache, model: "gpt-3.5-turbo-instruct", apiKey: env.OPENAI_API_KEY, }); const response = await model.invoke("How are you today?"); return new Response(JSON.stringify(response), { headers: { "content-type": "application/json" }, }); } catch (err: any) { console.log(err.message); return new Response(err.message, { status: 500 }); } },}; #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [CloudflareKVCache](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareKVCache.html) from `@langchain/cloudflare` Caching on the File System[​](#caching-on-the-file-system "Direct link to Caching on the File System") ------------------------------------------------------------------------------------------------------ danger This cache is not recommended for production use. It is only intended for local development. LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want. import { LocalFileCache } from "langchain/cache/file_system";const cache = await LocalFileCache.create(); Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to cache model responses to save time and money. Next, check out the other how-to guides on LLMs, like [how to create your own custom LLM class](/v0.2/docs/how_to/custom_llm).
null
https://js.langchain.com/v0.2/docs/how_to/indexing
How to reindex data to keep your vectorstore in-sync with the underlying data source ==================================================================================== Prerequisites This guide assumes familiarity with the following concepts: * [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag/) * [Vector stores](/v0.2/docs/concepts/#vectorstores) Here, we will look at a basic indexing workflow using the LangChain indexing API. The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps: * Avoid writing duplicated content into the vector store * Avoid re-writing unchanged content * Avoid re-computing embeddings over unchanged content All of which should save you time and money, as well as improve your vector search results. Crucially, the indexing API will work even with documents that have gone through several transformation steps (e.g., via text chunking) with respect to the original source documents. How it works[​](#how-it-works "Direct link to How it works") ------------------------------------------------------------ LangChain indexing makes use of a record manager (`RecordManager`) that keeps track of document writes into the vector store. When indexing content, hashes are computed for each document, and the following information is stored in the record manager: * the document hash (hash of both page content and metadata) * write time * the source ID - each document should include information in its metadata to allow us to determine the ultimate source of this document Deletion Modes[​](#deletion-modes "Direct link to Deletion Modes") ------------------------------------------------------------------ When indexing documents into a vector store, it's possible that some existing documents in the vector store should be deleted. In certain situations you may want to remove any existing documents that are derived from the same sources as the new documents being indexed. In others you may want to delete all existing documents wholesale. The indexing API deletion modes let you pick the behavior you want:

| Cleanup Mode | De-Duplicates Content | Parallelizable | Cleans Up Deleted Source Docs | Cleans Up Mutations of Source Docs and/or Derived Docs | Clean Up Timing |
| --- | --- | --- | --- | --- | --- |
| None | ✅ | ✅ | ❌ | ❌ | - |
| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously |
| Full | ✅ | ❌ | ✅ | ✅ | At end of indexing |

`None` does not do any automatic clean up, allowing the user to manually do clean up of old content. `incremental` and `full` offer the following automated clean up: * If the content of the source document or derived documents has changed, both the `incremental` and `full` modes will clean up (delete) previous versions of the content. * If the source document has been deleted (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not. When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted. * `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes. * `full` mode does the clean up after all batches have been written. 
Requirements[​](#requirements "Direct link to Requirements") ------------------------------------------------------------ 1. Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously. 2. Only works with LangChain vector stores that support: (a) document addition by ID (the `addDocuments` method with an `ids` argument) and (b) deletion by ID (the `delete` method with an `ids` argument). Compatible Vectorstores: [`PGVector`](/v0.2/docs/integrations/vectorstores/pgvector), [`Chroma`](/v0.2/docs/integrations/vectorstores/chroma), [`CloudflareVectorize`](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize), [`ElasticVectorSearch`](/v0.2/docs/integrations/vectorstores/elasticsearch), [`FAISS`](/v0.2/docs/integrations/vectorstores/faiss), [`MomentoVectorIndex`](/v0.2/docs/integrations/vectorstores/momento_vector_index), [`Pinecone`](/v0.2/docs/integrations/vectorstores/pinecone), [`SupabaseVectorStore`](/v0.2/docs/integrations/vectorstores/supabase), [`VercelPostgresVectorStore`](/v0.2/docs/integrations/vectorstores/vercel_postgres), [`Weaviate`](/v0.2/docs/integrations/vectorstores/weaviate), [`Xata`](/v0.2/docs/integrations/vectorstores/xata) Caution[​](#caution "Direct link to Caution") --------------------------------------------- The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes). If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content. This is unlikely to be an issue in actual settings for the following reasons: 1. The `RecordManager` uses higher resolution timestamps. 2. The data would need to change between the first and second task runs, which becomes unlikely if the time interval between the tasks is small. 3. Indexing tasks typically take more than a few ms. 
Quickstart[​](#quickstart "Direct link to Quickstart") ------------------------------------------------------ import { PostgresRecordManager } from "@langchain/community/indexes/postgres";import { index } from "langchain/indexes";import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";import { PoolConfig } from "pg";import { OpenAIEmbeddings } from "@langchain/openai";import { CharacterTextSplitter } from "@langchain/textsplitters";import { BaseDocumentLoader } from "@langchain/core/document_loaders/base";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvectorconst config = { postgresConnectionOptions: { type: "postgres", host: "127.0.0.1", port: 5432, user: "myuser", password: "ChangeMe", database: "api", } as PoolConfig, tableName: "testlangchain", columns: { idColumnName: "id", vectorColumnName: "vector", contentColumnName: "content", metadataColumnName: "metadata", },};const vectorStore = await PGVectorStore.initialize( new OpenAIEmbeddings(), config);// Create a new record managerconst recordManagerConfig = { postgresConnectionOptions: { type: "postgres", host: "127.0.0.1", port: 5432, user: "myuser", password: "ChangeMe", database: "api", } as PoolConfig, tableName: "upsertion_records",};const recordManager = new PostgresRecordManager( "test_namespace", recordManagerConfig);// Create the schema if it doesn't existawait recordManager.createSchema();// Index some documentsconst doc1 = { pageContent: "kitty", metadata: { source: "kitty.txt" },};const doc2 = { pageContent: "doggy", metadata: { source: "doggy.txt" },};/** * Hacky helper method to clear content. See the `full` mode section to understand why it works. */async function clear() { await index({ docsSource: [], recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, });}// No cleanupawait clear();// This mode does not do automatic clean up of old versions of content; however, it still takes care of content de-duplication.console.log( await index({ docsSource: [doc1, doc1, doc1, doc1, doc1, doc1], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/await clear();console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// Second time around all content will be skippedconsole.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 2, }*/// Updated content will be added, but old won't be deletedconst doc1Updated = { pageContent: "kitty updated", metadata: { source: "kitty.txt" },};console.log( await index({ docsSource: [doc1Updated, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 0, numSkipped: 1, }*//*Resulting records in the database: [ { pageContent: "kitty", metadata: { source: "kitty.txt" }, }, { pageContent: "doggy", metadata: { source: "doggy.txt" }, }, { pageContent: "kitty updated", metadata: { source: "kitty.txt" }, } ]*/// Incremental modeawait clear();console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, 
numDeleted: 0, numSkipped: 0, }*/// Indexing again should result in both documents getting skipped – also skipping the embedding operation!console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 2, }*/// If we provide no documents with incremental indexing mode, nothing will change.console.log( await index({ docsSource: [], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// If we mutate a document, the new version will be written and all old versions sharing the same source will be deleted.// This only affects the documents with the same source id!const changedDoc1 = { pageContent: "kitty updated", metadata: { source: "kitty.txt" },};console.log( await index({ docsSource: [changedDoc1], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 1, numSkipped: 0, }*/// Full modeawait clear();// In full mode the user should pass the full universe of content that should be indexed into the indexing function.// Any documents that are not passed into the indexing function and are present in the vectorStore will be deleted!// This behavior is useful to handle deletions of source documents.const allDocs = [doc1, doc2];console.log( await index({ docsSource: allDocs, recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// Say someone deleted the first doc:const doc2Only = [doc2];// Using full mode will clean up the deleted content as well.// This affects all documents regardless of source id!console.log( await index({ docsSource: doc2Only, recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 1, numSkipped: 1, }*/await clear();const newDoc1 = { pageContent: "kitty kitty kitty kitty kitty", metadata: { source: "kitty.txt" },};const newDoc2 = { pageContent: "doggy doggy the doggy", metadata: { source: "doggy.txt" },};const splitter = new CharacterTextSplitter({ separator: "t", keepSeparator: true, chunkSize: 12, chunkOverlap: 2,});const newDocs = await splitter.splitDocuments([newDoc1, newDoc2]);console.log(newDocs);/*[ { pageContent: 'kitty kit', metadata: {source: 'kitty.txt'} }, { pageContent: 'tty kitty ki', metadata: {source: 'kitty.txt'} }, { pageContent: 'tty kitty', metadata: {source: 'kitty.txt'} }, { pageContent: 'doggy doggy', metadata: {source: 'doggy.txt'} }, { pageContent: 'the doggy', metadata: {source: 'doggy.txt'} }]*/console.log( await index({ docsSource: newDocs, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 5, numUpdated: 0, numDeleted: 0, numSkipped: 0,}*/const changedDoggyDocs = [ { pageContent: "woof woof", metadata: { source: "doggy.txt" }, }, { pageContent: "woof woof woof", metadata: { source: "doggy.txt" }, },];console.log( await index({ docsSource: changedDoggyDocs, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 2, numUpdated: 0, numDeleted: 2, numSkipped: 0,}*/// Usage with document loaders// Create a document loaderclass MyCustomDocumentLoader extends BaseDocumentLoader { load() { return Promise.resolve([ { pageContent: "kitty", metadata: { 
source: "kitty.txt" }, }, { pageContent: "doggy", metadata: { source: "doggy.txt" }, }, ]); }}await clear();const loader = new MyCustomDocumentLoader();console.log( await index({ docsSource: loader, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0,}*/// Closing resourcesawait recordManager.end();await vectorStore.end(); #### API Reference: * [PostgresRecordManager](https://v02.api.js.langchain.com/classes/langchain_community_indexes_postgres.PostgresRecordManager.html) from `@langchain/community/indexes/postgres` * index from `langchain/indexes` * [PGVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [CharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `@langchain/textsplitters` * [BaseDocumentLoader](https://v02.api.js.langchain.com/classes/langchain_core_document_loaders_base.BaseDocumentLoader.html) from `@langchain/core/document_loaders/base` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to use indexing in your RAG pipelines. Next, check out some of the other sections on retrieval. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to add a semantic layer over the database ](/v0.2/docs/how_to/graph_semantic)[ Next LangChain Expression Language Cheatsheet ](/v0.2/docs/how_to/lcel_cheatsheet) * [How it works](#how-it-works) * [Deletion Modes](#deletion-modes) * [Requirements](#requirements) * [Caution](#caution) * [Quickstart](#quickstart) * [Next steps](#next-steps)
https://js.langchain.com/v0.2/docs/how_to/few_shot_examples_chat
How to use few shot examples in chat models ===========================================

This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.

There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.

The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.

**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/v0.2/docs/concepts/#message-types) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/v0.2/docs/how_to/few_shot_examples/) guide.

Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Example selectors](/v0.2/docs/concepts/#example-selectors) * [Chat models](/v0.2/docs/concepts/#chat-model) * [Vectorstores](/v0.2/docs/concepts/#vectorstores)

Fixed Examples[​](#fixed-examples "Direct link to Fixed Examples") ------------------------------------------------------------------

The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production. The basic components of the template are:

- `examples`: An array of object examples to include in the final prompt.
- `examplePrompt`: converts each example into 1 or more messages through its [`formatMessages`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html#formatMessages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.

Below is a simple demonstration. First, define the examples you’d like to include:

import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "@langchain/core/prompts";const examples = [ { input: "2+2", output: "4" }, { input: "2+3", output: "5" },];

Next, assemble them into the few-shot prompt template.
// This is a prompt template used to format each individual example.const examplePrompt = ChatPromptTemplate.fromMessages([ ["human", "{input}"], ["ai", "{output}"],]);const fewShotPrompt = new FewShotChatMessagePromptTemplate({ examplePrompt, examples, inputVariables: [], // no input variables});const result = await fewShotPrompt.invoke({});console.log(result.toChatMessages()); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+2", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "4", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+3", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }] Finally, we assemble the final prompt as shown below, passing `fewShotPrompt` directly into the `fromMessages` factory method, and use it with a model: const finalPrompt = ChatPromptTemplate.fromMessages([ ["system", "You are a wondrous wizard of math."], fewShotPrompt, ["human", "{input}"],]); ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); const chain = finalPrompt.pipe(model);await chain.invoke({ input: "What's the square of a triangle?" }); AIMessage { lc_serializable: true, lc_kwargs: { content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 23, promptTokens: 52, totalTokens: 75 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []} Dynamic few-shot prompting[​](#dynamic-few-shot-prompting "Direct link to Dynamic few-shot prompting") ------------------------------------------------------------------------------------------------------ Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the `examples` passed into `FewShotChatMessagePromptTemplate` with an `exampleSelector`. The other components remain the same as above! Our dynamic few-shot prompt template would look like: * `exampleSelector`: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the [BaseExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.BaseExampleSelector.html) interface. 
A common example is the vectorstore-backed [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) * `examplePrompt`: converts each example into 1 or more messages through its [`formatMessages`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html#formatMessages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.

These once again can be composed with other messages and chat templates to assemble your final prompt.

Let’s walk through an example with the `SemanticSimilarityExampleSelector`. Since this implementation uses a vectorstore to select examples based on semantic similarity, we will want to first populate the store. Since the basic idea here is that we want to search for and return examples most similar to the text input, we embed the `values` of our prompt examples rather than considering the keys:

import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";const examples = [ { input: "2+2", output: "4" }, { input: "2+3", output: "5" }, { input: "2+4", output: "6" }, { input: "What did the cow say to the moon?", output: "nothing at all" }, { input: "Write me a poem about the moon", output: "One for the moon, and one for me, who are we to talk about the moon?", },];const toVectorize = examples.map( (example) => `${example.input} ${example.output}`);const embeddings = new OpenAIEmbeddings();const vectorStore = await MemoryVectorStore.fromTexts( toVectorize, examples, embeddings);

### Create the `exampleSelector`[​](#create-the-exampleselector "Direct link to create-the-exampleselector")

With a vectorstore created, we can create the `exampleSelector`. Here we will call it in isolation, and set `k` on it to only fetch the two examples closest to the input.

const exampleSelector = new SemanticSimilarityExampleSelector({ vectorStore, k: 2,});// The prompt template will load examples by passing the input to the `selectExamples` methodawait exampleSelector.selectExamples({ input: "horse" }); [ { input: "What did the cow say to the moon?", output: "nothing at all" }, { input: "2+4", output: "6" }]

### Create prompt template[​](#create-prompt-template "Direct link to Create prompt template")

We now assemble the prompt template, using the `exampleSelector` created above.

import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "@langchain/core/prompts";// Define the few-shot prompt.const fewShotPrompt = new FewShotChatMessagePromptTemplate({ // The input variables select the values to pass to the exampleSelector inputVariables: ["input"], exampleSelector, // Define how each example will be formatted. // In this case, each example will become 2 messages: // 1 human, and 1 AI examplePrompt: ChatPromptTemplate.fromMessages([ ["human", "{input}"], ["ai", "{output}"], ]),});const results = await fewShotPrompt.invoke({ input: "What's 3+3?"
});console.log(results.toChatMessages()); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+3", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+2", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "4", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }] And we can pass this few-shot chat message prompt template into another chat prompt template: const finalPrompt = ChatPromptTemplate.fromMessages([ ["system", "You are a wondrous wizard of math."], fewShotPrompt, ["human", "{input}"],]);const result = await fewShotPrompt.invoke({ input: "What's 3+3?" });console.log(result); ChatPromptValue { lc_serializable: true, lc_kwargs: { messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+3", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+2", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "4", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ] }, lc_namespace: [ "langchain_core", "prompt_values" ], messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+3", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "2+2", name: undefined, additional_kwargs: {}, response_metadata: {} }, 
AIMessage { lc_serializable: true, lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "4", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ]}

### Use with a chat model[​](#use-with-an-chat-model "Direct link to Use with a chat model")

Finally, you can connect your model to the few-shot prompt.

### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0});

const chain = finalPrompt.pipe(model);await chain.invoke({ input: "What's 3+3?" });

AIMessage { lc_serializable: true, lc_kwargs: { content: "6", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "6", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 1, promptTokens: 51, totalTokens: 52 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}

Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------

You’ve now learned how to add few-shot examples to your chat prompts. Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with text completion models](/v0.2/docs/how_to/few_shot_examples), or the other [example selector how-to guides](/v0.2/docs/how_to/example_selectors/).
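One more note before moving on: if your example set grows over time, you don't need to rebuild the vector store. Example selectors that implement `BaseExampleSelector` expose an `addExample` method. A rough sketch, reusing the `exampleSelector` from above (the example contents here are made up):

```typescript
// Embeds one new example and stores it in the underlying vector store.
await exampleSelector.addExample({
  input: "What did the horse say to the moon?",
  output: "neigh-thing at all",
});

// Later selections can now surface the new example.
console.log(await exampleSelector.selectExamples({ input: "horse" }));
```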
https://js.langchain.com/v0.1/docs/get_started/introduction/
Introduction ============

**LangChain** is a framework for developing applications powered by language models. It enables applications that:

* **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
* **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

This framework consists of several parts.

* **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
* **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks. (_Python only_)
* **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API. (_Python only_)
* **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

![LangChain Diagram](/v0.1/assets/images/langchain_stack_feb_2024-101939844004a99c1b676723fc0ee5e9.webp)

Together, these products simplify the entire application lifecycle:

* **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
* **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
* **Deploy**: Turn any chain into an API with LangServe.

LangChain Libraries[​](#langchain-libraries "Direct link to LangChain Libraries") ---------------------------------------------------------------------------------

The main value props of the LangChain packages are:

1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

Get started[​](#get-started "Direct link to Get started") ---------------------------------------------------------

[Here's](/v0.1/docs/get_started/installation/) how to install LangChain, set up your environment, and start building. We recommend following our [Quickstart](/v0.1/docs/get_started/quickstart/) guide to familiarize yourself with the framework by building your first LangChain application. Read up on our [Security](/v0.1/docs/security/) best practices to make sure you're developing safely with LangChain.

note These docs focus on the JS/TS LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.

LangChain Expression Language (LCEL)[​](#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)") ----------------------------------------------------------------------------------------------------------------------------------

LCEL is a declarative way to compose chains.
LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.

* **[Overview](/v0.1/docs/expression_language/)**: LCEL and its benefits
* **[Interface](/v0.1/docs/expression_language/interface/)**: The standard interface for LCEL objects
* **[How-to](/v0.1/docs/expression_language/how_to/routing/)**: Key features of LCEL
* **[Cookbook](/v0.1/docs/expression_language/cookbook/)**: Example code for accomplishing common tasks

Modules[​](#modules "Direct link to Modules") ---------------------------------------------

LangChain provides standard, extendable interfaces and integrations for the following modules:

#### [Model I/O](/v0.1/docs/modules/model_io/)[​](#model-io "Direct link to model-io") Interface with language models

#### [Retrieval](/v0.1/docs/modules/data_connection/)[​](#retrieval "Direct link to retrieval") Interface with application-specific data

#### [Agents](/v0.1/docs/modules/agents/)[​](#agents "Direct link to agents") Let models choose which tools to use given high-level directives

Examples, ecosystem, and resources[​](#examples-ecosystem-and-resources "Direct link to Examples, ecosystem, and resources") ----------------------------------------------------------------------------------------------------------------------------

### [Use cases](/v0.1/docs/use_cases/)[​](#use-cases "Direct link to use-cases") Walkthroughs and techniques for common end-to-end use cases, like: * [Document question answering](/v0.1/docs/use_cases/question_answering/) * [RAG](/v0.1/docs/use_cases/question_answering/) * [Agents](/v0.1/docs/use_cases/autonomous_agents/) * and much more...

### [Integrations](/v0.1/docs/integrations/platforms/)[​](#integrations "Direct link to integrations") LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.1/docs/integrations/platforms/).

### [API reference](https://api.js.langchain.com)[​](#api-reference "Direct link to api-reference") Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental packages.

### [Developer's guide](/v0.1/docs/contributing/)[​](#developers-guide "Direct link to developers-guide") Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

### [Community](/v0.1/docs/community/)[​](#community "Direct link to community") Head to the [Community navigator](/v0.1/docs/community/) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
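To make the LCEL section above concrete, here is a minimal, hypothetical sketch of declarative composition; the prompt text and model name are placeholders, not taken from these docs:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatOpenAI } from "@langchain/openai";

// Each piece is a Runnable; .pipe() composes them into a chain.
const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short fact about {topic}"
);
const model = new ChatOpenAI({ temperature: 0 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// The same composed chain supports invoke, stream, and batch
// with no code changes.
console.log(await chain.invoke({ topic: "otters" }));
```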
https://js.langchain.com/v0.2/docs/how_to/merge_message_runs
How to merge consecutive messages of the same type ==================================================

The `mergeMessageRuns` function is available in `@langchain/core` version `0.2.8` and above.

Certain models do not support passing in consecutive messages of the same type (a.k.a. “runs” of the same message type). The `mergeMessageRuns` utility makes it easy to merge consecutive messages of the same type.

Basic usage[​](#basic-usage "Direct link to Basic usage") ---------------------------------------------------------

import { HumanMessage, SystemMessage, AIMessage, mergeMessageRuns,} from "@langchain/core/messages";const messages = [ new SystemMessage("you're a good assistant."), new SystemMessage("you always respond with a joke."), new HumanMessage({ content: [{ type: "text", text: "i wonder why it's called langchain" }], }), new HumanMessage("and who is harrison chasing anyways"), new AIMessage( 'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!' ), new AIMessage( "Why, he's probably chasing after the last cup of coffee in the office!" ),];const merged = mergeMessageRuns(messages);console.log( merged .map((x) => JSON.stringify( { role: x._getType(), content: x.content, }, null, 2 ) ) .join("\n\n"));

{ "role": "system", "content": "you're a good assistant.\nyou always respond with a joke."}{ "role": "human", "content": [ { "type": "text", "text": "i wonder why it's called langchain" }, { "type": "text", "text": "and who is harrison chasing anyways" } ]}{ "role": "ai", "content": "Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn't have the same ring to it!\nWhy, he's probably chasing after the last cup of coffee in the office!"}

Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character.

Chaining[​](#chaining "Direct link to Chaining") ------------------------------------------------

`mergeMessageRuns` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain:

import { ChatAnthropic } from "@langchain/anthropic";import { mergeMessageRuns } from "@langchain/core/messages";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0,});// Notice we don't pass in messages. This creates
// a RunnableLambda that takes messages as inputconst merger = mergeMessageRuns();const chain = merger.pipe(llm);await chain.invoke(messages);

AIMessage { lc_serializable: true, lc_kwargs: { content: [], additional_kwargs: { id: 'msg_01LsdS4bjQ3EznH7Tj4xujV1', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: [Object] }, tool_calls: [], usage_metadata: { input_tokens: 84, output_tokens: 3, total_tokens: 87 }, invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ 'langchain_core', 'messages' ], content: [], name: undefined, additional_kwargs: { id: 'msg_01LsdS4bjQ3EznH7Tj4xujV1', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: { input_tokens: 84, output_tokens: 3 } }, response_metadata: { id: 'msg_01LsdS4bjQ3EznH7Tj4xujV1', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null, usage: { input_tokens: 84, output_tokens: 3 } }, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: { input_tokens: 84, output_tokens: 3, total_tokens: 87 }}

Looking at [the LangSmith trace](https://smith.langchain.com/public/48d256fb-fd7e-48a0-bdfd-217ab74ad01d/r) we can see that before the messages are passed to the model they are merged.

Looking at just the merger, we can see that it’s a Runnable object that can be invoked like all Runnables:

await merger.invoke(messages);

[ SystemMessage { lc_serializable: true, lc_kwargs: { content: "you're a good assistant.\nyou always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, lc_namespace: [ 'langchain_core', 'messages' ], content: "you're a good assistant.\nyou always respond with a joke.", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, HumanMessage { lc_serializable: true, lc_kwargs: { content: [Array], name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, lc_namespace: [ 'langchain_core', 'messages' ], content: [ [Object], [Object] ], name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined }, AIMessage { lc_serializable: true, lc_kwargs: { content: `Well, I guess they thought "WordRope" and "SentenceString" just didn't have the same ring to it!\n` + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }, lc_namespace: [ 'langchain_core', 'messages' ], content: `Well, I guess they thought "WordRope" and "SentenceString" just didn't have the same ring to it!\n` + "Why, he's probably chasing after the last cup of coffee in the office!", name: undefined, additional_kwargs: {}, response_metadata: {}, id: undefined, tool_calls: [], invalid_tool_calls: [], usage_metadata: undefined }]

API reference[​](#api-reference "Direct link to API reference") ---------------------------------------------------------------

For a complete description of all arguments head to the [API reference](https://api.js.langchain.com/functions/langchain_core_messages.mergeMessageRuns.html).
https://js.langchain.com/v0.2/docs/how_to/few_shot_examples
How to use few shot examples ============================

In this guide, we’ll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.

A few-shot prompt template can be constructed from either a set of examples, or from an [Example Selector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.BaseExampleSelector.html) class responsible for choosing a subset of examples from the defined set.

This guide will cover few-shotting with string prompt templates. For a guide on few-shotting with chat messages for chat models, see [here](/v0.2/docs/how_to/few_shot_examples_chat/).

Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Example selectors](/v0.2/docs/concepts/#example-selectors) * [LLMs](/v0.2/docs/concepts/#llms) * [Vectorstores](/v0.2/docs/concepts/#vectorstores)

Create a formatter for the few-shot examples[​](#create-a-formatter-for-the-few-shot-examples "Direct link to Create a formatter for the few-shot examples") ------------------------------------------------------------------------------------------------------------------------------------------------------------

Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object.

import { PromptTemplate } from "@langchain/core/prompts";const examplePrompt = PromptTemplate.fromTemplate( "Question: {question}\n{answer}");

Creating the example set[​](#creating-the-example-set "Direct link to Creating the example set") ------------------------------------------------------------------------------------------------

Next, we’ll create a list of few-shot examples. Each example should be a dictionary representing an example input to the formatter prompt we defined above.

const examples = [ { question: "Who lived longer, Muhammad Ali or Alan Turing?", answer: ` Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali `, }, { question: "When was the founder of craigslist born?", answer: ` Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 `, }, { question: "Who was the maternal grandfather of George Washington?", answer: ` Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball `, }, { question: "Are both the directors of Jaws and Casino Royale from the same country?", answer: ` Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No `, },];

### Pass the examples and formatter to `FewShotPromptTemplate`[​](#pass-the-examples-and-formatter-to-fewshotprompttemplate "Direct link to pass-the-examples-and-formatter-to-fewshotprompttemplate")

Finally, create a [`FewShotPromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) object. This object takes in the few-shot examples and the formatter for the few-shot examples. When this `FewShotPromptTemplate` is formatted, it formats the passed examples using the `examplePrompt`, then adds them to the final prompt before `suffix`:

import { FewShotPromptTemplate } from "@langchain/core/prompts";const prompt = new FewShotPromptTemplate({ examples, examplePrompt, suffix: "Question: {input}", inputVariables: ["input"],});const formatted = await prompt.format({ input: "Who was the father of Mary Ball Washington?",});console.log(formatted.toString());

Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad AliQuestion: When was the founder of craigslist born? Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph BallQuestion: Are both the directors of Jaws and Casino Royale from the same country? Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: NoQuestion: Who was the father of Mary Ball Washington?

By providing the model with examples like this, we can guide the model to a better response.

Using an example selector[​](#using-an-example-selector "Direct link to Using an example selector") ---------------------------------------------------------------------------------------------------

We will reuse the example set and the formatter from the previous section.
However, instead of feeding the examples directly into the `FewShotPromptTemplate` object, we will feed them into an implementation of `ExampleSelector` called [`SemanticSimilarityExampleSelector`](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html). This class selects few-shot examples from the initial set based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.

To show what it looks like, let’s initialize an instance and call it in isolation:

Set your OpenAI API key for the embeddings model export OPENAI_API_KEY="..."

import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( // This is the list of examples available to select from. examples, // This is the embedding class used to produce embeddings which are used to measure semantic similarity. new OpenAIEmbeddings(), // This is the VectorStore class that is used to store the embeddings and do a similarity search over. MemoryVectorStore, { // This is the number of examples to produce. k: 1, });// Select the most similar example to the input.const question = "Who was the father of Mary Ball Washington?";const selectedExamples = await exampleSelector.selectExamples({ question });console.log(`Examples most similar to the input: ${question}`);for (const example of selectedExamples) { console.log("\n"); console.log( Object.entries(example) .map(([k, v]) => `${k}: ${v}`) .join("\n") );}

Examples most similar to the input: Who was the father of Mary Ball Washington?question: Who was the maternal grandfather of George Washington?answer: Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball

Now, let’s create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter prompt for the few-shot examples.

const prompt = new FewShotPromptTemplate({ exampleSelector, examplePrompt, suffix: "Question: {input}", inputVariables: ["input"],});const formatted = await prompt.invoke({ input: "Who was the father of Mary Ball Washington?",});console.log(formatted.toString());

Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph BallQuestion: Who was the father of Mary Ball Washington?

Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------

You’ve now learned how to add few-shot examples to your prompts.
Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with chat models](/v0.2/docs/how_to/few_shot_examples_chat), or the other [example selector how-to guides](/v0.2/docs/how_to/example_selectors/).
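Note that this page formats prompts but stops short of actually calling a model with them. As a minimal, hypothetical sketch (the completion model class and model name are assumptions, not from this guide), you could pipe the `FewShotPromptTemplate` built above into a text-completion LLM:

```typescript
import { OpenAI } from "@langchain/openai";

// A completion-style (non-chat) model suits this plain-string prompt.
// The model name here is illustrative.
const model = new OpenAI({ model: "gpt-3.5-turbo-instruct", temperature: 0 });

// `prompt` is the FewShotPromptTemplate from above; piping yields a chain
// that formats the few-shot prompt, then sends it to the model.
const chain = prompt.pipe(model);
console.log(
  await chain.invoke({ input: "Who was the father of Mary Ball Washington?" })
);
```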
https://js.langchain.com/v0.2/docs/how_to/structured_output
How to return structured data from a model ==========================================

It is often useful to have a model return output that matches some specific schema. One common use-case is extracting data from arbitrary text to insert into a traditional database or use with some other downstream system. This guide will show you a few different strategies you can use to do this.

Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models)

The `.withStructuredOutput()` method[​](#the-.withstructuredoutput-method "Direct link to the-.withstructuredoutput-method") ----------------------------------------------------------------------------------------------------------------------------

There are several strategies that models can use under the hood. For some of the most popular model providers, including [Anthropic](/v0.2/docs/integrations/platforms/anthropic/), [Google VertexAI](/v0.2/docs/integrations/platforms/google/), [Mistral](/v0.2/docs/integrations/chat/mistral/), and [OpenAI](/v0.2/docs/integrations/platforms/openai/), LangChain implements a common interface, called `.withStructuredOutput`, that abstracts away these strategies.

By invoking this method (and passing in [JSON schema](https://json-schema.org/) or a [Zod schema](https://zod.dev/)) the model will add whatever model parameters + output parsers are necessary to get back structured output matching the requested schema. If the model supports more than one way to do this (e.g., function calling vs JSON mode), you can configure which method to use by passing a `method` option into `.withStructuredOutput`.

Let’s look at some examples of this in action! We’ll use Zod to create a simple response schema.

### Pick your chat model: * OpenAI * Anthropic * MistralAI * Groq * VertexAI

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0});

#### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { z } from "zod";const joke = z.object({ setup: z.string().describe("The setup of the joke"), punchline: z.string().describe("The punchline to the joke"), rating: z.number().optional().describe("How funny the joke is, from 1 to 10"),});const structuredLlm = model.withStructuredOutput(joke);await structuredLlm.invoke("Tell me a joke about cats"); { setup: "Why don't cats play poker in the wild?", punchline: "Too many cheetahs.", rating: 7} One key point is that though we set our Zod schema as a variable named `joke`, Zod is not able to access that variable name, and therefore cannot pass it to the model. Though it is not required, we can pass a name for our schema in order to give the model additional context as to what our schema represents, improving performance: const structuredLlm = model.withStructuredOutput(joke, { name: "joke" });await structuredLlm.invoke("Tell me a joke about cats"); { setup: "Why don't cats play poker in the wild?", punchline: "Too many cheetahs!", rating: 7} The result is a JSON object. We can also pass in an OpenAI-style JSON schema dict if you prefer not to use Zod. This object should contain three properties: * `name`: The name of the schema to output. * `description`: A high level description of the schema to output. * `parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) dict. In this case, the response is also a dict: const structuredLlm = model.withStructuredOutput({ name: "joke", description: "Joke to tell user.", parameters: { title: "Joke", type: "object", properties: { setup: { type: "string", description: "The setup for the joke" }, punchline: { type: "string", description: "The joke's punchline" }, }, required: ["setup", "punchline"], },});await structuredLlm.invoke("Tell me a joke about cats"); { setup: "Why was the cat sitting on the computer?", punchline: "Because it wanted to keep an eye on the mouse!"} If you are using JSON Schema, you can take advantage of other more complex schema descriptions to create a similar effect. You can also use tool calling directly to allow the model to choose between options, if your chosen model supports it. 
This involves a bit more parsing and setup. See [this how-to guide](/v0.2/docs/how_to/tool_calling/) for more details.

### Specifying the output method (Advanced)[​](#specifying-the-output-method-advanced "Direct link to Specifying the output method (Advanced)")

For models that support more than one means of outputting data, you can specify the preferred one like this:

const structuredLlm = model.withStructuredOutput(joke, { method: "json_mode", name: "joke",});await structuredLlm.invoke( "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys");

{ setup: "Why don't cats play poker in the jungle?", punchline: "Too many cheetahs!"}

In the above example, we use OpenAI’s alternate JSON mode capability along with a more specific prompt. For specifics about the model you choose, peruse its entry in the [API reference pages](https://v02.api.js.langchain.com/).

Prompting techniques[​](#prompting-techniques "Direct link to Prompting techniques") ------------------------------------------------------------------------------------

You can also prompt models to output information in a given format. This approach relies on designing good prompts and then parsing the output of the models. This is the only option for models that don’t support `.withStructuredOutput()` or other built-in approaches.

### Using `JsonOutputParser`[​](#using-jsonoutputparser "Direct link to using-jsonoutputparser")

The following example uses the built-in [`JsonOutputParser`](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) to parse the output of a chat model prompted to match the given JSON schema. Note that we are adding `format_instructions` directly to the prompt from a method on the parser:

import { JsonOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";type Person = { name: string; height_in_meters: number;};type People = { people: Person[];};const formatInstructions = `Respond only in valid JSON. The JSON object you return should match the following schema:{{ people: [{{ name: "string", height_in_meters: "number" }}] }}Where people is an array of objects, each with a name and height_in_meters field.`;// Set up a parserconst parser = new JsonOutputParser<People>();// Promptconst prompt = await ChatPromptTemplate.fromMessages([ [ "system", "Answer the user query. Wrap the output in `json` tags\n{format_instructions}", ], ["human", "{query}"],]).partial({ format_instructions: formatInstructions,});

Let’s take a look at what information is sent to the model:

const query = "Anna is 23 years old and she is 6 feet tall";console.log((await prompt.format({ query })).toString());

System: Answer the user query. Wrap the output in `json` tagsRespond only in valid JSON. The JSON object you return should match the following schema:{{ people: [{{ name: "string", height_in_meters: "number" }}] }}Where people is an array of objects, each with a name and height_in_meters field.Human: Anna is 23 years old and she is 6 feet tall

And now let’s invoke it:

const chain = prompt.pipe(model).pipe(parser);await chain.invoke({ query }); { people: [ { name: "Anna", height_in_meters: 1.83 } ] }

For a deeper dive into using output parsers with prompting techniques for structured output, see [this guide](/v0.2/docs/how_to/output_parser_structured).
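A related option, not shown on this page, is the `StructuredOutputParser` from the main `langchain` package, which can generate the format instructions straight from a Zod schema instead of requiring you to hand-write them. A rough sketch, assuming the same `model` as above (the schema fields mirror this page's example):

```typescript
import { StructuredOutputParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

// The parser derives JSON format instructions directly from a Zod schema.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    name: z.string().describe("The person's name"),
    height_in_meters: z.number().describe("Height in meters"),
  })
);

const prompt = await ChatPromptTemplate.fromMessages([
  ["system", "Answer the user query.\n{format_instructions}"],
  ["human", "{query}"],
]).partial({ format_instructions: parser.getFormatInstructions() });

// Output parsers are Runnables, so the parser can be piped like any step.
const chain = prompt.pipe(model).pipe(parser);
console.log(await chain.invoke({ query: "Anna is 6 feet tall" }));
```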
### Custom Parsing[​](#custom-parsing "Direct link to Custom Parsing") You can also create a custom prompt and parser with [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language), using a plain function to parse the output from the model: import { AIMessage } from "@langchain/core/messages";import { ChatPromptTemplate } from "@langchain/core/prompts";type Person = { name: string; height_in_meters: number;};type People = { people: Person[];};const schema = `{{ people: [{{ name: "string", height_in_meters: "number" }}] }}`;// Promptconst prompt = await ChatPromptTemplate.fromMessages([ [ "system", `Answer the user query. Output your answer as JSON thatmatches the given schema: \`\`\`json\n{schema}\n\`\`\`.Make sure to wrap the answer in \`\`\`json and \`\`\` tags`, ], ["human", "{query}"],]).partial({ schema,});/** * Custom extractor * * Extracts JSON content from a string where * JSON is embedded between ```json and ``` tags. */const extractJson = (output: AIMessage): Array<People> => { const text = output.content as string; // Define the regular expression pattern to match JSON blocks const pattern = /```json(.*?)```/gs; // Find all non-overlapping matches of the pattern in the string const matches = text.match(pattern); // Process each match, attempting to parse it as JSON try { return ( matches?.map((match) => { // Remove the markdown code block syntax to isolate the JSON string const jsonStr = match.replace(/```json|```/g, "").trim(); return JSON.parse(jsonStr); }) ?? [] ); } catch (error) { throw new Error(`Failed to parse: ${output}`); }}; Here is the prompt sent to the model: const query = "Anna is 23 years old and she is 6 feet tall";console.log((await prompt.format({ query })).toString()); System: Answer the user query. Output your answer as JSON thatmatches the given schema: ```json{{ people: [{{ name: "string", height_in_meters: "number" }}] }}```.Make sure to wrap the answer in ```json and ``` tagsHuman: Anna is 23 years old and she is 6 feet tall And here's what it looks like when we invoke it: import { RunnableLambda } from "@langchain/core/runnables";const chain = prompt .pipe(model) .pipe(new RunnableLambda({ func: extractJson }));await chain.invoke({ query }); [ { people: [ { name: "Anna", height_in_meters: 1.83 } ] }] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you've learned a few methods to make a model output structured data. To learn more, check out the other how-to guides in this section, or the conceptual guide on tool calling.
https://js.langchain.com/v0.2/docs/integrations/platforms/
Providers ========= LangChain integrates with many providers. Partner Packages[​](#partner-packages "Direct link to Partner Packages") ------------------------------------------------------------------------ These providers have standalone `@langchain/{provider}` packages for improved versioning, dependency management and testing. * [Anthropic](https://www.npmjs.com/package/@langchain/anthropic) * [Cloudflare](https://www.npmjs.com/package/@langchain/cloudflare) * [Cohere](https://www.npmjs.com/package/@langchain/cohere) * [Exa](https://www.npmjs.com/package/@langchain/exa) * [Google GenAI](https://www.npmjs.com/package/@langchain/google-genai) * [Google VertexAI](https://www.npmjs.com/package/@langchain/google-vertexai) * [Google VertexAI Web](https://www.npmjs.com/package/@langchain/google-vertexai-web) * [Groq](https://www.npmjs.com/package/@langchain/groq) * [MistralAI](https://www.npmjs.com/package/@langchain/mistralai) * [MongoDB](https://www.npmjs.com/package/@langchain/mongodb) * [Nomic](https://www.npmjs.com/package/@langchain/nomic) * [OpenAI](https://www.npmjs.com/package/@langchain/openai) * [Pinecone](https://www.npmjs.com/package/@langchain/pinecone) * [Qdrant](https://www.npmjs.com/package/@langchain/qdrant) * [Redis](https://www.npmjs.com/package/@langchain/redis) * [Weaviate](https://www.npmjs.com/package/@langchain/weaviate) * [Yandex](https://www.npmjs.com/package/@langchain/yandex)
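As a quick illustration (the package choice here is arbitrary, and the model name mirrors examples elsewhere in these docs), a partner package is installed and used like any other npm dependency:

```typescript
// npm i @langchain/anthropic
import { ChatAnthropic } from "@langchain/anthropic";

// Each partner package is versioned and released independently
// of the core LangChain library.
const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });
```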
https://js.langchain.com/v0.2/docs/how_to/tools_prompting
How to add ad-hoc tool calling capability to LLMs and Chat Models ================================================================= Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Tool calling](/v0.2/docs/how_to/tool_calling/) In this guide we'll build a chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/v0.2/docs/how_to/tool_calling)) and instead just prompts the model directly to invoke tools. Setup[​](#setup "Direct link to Setup") --------------------------------------- We'll need to install the following packages: * npm * yarn * pnpm npm i @langchain/core zod yarn add @langchain/core zod pnpm add @langchain/core zod #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") # Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Create a tool[​](#create-a-tool "Direct link to Create a tool") --------------------------------------------------------------- First, we need to create a tool to call. For this example, we will create a custom tool from a function. For more information on creating custom tools, please see [this guide](/v0.2/docs/how_to/custom_tools). import { StructuredTool } from "@langchain/core/tools";import { z } from "zod";class Multiply extends StructuredTool { schema = z.object({ first_int: z.number(), second_int: z.number(), }); name = "multiply"; description = "Multiply two integers together."; async _call(input: z.infer<typeof this.schema>) { return (input.first_int * input.second_int).toString(); }}const multiply = new Multiply(); console.log(multiply.name);console.log(multiply.description); multiplyMultiply two integers together. await multiply.invoke({ first_int: 4, second_int: 5 }); 20 Creating our prompt[​](#creating-our-prompt "Direct link to Creating our prompt") --------------------------------------------------------------------------------- We'll want to write a prompt that specifies the tools the model has access to, the arguments to those tools, and the desired output format of the model. In this case we'll instruct it to output a JSON blob of the form `{"name": "...", "arguments": {...}}`. import { renderTextDescription } from "langchain/tools/render";const renderedTools = renderTextDescription([multiply]); import { ChatPromptTemplate } from "@langchain/core/prompts";const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:{rendered_tools}Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.`;const prompt = ChatPromptTemplate.fromMessages([ ["system", systemPrompt], ["user", "{input}"],]); Adding an output parser[​](#adding-an-output-parser "Direct link to Adding an output parser") --------------------------------------------------------------------------------------------- We'll use the `JsonOutputParser` for parsing our model's output to JSON.
### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { JsonOutputParser } from "@langchain/core/output_parsers";const chain = prompt.pipe(model).pipe(new JsonOutputParser());await chain.invoke({ input: "what's thirteen times 4", rendered_tools: renderedTools,}); { name: 'multiply', arguments: [ 13, 4 ] } Invoking the tool[​](#invoking-the-tool "Direct link to Invoking the tool") --------------------------------------------------------------------------- We can invoke the tool as part of the chain by passing along the model-generated "arguments" to it: import { RunnableLambda, RunnablePick } from "@langchain/core/runnables";const chain = prompt .pipe(model) .pipe(new JsonOutputParser()) .pipe(new RunnablePick("arguments")) .pipe( new RunnableLambda({ func: (input) => multiply.invoke({ first_int: input[0], second_int: input[1], }), }) );await chain.invoke({ input: "what's thirteen times 4", rendered_tools: renderedTools,}); 52 Choosing from multiple tools[​](#choosing-from-multiple-tools "Direct link to Choosing from multiple tools") ------------------------------------------------------------------------------------------------------------ Suppose we have multiple tools we want the chain to be able to choose from: class Add extends StructuredTool { schema = z.object({ first_int: z.number(), second_int: z.number(), }); name = "add"; description = "Add two integers together."; async _call(input: z.infer<typeof this.schema>) { return (input.first_int + input.second_int).toString(); }}const add = new Add();class Exponentiate extends StructuredTool { schema = z.object({ first_int: z.number(), second_int: z.number(), }); name = "exponentiate"; description = "Exponentiate the base to the exponent power."; async _call(input: z.infer<typeof this.schema>) { return Math.pow(input.first_int, input.second_int).toString(); }}const exponentiate = new Exponentiate(); If we want to run the model-selected tool, we can do so using a function that returns the tool based on the model output. Specifically, our function will return its own subchain that gets the "arguments" part of the model output and passes it to the chosen tool: import { StructuredToolInterface } from "@langchain/core/tools";const tools = [add, exponentiate, multiply];const toolChain = (modelOutput) => { const toolMap: Record<string, StructuredToolInterface> = Object.fromEntries( tools.map((tool) => [tool.name, tool]) ); const chosenTool = toolMap[modelOutput.name]; return new RunnablePick("arguments").pipe( new RunnableLambda({ func: (input) => chosenTool.invoke({ first_int: input[0], second_int: input[1], }), }) );};const toolChainRunnable = new RunnableLambda({ func: toolChain,});const renderedTools = renderTextDescription(tools);const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:{rendered_tools}Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'arguments' keys.`;const prompt = ChatPromptTemplate.fromMessages([ ["system", systemPrompt], ["user", "{input}"],]);const chain = prompt .pipe(model) .pipe(new JsonOutputParser()) .pipe(toolChainRunnable);await chain.invoke({ input: "what's 3 plus 1132", rendered_tools: renderedTools,}); 1135 Returning tool inputs[​](#returning-tool-inputs "Direct link to Returning tool inputs") --------------------------------------------------------------------------------------- It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by `RunnablePassthrough.assign`-ing the tool output. This will take whatever the input is to the `RunnablePassthrough` component (assumed to be an object) and add a key to it while still passing through everything that's currently in the input: import { RunnablePassthrough } from "@langchain/core/runnables";const chain = prompt .pipe(model) .pipe(new JsonOutputParser()) .pipe(RunnablePassthrough.assign({ output: toolChainRunnable }));await chain.invoke({ input: "what's 3 plus 1132", rendered_tools: renderedTools,}); { name: 'add', arguments: [ 3, 1132 ], output: '1135' } What's next?[​](#whats-next "Direct link to What's next?") ---------------------------------------------------------- This how-to guide shows the "happy path" when the model correctly outputs all the required tool information. In reality, if you're using more complex tools, you will start encountering errors from the model, especially for models that have not been fine-tuned for tool calling and for less capable models. You will need to be prepared to add strategies to improve the output from the model; e.g., * Provide few-shot examples. * Add error handling (e.g., catch the exception and feed it back to the LLM to ask it to correct its previous output), as in the sketch below.
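As a rough illustration of the second strategy, here is a minimal sketch that is not part of the original guide: `chainWithRetry` is a hypothetical name, and a single retry is shown for brevity. It wraps the `chain` defined above, catches a parsing or dispatch failure, and re-prompts the model with the error message appended:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Hypothetical one-shot retry wrapper: if JSON parsing or tool dispatch
// throws, re-invoke the chain with the error message appended so the
// model can correct its previous output.
const chainWithRetry = new RunnableLambda({
  func: async (input: { input: string; rendered_tools: string }) => {
    try {
      return await chain.invoke(input);
    } catch (e) {
      const correction =
        `${input.input}\n\nYour previous response failed with: ` +
        `${(e as Error).message}. Respond again with a valid JSON blob ` +
        `containing 'name' and 'arguments' keys.`;
      return await chain.invoke({ ...input, input: correction });
    }
  },
});

await chainWithRetry.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});
```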
https://js.langchain.com/v0.2/docs/how_to/output_parser_structured
How to use output parsers to parse an LLM response into structured format ========================================================================= Prerequisites This guide assumes familiarity with the following concepts: * [Output parsers](/v0.2/docs/concepts#output-parsers) * [Chat models](/v0.2/docs/concepts#chat-models) Language models output text. But there are times when you want to get more structured information than just text back. While some model providers support [built-in ways to return structured output](/v0.2/docs/how_to/structured_output), not all do. For these providers, you must use prompting to encourage the model to return structured data in the desired format. LangChain has [output parsers](/v0.2/docs/concepts#output-parsers), which can help parse model outputs into usable objects. We'll go over a few examples below. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- The primary type of output parser for working with structured data in model responses is the [`StructuredOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html). In the example below, we define a schema for the type of output we expect from the model using [`zod`](https://zod.dev). First, let's see the default formatting instructions we'll plug into the prompt: ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { z } from "zod";import { RunnableSequence } from "@langchain/core/runnables";import { StructuredOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";const zodSchema = z.object({ answer: z.string().describe("answer to the user's question"), source: z .string() .describe( "source used to answer the user's question, should be a website." ),});const parser = StructuredOutputParser.fromZodSchema(zodSchema);const chain = RunnableSequence.from([ ChatPromptTemplate.fromTemplate( "Answer the user's question as best as possible.\n{format_instructions}\n{question}" ), model, parser,]);console.log(parser.getFormatInstructions()); You must format your output as a JSON value that adheres to a given "JSON Schema" instance."JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!Here is the JSON Schema instance your output must adhere to.
Include the enclosing markdown codeblock:```json{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}``` Next, let's invoke the chain: const response = await chain.invoke({ question: "What is the capital of France?", format_instructions: parser.getFormatInstructions(),});console.log(response); { answer: "The capital of France is Paris.", source: "https://en.wikipedia.org/wiki/Paris"} Output parsers implement the [Runnable interface](/v0.2/docs/how_to/#langchain-expression-language-lcel), the basic building block of the [LangChain Expression Language (LCEL)](/v0.2/docs/how_to/#langchain-expression-language-lcel). This means they support `invoke`, `stream`, `batch`, and `streamLog` calls. Validation[​](#validation "Direct link to Validation") ------------------------------------------------------ One feature of the `StructuredOutputParser` is that it supports stricter Zod validations. For example, if we pass a simulated model output that does not conform to the schema, we get a detailed type error: import { AIMessage } from "@langchain/core/messages";await parser.invoke(new AIMessage(`{"badfield": "foo"}`)); Error: Failed to parse. Text: "{"badfield": "foo"}". Error: [ { "code": "invalid_type", "expected": "string", "received": "undefined", "path": [ "answer" ], "message": "Required" }, { "code": "invalid_type", "expected": "string", "received": "undefined", "path": [ "source" ], "message": "Required" }] Compared to: await parser.invoke( new AIMessage(`{"answer": "Paris", "source": "I made it up"}`)); { answer: "Paris", source: "I made it up" } More advanced Zod validations are supported as well; a short sketch appears at the end of this page. To learn more, check out the [Zod documentation](https://zod.dev). Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- While all parsers are runnables and support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. The `StructuredOutputParser` does not support partial streaming because it validates the output at each step.
If you try to stream using a chain with this output parser, the chain will simply yield the fully parsed output: const stream = await chain.stream({ question: "What is the capital of France?", format_instructions: parser.getFormatInstructions(),});for await (const s of stream) { console.log(s);} { answer: "The capital of France is Paris.", source: "https://en.wikipedia.org/wiki/Paris"} The simpler [`JsonOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html), however, supports streaming through partial outputs: import { JsonOutputParser } from "@langchain/core/output_parsers";const template = `Return a JSON object with a single key named "answer" that answers the following question: {question}.Do not wrap the JSON output in markdown blocks.`;const jsonPrompt = ChatPromptTemplate.fromTemplate(template);const jsonParser = new JsonOutputParser();const jsonChain = jsonPrompt.pipe(model).pipe(jsonParser);const stream = await jsonChain.stream({ question: "Who invented the microscope?",});for await (const s of stream) { console.log(s);} {}{ answer: "" }{ answer: "The" }{ answer: "The invention" }{ answer: "The invention of" }{ answer: "The invention of the" }{ answer: "The invention of the microscope" }{ answer: "The invention of the microscope is" }{ answer: "The invention of the microscope is attributed" }{ answer: "The invention of the microscope is attributed to" }{ answer: "The invention of the microscope is attributed to Hans" }{ answer: "The invention of the microscope is attributed to Hans L" }{ answer: "The invention of the microscope is attributed to Hans Lippers"}{ answer: "The invention of the microscope is attributed to Hans Lippershey"}{ answer: "The invention of the microscope is attributed to Hans Lippershey,"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zach"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Jans"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen,"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Anton"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 4 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 8 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 12 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 13 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 18 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 
20 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 26 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 29 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 33 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 38 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 43 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 48 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 51 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 52 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 57 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 63 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 73 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 80 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 81 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 85 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 94 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 99 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 108 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 112 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 118 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 127 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 138 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 145 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 149 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 150 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 151 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 
157 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 159 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 163 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 167 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 171 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 175 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 176 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 181 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 186 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 190 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 202 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 203 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 209 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 214 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 226 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 239 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 242 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 246 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 253 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 257 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 262 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 265 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 268 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 273 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 288 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 300 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 
303 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 311 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 316 more characters}{ answer: "The invention of the microscope is attributed to Hans Lippershey, Zacharias Janssen, and Antonie van"... 317 more characters} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've learned about using output parsers to parse structured output from prompted models. Next, check out the [guide on tool calling](/v0.2/docs/how_to/tool_calling), a more built-in way of obtaining structured output that some model providers support, or read more about output parsers for other types of structured data like [XML](/v0.2/docs/how_to/output_parser_xml).
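As referenced in the validation section above, stricter Zod refinements can also catch semantically invalid output. A small illustrative sketch (not from the original page), reusing the same answer/source shape but requiring `source` to be a well-formed URL:

```typescript
import { z } from "zod";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

// Same shape as zodSchema above, but `source` must now parse as a URL;
// outputs like "I made it up" will be rejected at parse time.
const strictSchema = z.object({
  answer: z.string().describe("answer to the user's question"),
  source: z
    .string()
    .url()
    .describe("source used to answer the user's question, should be a website."),
});

const strictParser = StructuredOutputParser.fromZodSchema(strictSchema);
```

Swapping `strictParser` into the chain above would make the earlier `{"answer": "Paris", "source": "I made it up"}` example fail fast with a validation error instead of passing through.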
https://js.langchain.com/v0.2/docs/people/
People ====== There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐! This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews. Top reviewers[​](#top-reviewers "Direct link to Top reviewers") --------------------------------------------------------------- As LangChain has grown, the amount of surface area that maintainers cover has grown as well. Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏! [![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg) [![Avatar for sullivan-sean](https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4)](https://github.com/sullivan-sean)[@sullivan-sean](https://github.com/sullivan-sean) [![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo) [![Avatar for ppramesi](https://avatars.githubusercontent.com/u/6775031?v=4)](https://github.com/ppramesi)[@ppramesi](https://github.com/ppramesi) [![Avatar for jacobrosenthal](https://avatars.githubusercontent.com/u/455796?v=4)](https://github.com/jacobrosenthal)[@jacobrosenthal](https://github.com/jacobrosenthal) [![Avatar for mieslep](https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4)](https://github.com/mieslep)[@mieslep](https://github.com/mieslep) Top recent contributors[​](#top-recent-contributors "Direct link to Top recent contributors") --------------------------------------------------------------------------------------------- The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact. Thank you all so much for your time and efforts in making LangChain better ❤️! 
[![Avatar for sinedied](https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4)](https://github.com/sinedied)[@sinedied](https://github.com/sinedied) [![Avatar for easwee](https://avatars.githubusercontent.com/u/2518825?u=a24026bc5ed35688174b1a36f3c29eda594d38d7&v=4)](https://github.com/easwee)[@easwee](https://github.com/easwee) [![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg) [![Avatar for Anush008](https://avatars.githubusercontent.com/u/46051506?u=026f5f140e8b7ba4744bf971f9ebdea9ebab67ca&v=4)](https://github.com/Anush008)[@Anush008](https://github.com/Anush008) [![Avatar for jeasonnow](https://avatars.githubusercontent.com/u/16950207?u=ab2d0d4f1574398ac842e6bb3c2ba020ab7711eb&v=4)](https://github.com/jeasonnow)[@jeasonnow](https://github.com/jeasonnow) [![Avatar for rahilvora](https://avatars.githubusercontent.com/u/5127548?u=0cd74312c28da39646785409fb0a37a9b3d3420a&v=4)](https://github.com/rahilvora)[@rahilvora](https://github.com/rahilvora) [![Avatar for lukywong](https://avatars.githubusercontent.com/u/1433871?v=4)](https://github.com/lukywong)[@lukywong](https://github.com/lukywong) [![Avatar for fahreddinozcan](https://avatars.githubusercontent.com/u/88107904?v=4)](https://github.com/fahreddinozcan)[@fahreddinozcan](https://github.com/fahreddinozcan) [![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo) [![Avatar for nicoloboschi](https://avatars.githubusercontent.com/u/23314389?u=2014e20e246530fa89bd902fe703b6f9e6ecf833&v=4)](https://github.com/nicoloboschi)[@nicoloboschi](https://github.com/nicoloboschi) [![Avatar for davidfant](https://avatars.githubusercontent.com/u/17096641?u=9b935c68c077d53642c1b4aff62f04d08e2ffac7&v=4)](https://github.com/davidfant)[@davidfant](https://github.com/davidfant) [![Avatar for mishushakov](https://avatars.githubusercontent.com/u/10400064?u=581d97314df325c15ec221f64834003d3bba5cc1&v=4)](https://github.com/mishushakov)[@mishushakov](https://github.com/mishushakov) [![Avatar for lokesh-couchbase](https://avatars.githubusercontent.com/u/113521973?v=4)](https://github.com/lokesh-couchbase)[@lokesh-couchbase](https://github.com/lokesh-couchbase) [![Avatar for CahidArda](https://avatars.githubusercontent.com/u/57228345?v=4)](https://github.com/CahidArda)[@CahidArda](https://github.com/CahidArda) [![Avatar for sarangan12](https://avatars.githubusercontent.com/u/602456?u=d39962c60b0ac5fea4e97cb67433a42c736c3c5b&v=4)](https://github.com/sarangan12)[@sarangan12](https://github.com/sarangan12) [![Avatar for MJDeligan](https://avatars.githubusercontent.com/u/48515433?v=4)](https://github.com/MJDeligan)[@MJDeligan](https://github.com/MJDeligan) [![Avatar for karol-f](https://avatars.githubusercontent.com/u/893082?u=0cda88d40a24ee696580f2e62f5569f49117cf40&v=4)](https://github.com/karol-f)[@karol-f](https://github.com/karol-f) [![Avatar for janvi-kalra](https://avatars.githubusercontent.com/u/119091286?u=ed9e9d72bbf9964b80f81e5ba8d1d5b2f860c23f&v=4)](https://github.com/janvi-kalra)[@janvi-kalra](https://github.com/janvi-kalra) [![Avatar for TeCHiScy](https://avatars.githubusercontent.com/u/741195?u=e5937011ef84ff8a4b4b62ac1926a291c04f5d8b&v=4)](https://github.com/TeCHiScy)[@TeCHiScy](https://github.com/TeCHiScy) [![Avatar for 
cinqisap](https://avatars.githubusercontent.com/u/158295355?v=4)](https://github.com/cinqisap)[@cinqisap](https://github.com/cinqisap) Core maintainers[​](#core-maintainers "Direct link to Core maintainers") ------------------------------------------------------------------------ Hello there 👋! We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths with at least one of us already. [![Avatar for bracesproul](https://avatars.githubusercontent.com/u/46789226?u=83f467441c4b542b900fe2bb8fe45e26bf918da0&v=4)](https://github.com/bracesproul)[@bracesproul](https://github.com/bracesproul) [![Avatar for dqbd](https://avatars.githubusercontent.com/u/1443449?u=fe32372ae8f497065ef0a1c54194d9dff36fb81d&v=4)](https://github.com/dqbd)[@dqbd](https://github.com/dqbd) [![Avatar for hwchase17](https://avatars.githubusercontent.com/u/11986836?u=f4c4f21a82b2af6c9f91e1f1d99ea40062f7a101&v=4)](https://github.com/hwchase17)[@hwchase17](https://github.com/hwchase17) [![Avatar for nfcampos](https://avatars.githubusercontent.com/u/56902?u=fdb30e802c68bc338dd9c0820f713e4fdac75db7&v=4)](https://github.com/nfcampos)[@nfcampos](https://github.com/nfcampos) [![Avatar for jacoblee93](https://avatars.githubusercontent.com/u/6952323?u=d785f9406c5a78ebd75922567b2693fb643c3bb0&v=4)](https://github.com/jacoblee93)[@jacoblee93](https://github.com/jacoblee93) Top all-time contributors[​](#top-all-time-contributors "Direct link to Top all-time contributors") --------------------------------------------------------------------------------------------------- And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟: [![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg) [![Avatar for ppramesi](https://avatars.githubusercontent.com/u/6775031?v=4)](https://github.com/ppramesi)[@ppramesi](https://github.com/ppramesi) [![Avatar for jacobrosenthal](https://avatars.githubusercontent.com/u/455796?v=4)](https://github.com/jacobrosenthal)[@jacobrosenthal](https://github.com/jacobrosenthal) [![Avatar for sullivan-sean](https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4)](https://github.com/sullivan-sean)[@sullivan-sean](https://github.com/sullivan-sean) [![Avatar for sinedied](https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4)](https://github.com/sinedied)[@sinedied](https://github.com/sinedied) [![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo) [![Avatar for skarard](https://avatars.githubusercontent.com/u/602085?u=f8a9736cfa9fe8875d19861b0276e24de8f3d0a0&v=4)](https://github.com/skarard)[@skarard](https://github.com/skarard) [![Avatar for chasemcdo](https://avatars.githubusercontent.com/u/74692158?u=9c25a170d24cc30f10eafc4d44a38067cdf5eed8&v=4)](https://github.com/chasemcdo)[@chasemcdo](https://github.com/chasemcdo) [![Avatar for MaximeThoonsen](https://avatars.githubusercontent.com/u/4814551?u=efb35c6a7dc1ce99dfa8ac8f0f1314cdb4fddfe1&v=4)](https://github.com/MaximeThoonsen)[@MaximeThoonsen](https://github.com/MaximeThoonsen) [![Avatar for easwee](https://avatars.githubusercontent.com/u/2518825?u=a24026bc5ed35688174b1a36f3c29eda594d38d7&v=4)](https://github.com/easwee)[@easwee](https://github.com/easwee) [![Avatar for 
mieslep](https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4)](https://github.com/mieslep)[@mieslep](https://github.com/mieslep) [![Avatar for ysnows](https://avatars.githubusercontent.com/u/11255869?u=b0b519b6565c43d01795ba092521c8677f30134c&v=4)](https://github.com/ysnows)[@ysnows](https://github.com/ysnows) [![Avatar for tyumentsev4](https://avatars.githubusercontent.com/u/56769451?u=088102b6160822bc68c25a2a5df170080d0b16a2&v=4)](https://github.com/tyumentsev4)[@tyumentsev4](https://github.com/tyumentsev4) [![Avatar for nickscamara](https://avatars.githubusercontent.com/u/20311743?u=29bf2391ae34297a12a88d813731b0bdf289e4a5&v=4)](https://github.com/nickscamara)[@nickscamara](https://github.com/nickscamara) [![Avatar for nigel-daniels](https://avatars.githubusercontent.com/u/4641452?v=4)](https://github.com/nigel-daniels)[@nigel-daniels](https://github.com/nigel-daniels) [![Avatar for MJDeligan](https://avatars.githubusercontent.com/u/48515433?v=4)](https://github.com/MJDeligan)[@MJDeligan](https://github.com/MJDeligan) [![Avatar for malandis](https://avatars.githubusercontent.com/u/3690240?v=4)](https://github.com/malandis)[@malandis](https://github.com/malandis) [![Avatar for danielchalef](https://avatars.githubusercontent.com/u/131175?u=332fe36f12d9ffe9e4414dc776b381fe801a9c53&v=4)](https://github.com/danielchalef)[@danielchalef](https://github.com/danielchalef) [![Avatar for Anush008](https://avatars.githubusercontent.com/u/46051506?u=026f5f140e8b7ba4744bf971f9ebdea9ebab67ca&v=4)](https://github.com/Anush008)[@Anush008](https://github.com/Anush008) [![Avatar for mfortman11](https://avatars.githubusercontent.com/u/6100513?u=c758a02fc05dc36315fcfadfccd6208883436cb8&v=4)](https://github.com/mfortman11)[@mfortman11](https://github.com/mfortman11) [![Avatar for kwkr](https://avatars.githubusercontent.com/u/20127759?v=4)](https://github.com/kwkr)[@kwkr](https://github.com/kwkr) [![Avatar for fahreddinozcan](https://avatars.githubusercontent.com/u/88107904?v=4)](https://github.com/fahreddinozcan)[@fahreddinozcan](https://github.com/fahreddinozcan) [![Avatar for ewfian](https://avatars.githubusercontent.com/u/12423122?u=681de0c470e9b349963ee935ddfd6b2e097e7181&v=4)](https://github.com/ewfian)[@ewfian](https://github.com/ewfian) [![Avatar for Swimburger](https://avatars.githubusercontent.com/u/3382717?u=5a84a173b0e80effc9161502c0848bf06c84bde9&v=4)](https://github.com/Swimburger)[@Swimburger](https://github.com/Swimburger) [![Avatar for jeasonnow](https://avatars.githubusercontent.com/u/16950207?u=ab2d0d4f1574398ac842e6bb3c2ba020ab7711eb&v=4)](https://github.com/jeasonnow)[@jeasonnow](https://github.com/jeasonnow) [![Avatar for sarangan12](https://avatars.githubusercontent.com/u/602456?u=d39962c60b0ac5fea4e97cb67433a42c736c3c5b&v=4)](https://github.com/sarangan12)[@sarangan12](https://github.com/sarangan12) [![Avatar for jasondotparse](https://avatars.githubusercontent.com/u/13938372?u=0e3f80aa515c41b7d9084b73d761cad378ebdc7a&v=4)](https://github.com/jasondotparse)[@jasondotparse](https://github.com/jasondotparse) [![Avatar for mishushakov](https://avatars.githubusercontent.com/u/10400064?u=581d97314df325c15ec221f64834003d3bba5cc1&v=4)](https://github.com/mishushakov)[@mishushakov](https://github.com/mishushakov) [![Avatar for kristianfreeman](https://avatars.githubusercontent.com/u/922353?u=ad00df1efd8f04a469de6087ee3cd7d7012533f7&v=4)](https://github.com/kristianfreeman)[@kristianfreeman](https://github.com/kristianfreeman) [![Avatar for 
[@neebdev](https://github.com/neebdev) [@tsg](https://github.com/tsg) [@lokesh-couchbase](https://github.com/lokesh-couchbase) [@nicoloboschi](https://github.com/nicoloboschi) [@zackproser](https://github.com/zackproser) [@justindra](https://github.com/justindra) [@vincelwt](https://github.com/vincelwt) [@cwoolum](https://github.com/cwoolum) [@sunner](https://github.com/sunner) [@rahilvora](https://github.com/rahilvora) [@lukywong](https://github.com/lukywong) [@mayooear](https://github.com/mayooear) [@chitalian](https://github.com/chitalian) [@paaatrrrick](https://github.com/paaatrrrick) [@alexleventer](https://github.com/alexleventer) [@3eif](https://github.com/3eif) [@BitVoyagerMan](https://github.com/BitVoyagerMan) [@xixixao](https://github.com/xixixao) [@jo32](https://github.com/jo32) [@RohitMidha23](https://github.com/RohitMidha23) [@karol-f](https://github.com/karol-f) [@konstantinov-raft](https://github.com/konstantinov-raft) [@volodymyr-memsql](https://github.com/volodymyr-memsql) [@jameshfisher](https://github.com/jameshfisher) [@the-powerpointer](https://github.com/the-powerpointer) [@davidfant](https://github.com/davidfant) [@MthwRobinson](https://github.com/MthwRobinson) [@SimonPrammer](https://github.com/SimonPrammer) [@munkhorgil](https://github.com/munkhorgil) [@alx13](https://github.com/alx13) [@castroCrea](https://github.com/castroCrea) [@samheutmaker](https://github.com/samheutmaker) [@archie-swif](https://github.com/archie-swif) [@valdo99](https://github.com/valdo99) [@gmpetrov](https://github.com/gmpetrov) [@mattzcarey](https://github.com/mattzcarey) [@albertpurnama](https://github.com/albertpurnama) [@CahidArda](https://github.com/CahidArda) [@yroc92](https://github.com/yroc92) [@Basti-an](https://github.com/Basti-an) [@CarlosZiegler](https://github.com/CarlosZiegler) [@iloveitaly](https://github.com/iloveitaly) [@dilling](https://github.com/dilling) [@anselm94](https://github.com/anselm94) [@gramliu](https://github.com/gramliu) [@jeffchuber](https://github.com/jeffchuber) [@ywkim](https://github.com/ywkim) [@jirimoravcik](https://github.com/jirimoravcik) [@janvi-kalra](https://github.com/janvi-kalra) [@yuku](https://github.com/yuku) [@conroywhitney](https://github.com/conroywhitney) [@Czechh](https://github.com/Czechh) [@adam101](https://github.com/adam101) [@OlegIvaniv](https://github.com/OlegIvaniv) [@jaclar](https://github.com/jaclar) [@TeCHiScy](https://github.com/TeCHiScy) [@ivoneijr](https://github.com/ivoneijr) [@tonisives](https://github.com/tonisives) [@Njuelle](https://github.com/Njuelle) [@Roland0511](https://github.com/Roland0511) [@SebastjanPrachovskij](https://github.com/SebastjanPrachovskij) [@cinqisap](https://github.com/cinqisap) [@dylanintech](https://github.com/dylanintech) [@andrewnguonly](https://github.com/andrewnguonly) [@ShaunBaker](https://github.com/ShaunBaker) [@machulav](https://github.com/machulav) [@dersia](https://github.com/dersia) [@joshsny](https://github.com/joshsny) [@jl4nz](https://github.com/jl4nz) [@eactisgrosso](https://github.com/eactisgrosso) [@frankolson](https://github.com/frankolson) [@uthmanmoh](https://github.com/uthmanmoh) [@Jordan-Gilliam](https://github.com/Jordan-Gilliam) [@winor30](https://github.com/winor30) [@willemmulder](https://github.com/willemmulder) [@aixgeek](https://github.com/aixgeek) [@seuha516](https://github.com/seuha516) [@mhart](https://github.com/mhart) [@mvaker](https://github.com/mvaker) [@vitaly-ps](https://github.com/vitaly-ps) [@cbh123](https://github.com/cbh123) [@Neverland3124](https://github.com/Neverland3124) [@jasonnathan](https://github.com/jasonnathan) [@Maanethdesilva](https://github.com/Maanethdesilva) [@fuleinist](https://github.com/fuleinist) [@kwadhwa18](https://github.com/kwadhwa18) [@sousousore1](https://github.com/sousousore1) [@seth-25](https://github.com/seth-25) [@tomi-mercado](https://github.com/tomi-mercado) [@JHeidinga](https://github.com/JHeidinga) [@niklas-lohmann](https://github.com/niklas-lohmann) [@Durisvk](https://github.com/Durisvk) [@BjoernRave](https://github.com/BjoernRave) [@crazyurus](https://github.com/crazyurus) [@qalqi](https://github.com/qalqi) [@katarinasupe](https://github.com/katarinasupe) [@andrewlei](https://github.com/andrewlei) [@floomby](https://github.com/floomby) [@milanjrodd](https://github.com/milanjrodd) [@NickMandylas](https://github.com/NickMandylas) [@DravenCat](https://github.com/DravenCat) [@Alireza29675](https://github.com/Alireza29675) [@zhengxs2018](https://github.com/zhengxs2018) [@clemenspeters](https://github.com/clemenspeters) [@cmtoomey](https://github.com/cmtoomey) [@igorshapiro](https://github.com/igorshapiro) [@ezynda3](https://github.com/ezynda3) [@more-by-more](https://github.com/more-by-more) [@noble-varghese](https://github.com/noble-varghese) [@SananR](https://github.com/SananR) [@fraserxu](https://github.com/fraserxu) [@ashvardanian](https://github.com/ashvardanian) [@adeelehsan](https://github.com/adeelehsan) [@henriquegdantas](https://github.com/henriquegdantas) [@evad1n](https://github.com/evad1n) [@benjibc](https://github.com/benjibc) [@P-E-B](https://github.com/P-E-B) [@omikader](https://github.com/omikader) [@jasongill](https://github.com/jasongill) [@Luisotee](https://github.com/Luisotee) [@puigde](https://github.com/puigde) [@Adrastopoulos](https://github.com/Adrastopoulos) [@chase-crumbaugh](https://github.com/chase-crumbaugh) [@Zeneos](https://github.com/Zeneos) [@joseanu](https://github.com/joseanu) [@JackFener](https://github.com/JackFener) [@swyxio](https://github.com/swyxio) [@pczekaj](https://github.com/pczekaj) [@devinburnette](https://github.com/devinburnette) [@ananis25](https://github.com/ananis25) [@joaopcm](https://github.com/joaopcm) [@SalehHindi](https://github.com/SalehHindi) [@JamsheedMistri](https://github.com/JamsheedMistri) [@cmanou](https://github.com/cmanou) [@micahriggan](https://github.com/micahriggan) [@ovuruska](https://github.com/ovuruska) [@w00ing](https://github.com/w00ing) [@madmed88](https://github.com/madmed88) [@ardsh](https://github.com/ardsh) [@JoeABCDEF](https://github.com/JoeABCDEF) [@saul-jb](https://github.com/saul-jb) [@JTCorrin](https://github.com/JTCorrin) [@zandko](https://github.com/zandko) [@federicoestevez](https://github.com/federicoestevez) [@martinseanhunt](https://github.com/martinseanhunt) [@functorism](https://github.com/functorism) [@erictt](https://github.com/erictt) [@WilliamEspegren](https://github.com/WilliamEspegren) [@lesters](https://github.com/lesters) [@my8bit](https://github.com/my8bit) [@erhant](https://github.com/erhant)

We're so thankful for your support!

And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people).
https://js.langchain.com/v0.2/docs/how_to/migrate_agent
How to migrate from legacy LangChain agents to LangGraph
=========================================================

Here we focus on how to move from legacy LangChain agents to LangGraph agents. LangChain agents (the [`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) in particular) have multiple configuration parameters. In this notebook we will show how those parameters map to the LangGraph [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html). For more information on how to build agentic workflows in LangGraph, check out the [docs here](https://langchain-ai.github.io/langgraphjs/how-tos/).

#### Prerequisites[​](#prerequisites "Direct link to Prerequisites")

This how-to guide uses Anthropic's `"claude-3-haiku-20240307"` as the LLM. If you are running this guide as a notebook, set your Anthropic API key to run.

```typescript
// process.env.ANTHROPIC_API_KEY = "sk-...";

// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
// process.env.LANGCHAIN_TRACING_V2 = "true";
// process.env.LANGCHAIN_PROJECT = "How to migrate: LangGraphJS";
```

Basic Usage[​](#basic-usage "Direct link to Basic Usage")
---------------------------------------------------------

For basic creation and usage of a tool-calling ReAct-style agent, the functionality is the same. First, let's define a model and tool(s), then we'll use those to create an agent.

The `tool` function is available in `@langchain/core` version 0.2.7 and above. If you are on an older version of core, you should instantiate and use [`DynamicStructuredTool`](https://api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) instead.

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-haiku-20240307",
  temperature: 0,
});

const magicTool = tool(
  async ({ input }: { input: number }) => {
    return `${input + 2}`;
  },
  {
    name: "magic_function",
    description: "Applies a magic function to an input.",
    schema: z.object({
      input: z.number(),
    }),
  }
);

const tools = [magicTool];

const query = "what is the value of magic_function(3)?";
```

For the LangChain [`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html), we define a prompt with a placeholder for the agent's scratchpad. The agent can be invoked as follows:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createToolCallingAgent } from "langchain/agents";
import { AgentExecutor } from "langchain/agents";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = createToolCallingAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

await agentExecutor.invoke({ input: query });
```

```
{
  input: "what is the value of magic_function(3)?",
  output: "The value of magic_function(3) is 5."
}
```

LangGraph's off-the-shelf [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html) manages a state that is defined by a list of messages.
In a similar way to the `AgentExecutor`, it will continue to process the list until there are no tool calls in the agent’s output. To kick it off, we input a list of messages. The output will contain the entire state of the graph - in this case, the conversation history and messages representing intermediate tool calls: import { createReactAgent } from "@langchain/langgraph/prebuilt";import { HumanMessage } from "@langchain/core/messages";const app = createReactAgent({ llm, tools });let agentOutput = await app.invoke({ messages: [new HumanMessage(query)],});console.log(agentOutput); { messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "what is the value of magic_function(3)?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what is the value of magic_function(3)?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: [ [Object] ], additional_kwargs: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: [Object] }, tool_calls: [ [Object] ], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ { type: "tool_use", id: "toolu_01WCezi2ywMPnRm1xbrXYPoB", name: "magic_function", input: [Object] } ], name: undefined, additional_kwargs: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, response_metadata: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, tool_calls: [ { name: "magic_function", args: [Object], id: "toolu_01WCezi2ywMPnRm1xbrXYPoB" } ], invalid_tool_calls: [] }, ToolMessage { lc_serializable: true, lc_kwargs: { name: "magic_function", content: "5", tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: "magic_function", additional_kwargs: {}, response_metadata: {}, tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB" }, AIMessage { lc_serializable: true, lc_kwargs: { content: "The value of magic_function(3) is 5.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: [Object] }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The value of magic_function(3) is 5.", name: undefined, additional_kwargs: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, response_metadata: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, tool_calls: [], invalid_tool_calls: [] } ]} const messageHistory = agentOutput.messages;const newQuery = "Pardon?";agentOutput = await app.invoke({ messages: [...messageHistory, new HumanMessage(newQuery)],}); { messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "what is the value of magic_function(3)?", 
additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what is the value of magic_function(3)?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: [ [Object] ], additional_kwargs: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: [Object] }, tool_calls: [ [Object] ], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ { type: "tool_use", id: "toolu_01WCezi2ywMPnRm1xbrXYPoB", name: "magic_function", input: [Object] } ], name: undefined, additional_kwargs: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, response_metadata: { id: "msg_015jSku8UgrtRQ2kNQuTsvi1", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, tool_calls: [ { name: "magic_function", args: [Object], id: "toolu_01WCezi2ywMPnRm1xbrXYPoB" } ], invalid_tool_calls: [] }, ToolMessage { lc_serializable: true, lc_kwargs: { name: "magic_function", content: "5", tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: "magic_function", additional_kwargs: {}, response_metadata: {}, tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB" }, AIMessage { lc_serializable: true, lc_kwargs: { content: "The value of magic_function(3) is 5.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: [Object] }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The value of magic_function(3) is 5.", name: undefined, additional_kwargs: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, response_metadata: { id: "msg_01FbyPvpxtczu2Cmd4vKcPQm", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Pardon?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Pardon?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" + "\n" + "1. You aske"... 52 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_012yLSnnf1c64NWKS9K58hcN", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: [Object] }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" + "\n" + "1. You aske"... 
52 more characters, name: undefined, additional_kwargs: { id: "msg_012yLSnnf1c64NWKS9K58hcN", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 455, output_tokens: 137 } }, response_metadata: { id: "msg_012yLSnnf1c64NWKS9K58hcN", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 455, output_tokens: 137 } }, tool_calls: [], invalid_tool_calls: [] } ]}

Prompt Templates[​](#prompt-templates "Direct link to Prompt Templates")
------------------------------------------------------------------------

With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent. With the LangGraph [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html), there is no prompt by default. You can achieve similar control over the agent in a few ways:

1. Pass in a system message as input
2. Initialize the agent with a system message
3. Initialize the agent with a function to transform messages before passing to the model.

Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.

First up, using LangChain's `AgentExecutor`:

```typescript
const spanishPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Respond only in Spanish."],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const spanishAgent = createToolCallingAgent({
  llm,
  tools,
  prompt: spanishPrompt,
});
const spanishAgentExecutor = new AgentExecutor({
  agent: spanishAgent,
  tools,
});

await spanishAgentExecutor.invoke({ input: query });
```

```
{
  input: "what is the value of magic_function(3)?",
  output: "El valor de magic_function(3) es 5."
}
```

Now, let's pass a custom system message to the [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html). This can either be a string or a LangChain `SystemMessage`.

```typescript
import { SystemMessage } from "@langchain/core/messages";

const systemMessage = "You are a helpful assistant. Respond only in Spanish.";

// This could also be a SystemMessage object
// const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish.");

const appWithSystemMessage = createReactAgent({
  llm,
  tools,
  messageModifier: systemMessage,
});

agentOutput = await appWithSystemMessage.invoke({
  messages: [new HumanMessage(query)],
});
agentOutput.messages[agentOutput.messages.length - 1];
```

```
AIMessage { lc_serializable: true, lc_kwargs: { content: "El valor de magic_function(3) es 5.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01P5VUYbBZoeMaReqBgqFJZa", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 444, output_tokens: 17 } }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "El valor de magic_function(3) es 5.", name: undefined, additional_kwargs: { id: "msg_01P5VUYbBZoeMaReqBgqFJZa", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 444, output_tokens: 17 } }, response_metadata: { id: "msg_01P5VUYbBZoeMaReqBgqFJZa", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 444, output_tokens: 17 } }, tool_calls: [], invalid_tool_calls: []}
```

We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages. We can do all types of arbitrary formatting of messages here. In this case, let's just add a `SystemMessage` to the start of the list of messages.

```typescript
import { BaseMessage, SystemMessage } from "@langchain/core/messages";

const modifyMessages = (messages: BaseMessage[]) => {
  return [
    new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
    ...messages,
    new HumanMessage("Also say 'Pandemonium!' after the answer."),
  ];
};

const appWithMessagesModifier = createReactAgent({
  llm,
  tools,
  messageModifier: modifyMessages,
});

agentOutput = await appWithMessagesModifier.invoke({
  messages: [new HumanMessage(query)],
});

console.log({
  input: query,
  output: agentOutput.messages[agentOutput.messages.length - 1].content,
});
```

```
{ input: "what is the value of magic_function(3)?", output: "5. ¡Pandemonium!" }
```

Memory[​](#memory "Direct link to Memory")
------------------------------------------

With LangChain's [`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html), you could add chat memory classes so it can engage in a multi-turn conversation.

```typescript
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const memory = new ChatMessageHistory();
const agentExecutorWithMemory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory: () => memory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

const config = { configurable: { sessionId: "test-session" } };

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "Hi, I'm polly! What's the output of magic_function of 3?" },
  config
);
console.log(agentOutput.output);

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "Remember my name?" },
  config
);
console.log("---");
console.log(agentOutput.output);
console.log("---");

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "what was that output again?" },
  config
);
console.log(agentOutput.output);
```

```
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the output is 5.
---
Okay, I remember your name is Polly.
---
So the output of the magic_function with an input of 3 is 5.
```

#### In LangGraph[​](#in-langgraph "Direct link to In LangGraph")

The equivalent to this type of memory in LangGraph is [persistence](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/) and [checkpointing](https://langchain-ai.github.io/langgraphjs/reference/interfaces/index.Checkpoint.html).

Add a `checkpointer` to the agent and you get chat memory for free. You'll also need to pass a `thread_id` within the `configurable` field in the `config` parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:

```typescript
import { MemorySaver } from "@langchain/langgraph";

const memory = new MemorySaver();
const appWithMemory = createReactAgent({
  llm,
  tools,
  checkpointSaver: memory,
});

const config = {
  configurable: {
    thread_id: "test-thread",
  },
};

agentOutput = await appWithMemory.invoke(
  {
    messages: [
      new HumanMessage(
        "Hi, I'm polly! What's the output of magic_function of 3?"
      ),
    ],
  },
  config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");

agentOutput = await appWithMemory.invoke(
  {
    messages: [new HumanMessage("Remember my name?")],
  },
  config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");

agentOutput = await appWithMemory.invoke(
  {
    messages: [new HumanMessage("what was that output again?")],
  },
  config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
```

```
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the magic_function returns 5.
---
Ah yes, I remember your name is Polly! It's nice to meet you Polly.
---
So the magic_function returned an output of 5 for an input of 3.
```

Iterating through steps[​](#iterating-through-steps "Direct link to Iterating through steps")
---------------------------------------------------------------------------------------------

With LangChain's [`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html), you could iterate over the steps using the [`stream`](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#stream) method:

```typescript
const langChainStream = await agentExecutor.stream({ input: query });

for await (const step of langChainStream) {
  console.log(step);
}
```

```
{ intermediateSteps: [ { action: { tool: "magic_function", toolInput: { input: 3 }, toolCallId: "toolu_01KCJJ8kyiY5LV4RHbVPzK8v", log: 'Invoking "magic_function" with {"input":3}\n' + '[{"type":"tool_use","id":"toolu_01KCJJ8kyiY5LV4RHbVPzK8v"'... 46 more characters, messageLog: [ [AIMessageChunk] ] }, observation: "5" } ]}
{ output: "The value of magic_function(3) is 5." }
```

#### In LangGraph[​](#in-langgraph-1 "Direct link to In LangGraph")

In LangGraph, this is handled natively: the agent's `stream` method yields an update for each agent and tool step as the graph runs.
```typescript
const langGraphStream = await app.stream(
  { messages: [new HumanMessage(query)] },
  { streamMode: "updates" }
);

for await (const step of langGraphStream) {
  console.log(step);
}
```

```
{ agent: { messages: [ AIMessage { lc_serializable: true, lc_kwargs: { content: [Array], additional_kwargs: [Object], tool_calls: [Array], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ [Object] ], name: undefined, additional_kwargs: { id: "msg_01WWYeJvJroT82QhJQZKdwSt", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: [Object] }, response_metadata: { id: "msg_01WWYeJvJroT82QhJQZKdwSt", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: [Object] }, tool_calls: [ [Object] ], invalid_tool_calls: [] } ] }}
{ tools: { messages: [ ToolMessage { lc_serializable: true, lc_kwargs: { name: "magic_function", content: "5", tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: "magic_function", additional_kwargs: {}, response_metadata: {}, tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C" } ] }}
{ agent: { messages: [ AIMessage { lc_serializable: true, lc_kwargs: { content: "The value of magic_function(3) is 5.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: [Object], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The value of magic_function(3) is 5.", name: undefined, additional_kwargs: { id: "msg_012kQPkxt2CrsFw4CsdfNTWr", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: [Object] }, response_metadata: { id: "msg_012kQPkxt2CrsFw4CsdfNTWr", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: [Object] }, tool_calls: [], invalid_tool_calls: [] } ] }}
```

`returnIntermediateSteps`[​](#returnintermediatesteps "Direct link to returnintermediatesteps")
-----------------------------------------------------------------------------------------------

Setting this parameter on `AgentExecutor` allows users to access `intermediateSteps`, which pairs agent actions (e.g., tool invocations) with their outcomes.

```typescript
const agentExecutorWithIntermediateSteps = new AgentExecutor({
  agent,
  tools,
  returnIntermediateSteps: true,
});

const result = await agentExecutorWithIntermediateSteps.invoke({
  input: query,
});
console.log(result.intermediateSteps);
```

```
[ { action: { tool: "magic_function", toolInput: { input: 3 }, toolCallId: "toolu_0126dJXbjwLC5daAScz8bw1k", log: 'Invoking "magic_function" with {"input":3}\n' + '[{"type":"tool_use","id":"toolu_0126dJXbjwLC5daAScz8bw1k"'... 46 more characters, messageLog: [ AIMessageChunk { lc_serializable: true, lc_kwargs: [Object], lc_namespace: [Array], content: [Array], name: undefined, additional_kwargs: [Object], response_metadata: {}, tool_calls: [Array], invalid_tool_calls: [], tool_call_chunks: [Array] } ] }, observation: "5" }]
```

By default the [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state.
agentOutput = await app.invoke({ messages: [new HumanMessage(query)],});console.log(agentOutput.messages); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "what is the value of magic_function(3)?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what is the value of magic_function(3)?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: [ { type: "tool_use", id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj", name: "magic_function", input: [Object] } ], additional_kwargs: { id: "msg_01BhXyjA2PTwGC5J3JNnfAXY", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, tool_calls: [ { name: "magic_function", args: [Object], id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj" } ], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ { type: "tool_use", id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj", name: "magic_function", input: { input: 3 } } ], name: undefined, additional_kwargs: { id: "msg_01BhXyjA2PTwGC5J3JNnfAXY", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, response_metadata: { id: "msg_01BhXyjA2PTwGC5J3JNnfAXY", model: "claude-3-haiku-20240307", stop_reason: "tool_use", stop_sequence: null, usage: { input_tokens: 365, output_tokens: 53 } }, tool_calls: [ { name: "magic_function", args: { input: 3 }, id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj" } ], invalid_tool_calls: [] }, ToolMessage { lc_serializable: true, lc_kwargs: { name: "magic_function", content: "5", tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "5", name: "magic_function", additional_kwargs: {}, response_metadata: {}, tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj" }, AIMessage { lc_serializable: true, lc_kwargs: { content: "The value of magic_function(3) is 5.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01ABtcXJ4CwMHphYYmffQZoF", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The value of magic_function(3) is 5.", name: undefined, additional_kwargs: { id: "msg_01ABtcXJ4CwMHphYYmffQZoF", type: "message", role: "assistant", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, response_metadata: { id: "msg_01ABtcXJ4CwMHphYYmffQZoF", model: "claude-3-haiku-20240307", stop_reason: "end_turn", stop_sequence: null, usage: { input_tokens: 431, output_tokens: 17 } }, tool_calls: [], invalid_tool_calls: [] }] `maxIterations`[​](#maxiterations "Direct link to maxiterations") ----------------------------------------------------------------- `AgentExecutor` implements a `maxIterations` parameter, whereas this is controlled via `recursionLimit` in LangGraph. Note that in the LangChain `AgentExecutor`, an “iteration” includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results. 
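To make that arithmetic concrete, here is a minimal sketch; the `toRecursionLimit` helper is our own illustration, not a LangGraph API:

```typescript
// Translate an AgentExecutor-style maxIterations value into a roughly
// equivalent LangGraph recursionLimit: each AgentExecutor iteration is one
// agent step plus one tool step (two LangGraph steps), plus one final
// agent step to produce the answer.
const toRecursionLimit = (maxIterations: number): number =>
  2 * maxIterations + 1;

console.log(toRecursionLimit(2)); // 5, matching the RECURSION_LIMIT used below
```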
If the recursion limit is reached, LangGraph raises a specific exception type that we can catch and manage in a similar way to `AgentExecutor`.

```typescript
const badMagicTool = tool(
  async ({ input }) => {
    return "Sorry, there was an error. Please try again.";
  },
  {
    name: "magic_function",
    description: "Applies a magic function to an input.",
    schema: z.object({
      input: z.string(),
    }),
  }
);

const badTools = [badMagicTool];

const spanishAgentExecutorWithMaxIterations = new AgentExecutor({
  agent: createToolCallingAgent({
    llm,
    tools: badTools,
    prompt: spanishPrompt,
  }),
  tools: badTools,
  verbose: true,
  maxIterations: 2,
});

await spanishAgentExecutorWithMaxIterations.invoke({ input: query });
```

```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "what is the value of magic_function(3)?"}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] Entering Chain run with input: { "input": "what is the value of magic_function(3)?", "steps": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] [0ms] Exiting Chain run with output: { "output": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] [1ms] Exiting Chain run with output: { "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] [1ms] Exiting Chain run with output: { "input": "what is the value of magic_function(3)?", "steps": [], "agent_scratchpad": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "what is the value of magic_function(3)?", "steps": [], "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] [0ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant. Respond only in Spanish.", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "what is the value of magic_function(3)?", "additional_kwargs": {}, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant.
Respond only in Spanish.", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "what is the value of magic_function(3)?", "additional_kwargs": {}, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] [1.56s] Exiting LLM run with output: { "generations": [ [ { "text": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.", "additional_kwargs": { "id": "msg_011b4GnLtiCRnCzZiqUBAZeH", "type": "message", "role": "assistant", "model": "claude-3-haiku-20240307", "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 378, "output_tokens": 59 } }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.", "additional_kwargs": { "id": "msg_011b4GnLtiCRnCzZiqUBAZeH", "type": "message", "role": "assistant", "model": "claude-3-haiku-20240307", "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 378, "output_tokens": 59 } }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] [0ms] Exiting Chain run with output: { "returnValues": { "output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica." }, "log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] [1.56s] Exiting Chain run with output: { "returnValues": { "output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica." }, "log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."}[chain/end] [1:chain:AgentExecutor] [1.56s] Exiting Chain run with output: { "input": "what is the value of magic_function(3)?", "output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. 
Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."}
```

```
{
  input: "what is the value of magic_function(3)?",
  output: 'Lo siento, pero la función "magic_function" espera un parámetro de tipo "string", no un número enter'... 103 more characters
}
```

```typescript
import { GraphRecursionError } from "@langchain/langgraph";

const RECURSION_LIMIT = 2 * 2 + 1;
const appWithBadTools = createReactAgent({ llm, tools: badTools });

try {
  await appWithBadTools.invoke(
    {
      messages: [new HumanMessage(query)],
    },
    {
      recursionLimit: RECURSION_LIMIT,
    }
  );
} catch (e) {
  if (e instanceof GraphRecursionError) {
    console.log("Recursion limit reached.");
  } else {
    throw e;
  }
}
```

```
Recursion limit reached.
```
https://js.langchain.com/v0.2/docs/how_to/message_history
How to add message history
==========================

Prerequisites

This guide assumes familiarity with the following concepts:

* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Configuring chain parameters at runtime](/v0.2/docs/how_to/binding)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat Messages](/v0.2/docs/concepts/#message-types)

The `RunnableWithMessageHistory` class lets us add message history to certain types of chains. Specifically, it can be used for any Runnable that takes as input one of

* a sequence of [`BaseMessages`](/v0.2/docs/concepts/#message-types)
* a dict with a key that takes a sequence of `BaseMessage`
* a dict with a key that takes the latest message(s) as a string or sequence of `BaseMessage`, and a separate key that takes historical messages

And returns as output one of

* a string that can be treated as the contents of an `AIMessage`
* a sequence of `BaseMessage`
* a dict with a key that contains a sequence of `BaseMessage`

Let's take a look at some examples to see how it works.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

We'll use Upstash to store our chat message histories and an Anthropic chat model (`claude-3-sonnet-20240229` in the examples below), so we'll need to install the following dependencies:

```bash
# npm
npm install @langchain/anthropic @langchain/community @upstash/redis

# Yarn
yarn add @langchain/anthropic @langchain/community @upstash/redis

# pnpm
pnpm add @langchain/anthropic @langchain/community @upstash/redis
```

You'll need to set an environment variable for `ANTHROPIC_API_KEY` and grab your Upstash REST URL and secret token.

### [LangSmith](https://smith.langchain.com/)[​](#langsmith "Direct link to langsmith")

LangSmith is especially useful for something like message history injection, where it can otherwise be hard to understand what the inputs are to the various parts of the chain.

Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to uncomment the below and set your environment variables to start logging traces:

```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-api-key>"
```

Let's create a simple runnable that takes a dict as input and returns a `BaseMessage`. In this case the `"question"` key in the input represents our input message, and the `"history"` key is where our historical messages will be injected.

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";
import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";
// For demos, you can also use an in-memory store:
// import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You're an assistant who's good at {ability}"],
  new MessagesPlaceholder("history"),
  ["human", "{question}"],
]);

const chain = prompt.pipe(
  new ChatAnthropic({ model: "claude-3-sonnet-20240229" })
);
```

### Adding message history[​](#adding-message-history "Direct link to Adding message history")

To add message history to our original chain we wrap it in the `RunnableWithMessageHistory` class. Crucially, we also need to define a `getMessageHistory()` method that takes a `sessionId` string and based on it returns a `BaseChatMessageHistory`.
Given the same `sessionId`, this function should return an equivalent message history. In this case, we'll also want to specify `inputMessagesKey` (the key to be treated as the latest input message) and `historyMessagesKey` (the key to add historical messages to). import { RunnableWithMessageHistory } from "@langchain/core/runnables";const chainWithHistory = new RunnableWithMessageHistory({ runnable: chain, getMessageHistory: (sessionId) => new UpstashRedisChatMessageHistory({ sessionId, config: { url: process.env.UPSTASH_REDIS_REST_URL!, token: process.env.UPSTASH_REDIS_REST_TOKEN!, }, }), inputMessagesKey: "question", historyMessagesKey: "history",}); Invoking with config[​](#invoking-with-config "Direct link to Invoking with config") ------------------------------------------------------------------------------------ Whenever we call our chain with message history, we need to include an additional config object that contains the `sessionId`: { configurable: { sessionId: "<SESSION_ID>" } } Given the same configuration, our chain should pull from the same chat message history. const result = await chainWithHistory.invoke( { ability: "math", question: "What does cosine mean?", }, { configurable: { sessionId: "foobarbaz", }, });console.log(result);/* AIMessage { content: 'Cosine refers to one of the basic trigonometric functions. Specifically:\n' + '\n' + '- Cosine is one of the three main trigonometric functions, along with sine and tangent. It is often abbreviated as cos.\n' + '\n' + '- For a right triangle with sides a, b, and c (where c is the hypotenuse), cosine represents the ratio of the length of the adjacent side (a) to the length of the hypotenuse (c). So cos(A) = a/c, where A is the angle opposite side a.\n' + '\n' + '- On the Cartesian plane, cosine represents the x-coordinate of a point on the unit circle for a given angle. So if you take an angle θ on the unit circle, the cosine of θ gives you the x-coordinate of where the terminal side of that angle intersects the circle.\n' + '\n' + '- The cosine function has a periodic waveform that oscillates between 1 and -1. Its graph forms a cosine wave.\n' + '\n' + 'So in essence, cosine helps relate an angle in a right triangle to the ratio of two of its sides. Along with sine and tangent, it is foundational to trigonometry and mathematical modeling of periodic functions.', name: undefined, additional_kwargs: { id: 'msg_01QnnAkKEz7WvhJrwLWGbLBm', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null } }*/const result2 = await chainWithHistory.invoke( { ability: "math", question: "What's its inverse?", }, { configurable: { sessionId: "foobarbaz", }, });console.log(result2);/* AIMessage { content: 'The inverse of the cosine function is the arcsine or inverse sine function, often written as sin−1(x) or sin^{-1}(x).\n' + '\n' + 'Some key properties of the inverse cosine function:\n' + '\n' + '- It accepts values between -1 and 1 as inputs and returns angles from 0 to π radians (0 to 180 degrees). This is the inverse of the regular cosine function, which takes angles and returns the cosine ratio.\n' + '\n' + '- It is also called cos−1(x) or cos^{-1}(x) (read as "cosine inverse of x").\n' + '\n' + '- The notation sin−1(x) is usually preferred over cos−1(x) since it relates more directly to the unit circle definition of cosine. sin−1(x) gives the angle whose sine equals x.\n' + '\n' + '- The arcsine function is one-to-one on the domain [-1, 1].
This means every output angle maps back to exactly one input ratio x. This one-to-one mapping is what makes it the mathematical inverse of cosine.\n' + '\n' + 'So in summary, arcsine or inverse sine, written as sin−1(x) or sin^{-1}(x), gives you the angle whose cosine evaluates to the input x, undoing the cosine function. It is used throughout trigonometry and calculus.', additional_kwargs: { id: 'msg_01PYRhpoUudApdJvxug6R13W', type: 'message', role: 'assistant', model: 'claude-3-sonnet-20240229', stop_reason: 'end_turn', stop_sequence: null } }*/ tip [LangSmith trace](https://smith.langchain.com/public/50377a89-d0b8-413b-8cd7-8e6618835e00/r) Looking at the LangSmith trace for the second call, we can see that when constructing the prompt, a "history" variable has been injected, which is a list of two messages (our first input and first output).
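As a supplement to the Upstash example above (not part of the original walkthrough), here is a minimal sketch using the in-memory `ChatMessageHistory` mentioned in the commented-out import, which can be handy for demos and tests. The `histories` map and `demoChainWithHistory` name are illustrative, and the sketch assumes the `chain` defined earlier on this page:

```typescript
// Supplementary sketch: in-memory message history for demos.
// Assumes the `chain` defined earlier on this page.
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

// Keep one history per session id for the lifetime of the process.
const histories: Record<string, ChatMessageHistory> = {};

const demoChainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) => {
    if (histories[sessionId] === undefined) {
      histories[sessionId] = new ChatMessageHistory();
    }
    return histories[sessionId];
  },
  inputMessagesKey: "question",
  historyMessagesKey: "history",
});

await demoChainWithHistory.invoke(
  { ability: "math", question: "What does cosine mean?" },
  { configurable: { sessionId: "demo-session" } }
);
```

Because the histories live in process memory, they disappear on restart; for anything beyond a demo, prefer a persistent store like the Upstash Redis history shown above.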
https://js.langchain.com/v0.2/docs/how_to/multi_vector
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to generate multiple embeddings per document On this page How to generate multiple embeddings per document ================================================ Prerequisites This guide assumes familiarity with the following concepts: * [Retrievers](/v0.2/docs/concepts/#retrievers) * [Text splitters](/v0.2/docs/concepts/#text-splitters) * [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag) Embedding different representations of an original document, then returning the original document when any of the representations results in a search hit, can allow you to tune and improve your retrieval performance. LangChain has a base [`MultiVectorRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) designed to do just this! A lot of the complexity lies in how to create the multiple vectors per document. This guide covers some of the common ways to create those vectors and use the `MultiVectorRetriever`. Some methods to create multiple vectors per document include: * smaller chunks: split a document into smaller chunks, and embed those (e.g. the [`ParentDocumentRetriever`](/v0.2/docs/how_to/parent_document_retriever)) * summary: create a summary for each document, embed that along with (or instead of) the document * hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document Note that this also enables another method of adding embeddings: manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control (see the short sketch at the end of this guide). Smaller chunks[​](#smaller-chunks "Direct link to Smaller chunks") ------------------------------------------------------------------ Oftentimes it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows the embeddings to capture the semantic meaning as closely as possible, while passing as much context as possible downstream. NOTE: this is what the `ParentDocumentRetriever` does. Here we show what is going on under the hood. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import * as uuid from "uuid";import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";import { FaissStore } from "@langchain/community/vectorstores/faiss";import { OpenAIEmbeddings } from "@langchain/openai";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { InMemoryStore } from "@langchain/core/stores";import { TextLoader } from "langchain/document_loaders/fs/text";import { Document } from "@langchain/core/documents";const textLoader = new TextLoader("../examples/state_of_the_union.txt");const parentDocuments = await textLoader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10000, chunkOverlap: 20,});const docs = await splitter.splitDocuments(parentDocuments);const idKey = "doc_id";const docIds = docs.map((_) => uuid.v4());const childSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 400, chunkOverlap: 0,});const subDocs = [];for (let i = 0; i < docs.length; i += 1) { const childDocs = await childSplitter.splitDocuments([docs[i]]); const taggedChildDocs = childDocs.map((childDoc) => { // eslint-disable-next-line no-param-reassign childDoc.metadata[idKey] = docIds[i]; return childDoc; }); subDocs.push(...taggedChildDocs);}// The byteStore to use to store the original chunksconst byteStore = new InMemoryStore<Uint8Array>();// The vectorstore to use to index the child chunksconst vectorstore = await FaissStore.fromDocuments( subDocs, new OpenAIEmbeddings());const retriever = new MultiVectorRetriever({ vectorstore, byteStore, idKey, // Optional `k` parameter to search for more child documents in VectorStore. // Note that this does not exactly correspond to the number of final (parent) documents // retrieved, as multiple child documents can point to the same parent. childK: 20, // Optional `k` parameter to limit number of final, parent documents returned from this // retriever and sent to LLM. This is an upper-bound, and the final count may be lower than this. 
parentK: 5,});const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [ docIds[i], originalDoc,]);// Use the retriever to add the original chunks to the document storeawait retriever.docstore.mset(keyValuePairs);// Vectorstore alone retrieves the small chunksconst vectorstoreResult = await retriever.vectorstore.similaritySearch( "justice breyer");console.log(vectorstoreResult[0].pageContent.length);/* 390*/// Retriever returns larger resultconst retrieverResult = await retriever.invoke("justice breyer");console.log(retrieverResult[0].pageContent.length);/* 9770*/ #### API Reference: * [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector` * [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Summary[​](#summary "Direct link to Summary") --------------------------------------------- Oftentimes a summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those. 
import * as uuid from "uuid";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";import { FaissStore } from "@langchain/community/vectorstores/faiss";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { InMemoryStore } from "@langchain/core/stores";import { TextLoader } from "langchain/document_loaders/fs/text";import { PromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnableSequence } from "@langchain/core/runnables";import { Document } from "@langchain/core/documents";const textLoader = new TextLoader("../examples/state_of_the_union.txt");const parentDocuments = await textLoader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10000, chunkOverlap: 20,});const docs = await splitter.splitDocuments(parentDocuments);const chain = RunnableSequence.from([ { content: (doc: Document) => doc.pageContent }, PromptTemplate.fromTemplate(`Summarize the following document:\n\n{content}`), new ChatOpenAI({ maxRetries: 0, }), new StringOutputParser(),]);const summaries = await chain.batch(docs, { maxConcurrency: 5,});const idKey = "doc_id";const docIds = docs.map((_) => uuid.v4());const summaryDocs = summaries.map((summary, i) => { const summaryDoc = new Document({ pageContent: summary, metadata: { [idKey]: docIds[i], }, }); return summaryDoc;});// The byteStore to use to store the original chunksconst byteStore = new InMemoryStore<Uint8Array>();// The vectorstore to use to index the child chunksconst vectorstore = await FaissStore.fromDocuments( summaryDocs, new OpenAIEmbeddings());const retriever = new MultiVectorRetriever({ vectorstore, byteStore, idKey,});const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [ docIds[i], originalDoc,]);// Use the retriever to add the original chunks to the document storeawait retriever.docstore.mset(keyValuePairs);// We could also add the original chunks to the vectorstore if we wish// const taggedOriginalDocs = docs.map((doc, i) => {// doc.metadata[idKey] = docIds[i];// return doc;// });// retriever.vectorstore.addDocuments(taggedOriginalDocs);// Vectorstore alone retrieves the small chunksconst vectorstoreResult = await retriever.vectorstore.similaritySearch( "justice breyer");console.log(vectorstoreResult[0].pageContent.length);/* 1118*/// Retriever returns larger resultconst retrieverResult = await retriever.invoke("justice breyer");console.log(retrieverResult[0].pageContent.length);/* 9770*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector` * [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * 
[TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Hypothetical queries[​](#hypothetical-queries "Direct link to Hypothetical queries") ------------------------------------------------------------------------------------ An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded and used to retrieve the original document: import * as uuid from "uuid";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";import { FaissStore } from "@langchain/community/vectorstores/faiss";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { InMemoryStore } from "@langchain/core/stores";import { TextLoader } from "langchain/document_loaders/fs/text";import { PromptTemplate } from "@langchain/core/prompts";import { RunnableSequence } from "@langchain/core/runnables";import { Document } from "@langchain/core/documents";import { JsonKeyOutputFunctionsParser } from "@langchain/core/output_parsers/openai_functions";const textLoader = new TextLoader("../examples/state_of_the_union.txt");const parentDocuments = await textLoader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10000, chunkOverlap: 20,});const docs = await splitter.splitDocuments(parentDocuments);const functionsSchema = [ { name: "hypothetical_questions", description: "Generate hypothetical questions", parameters: { type: "object", properties: { questions: { type: "array", items: { type: "string", }, }, }, required: ["questions"], }, },];const functionCallingModel = new ChatOpenAI({ maxRetries: 0, model: "gpt-4",}).bind({ functions: functionsSchema, function_call: { name: "hypothetical_questions" },});const chain = RunnableSequence.from([ { content: (doc: Document) => doc.pageContent }, PromptTemplate.fromTemplate( `Generate a list of 3 hypothetical questions that the below document could be used to answer:\n\n{content}` ), functionCallingModel, new JsonKeyOutputFunctionsParser<string[]>({ attrName: "questions" }),]);const hypotheticalQuestions = await chain.batch(docs, { maxConcurrency: 5,});const idKey = "doc_id";const docIds = docs.map((_) => uuid.v4());const hypotheticalQuestionDocs = hypotheticalQuestions .map((questionArray, i) => { const questionDocuments = questionArray.map((question) => { const questionDocument = new Document({ pageContent: question, metadata: { [idKey]: docIds[i], }, }); return questionDocument; }); return questionDocuments; }) .flat();// The byteStore to use to store the original chunksconst byteStore = new InMemoryStore<Uint8Array>();// The vectorstore to use to index the child chunksconst vectorstore = await FaissStore.fromDocuments( hypotheticalQuestionDocs, new OpenAIEmbeddings());const retriever = new MultiVectorRetriever({ vectorstore, 
byteStore, idKey,});const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [ docIds[i], originalDoc,]);// Use the retriever to add the original chunks to the document storeawait retriever.docstore.mset(keyValuePairs);// We could also add the original chunks to the vectorstore if we wish// const taggedOriginalDocs = docs.map((doc, i) => {// doc.metadata[idKey] = docIds[i];// return doc;// });// retriever.vectorstore.addDocuments(taggedOriginalDocs);// Vectorstore alone retrieves the small chunksconst vectorstoreResult = await retriever.vectorstore.similaritySearch( "justice breyer");console.log(vectorstoreResult[0].pageContent);/* "What measures will be taken to crack down on corporations overcharging American businesses and consumers?"*/// Retriever returns larger resultconst retrieverResult = await retriever.invoke("justice breyer");console.log(retrieverResult[0].pageContent.length);/* 9770*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector` * [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [JsonKeyOutputFunctionsParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers_openai_functions.JsonKeyOutputFunctionsParser.html) from `@langchain/core/output_parsers/openai_functions` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned a few ways to generate multiple embeddings per document. Next, check out the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
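Finally, returning to the manual method mentioned at the top of this guide: since the retriever matches child documents to parents via the `idKey` metadata field, you can embed handwritten queries yourself and point them at an existing parent document. A hedged sketch (the question text and the choice of `docIds[0]` are illustrative), reusing `retriever`, `idKey`, and `docIds` from the examples above:

```typescript
// Hedged sketch: manually embed a handwritten query and point it at an
// existing parent document via the shared `idKey` metadata field.
import { Document } from "@langchain/core/documents";

const manualQueryDoc = new Document({
  pageContent: "What did the speaker say about Justice Breyer?", // illustrative
  metadata: { [idKey]: docIds[0] }, // reuse an existing parent doc id
});

// Any vector search that hits this handwritten query will now cause the
// retriever to return the full parent document it points at.
await retriever.vectorstore.addDocuments([manualQueryDoc]);
```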
https://js.langchain.com/v0.2/docs/how_to/multimodal_prompts
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use multimodal prompts How to use multimodal prompts ============================= Here we demonstrate how to use prompt templates to format multimodal inputs to models. In this example we will ask a model to describe an image. Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) * [LangChain Tools](/v0.2/docs/concepts/#tools) * npm * yarn * pnpm npm i axios @langchain/core @langchain/openai yarn add axios @langchain/core @langchain/openai pnpm add axios @langchain/core @langchain/openai import axios from "axios";const imageUrl = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";const axiosRes = await axios.get(imageUrl, { responseType: "arraybuffer" });const base64 = btoa( new Uint8Array(axiosRes.data).reduce( (data, byte) => data + String.fromCharCode(byte), "" )); import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4o" }); const prompt = ChatPromptTemplate.fromMessages([ ["system", "Describe the image provided"], [ "user", [{ type: "image_url", image_url: "data:image/jpeg;base64,{base64}" }], ],]); const chain = prompt.pipe(model); const response = await chain.invoke({ base64 });console.log(response.content); The image depicts a scenic outdoor landscape featuring a wooden boardwalk path extending forward through a large field of green grass and vegetation. On either side of the path, the grass is lush and vibrant, with a variety of bushes and low shrubs visible as well. The sky overhead is expansive and mostly clear, adorned with soft, wispy clouds, illuminated by the light giving a warm and serene ambiance. In the distant background, there are clusters of trees and additional foliage, suggesting a natural and tranquil setting, ideal for a peaceful walk or nature exploration. We can also pass in multiple images. const prompt = ChatPromptTemplate.fromMessages([ ["system", "compare the two pictures provided"], [ "user", [ { type: "image_url", image_url: "data:image/jpeg;base64,{imageData1}", }, { type: "image_url", image_url: "data:image/jpeg;base64,{imageData2}", }, ], ],]); const chain = prompt.pipe(model); const response = await chain.invoke({ imageData1: base64, imageData2: base64 });console.log(response.content); The two images provided are identical. Both show a wooden boardwalk path extending into a grassy field under a blue sky with scattered clouds. The scenery includes green shrubs and trees in the background, with a bright and clear sky above.
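As a supplementary sketch (not part of the original page), the same templated data URL also works with an image read from a local file rather than fetched with axios, mirroring the pattern from the multimodal inputs guide. The `./hotdog.jpg` path and the `local*` names are illustrative assumptions:

```typescript
// Supplementary sketch: read the image from a local file instead of
// fetching it over HTTP. The file path is an illustrative assumption.
import * as fs from "node:fs/promises";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const localImage = await fs.readFile("./hotdog.jpg");
const localBase64 = localImage.toString("base64");

const localPrompt = ChatPromptTemplate.fromMessages([
  ["system", "Describe the image provided"],
  [
    "user",
    [{ type: "image_url", image_url: "data:image/jpeg;base64,{base64}" }],
  ],
]);

const localChain = localPrompt.pipe(new ChatOpenAI({ model: "gpt-4o" }));
const localResponse = await localChain.invoke({ base64: localBase64 });
console.log(localResponse.content);
```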
https://js.langchain.com/v0.2/docs/how_to/qa_sources
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to return sources On this page How to return sources ===================== Prerequisites This guide assumes familiarity with the following: * [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/) Often in Q&A applications it’s important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation. We’ll be using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng as the retrieval content for this guide. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Dependencies[​](#dependencies "Direct link to Dependencies") We’ll use an OpenAI chat model and embeddings and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers). We’ll use the following packages: npm install --save langchain @langchain/openai cheerio We need to set the environment variable `OPENAI_API_KEY`: export OPENAI_API_KEY=YOUR_KEY ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/). Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=YOUR_KEY Chain without sources[​](#chain-without-sources "Direct link to Chain without sources") --------------------------------------------------------------------------------------- Here is the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Quickstart](/v0.2/docs/tutorials/qa_chat_history/).
import "cheerio";import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { formatDocumentsAsString } from "langchain/util/document";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = RunnableSequence.from([ { context: retriever.pipe(formatDocumentsAsString), question: new RunnablePassthrough(), }, prompt, llm, new StringOutputParser(),]); Let’s see what this prompt actually looks like: console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n")); You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.Question: {question}Context: {context}Answer: await ragChain.invoke("What is task decomposition?"); "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. T"... 254 more characters Adding sources[​](#adding-sources "Direct link to Adding sources") ------------------------------------------------------------------ With LCEL, we can easily pass the retrieved documents through the chain and return them in the final response: import { RunnableMap, RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { formatDocumentsAsString } from "langchain/util/document";const ragChainWithSources = RunnableMap.from({ // Return raw documents here for now since we want to return them at // the end - we'll format in the next step of the chain context: retriever, question: new RunnablePassthrough(),}).assign({ answer: RunnableSequence.from([ (input) => { return { // Now we format the documents as strings for the prompt context: formatDocumentsAsString(input.context), question: input.question, }; }, prompt, llm, new StringOutputParser(), ]),});await ragChainWithSources.invoke("What is Task Decomposition"); { question: "What is Task Decomposition", context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 
887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + "                \n" + "                Component One: Planning\n" + "                "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Resources:\n" + "1. Internet access for searches and information gathering.\n" + "2. Long Term memory management"... 456 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ], answer: "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps fo"... 230 more characters} Check out the [LangSmith trace](https://smith.langchain.com/public/c3753531-563c-40d4-a6bf-21bfe8741d10/r) here to see the internals of the chain. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to return sources from your QA chains. Next, check out some of the other guides around RAG, such as [how to stream responses](/v0.2/docs/how_to/qa_streaming).
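If you want lighter-weight citations than full `Document` objects, one option (a hedged sketch, not from the original guide) is to pipe the output through a final formatting step that keeps only each document's `metadata.source`. The `ragChainWithSourceUrls` name is illustrative, and the sketch reuses `ragChainWithSources` from above:

```typescript
// Hedged sketch: keep only each retrieved document's source URL in the
// final payload instead of the full page content.
import { Document } from "@langchain/core/documents";

const ragChainWithSourceUrls = ragChainWithSources.pipe(
  (input: { question: string; context: Document[]; answer: string }) => ({
    question: input.question,
    answer: input.answer,
    // Deduplicate in case several chunks come from the same page.
    sources: [...new Set(input.context.map((doc) => doc.metadata.source))],
  })
);

const output = await ragChainWithSourceUrls.invoke("What is Task Decomposition");
console.log(output.sources);
// e.g. [ "https://lilianweng.github.io/posts/2023-06-23-agent/" ]
```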
https://js.langchain.com/v0.2/docs/how_to/multimodal_inputs
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass multimodal data directly to models On this page How to pass multimodal data directly to models ============================================== Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) Here we demonstrate how to pass multimodal input directly to models. We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision). For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format. In this example we will ask a model to describe an image. import * as fs from "node:fs/promises";import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229",});const imageData = await fs.readFile("../../../../examples/hotdog.jpg"); For models that support multimodal input, the most broadly supported way to pass in images is as a base64-encoded string within a message with a complex content type. Here’s an example: import { HumanMessage } from "@langchain/core/messages";const message = new HumanMessage({ content: [ { type: "text", text: "what does this image contain?", }, { type: "image_url", image_url: { url: `data:image/jpeg;base64,${imageData.toString("base64")}`, }, }, ],});const response = await model.invoke([message]);console.log(response.content); This image contains a hot dog. It shows a frankfurter or sausage encased in a soft, elongated bread bun. The sausage itself appears to be reddish in color, likely a smoked or cured variety. The bun is a golden-brown color, suggesting it has been lightly toasted or grilled. The hot dog is presented against a plain white background, allowing the details of the iconic American fast food item to be clearly visible. Some model providers support taking an HTTP URL to the image directly in a content block of type `"image_url"`: import { ChatOpenAI } from "@langchain/openai";const openAIModel = new ChatOpenAI({ model: "gpt-4o",});const imageUrl = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";const message = new HumanMessage({ content: [ { type: "text", text: "describe the weather in this image", }, { type: "image_url", image_url: { url: imageUrl }, }, ],});const response = await openAIModel.invoke([message]);console.log(response.content); The weather in the image appears to be pleasant and clear. The sky is mostly blue with a few scattered clouds, indicating good visibility and no immediate signs of rain. The lighting suggests it’s either morning or late afternoon, with sunlight creating a warm and bright atmosphere. There is no indication of strong winds, as the grass and foliage appear calm and undisturbed. Overall, it looks like a beautiful day, possibly spring or summer, ideal for outdoor activities. We can also pass in multiple images. const message = new HumanMessage({ content: [ { type: "text", text: "are these two images the same?", }, { type: "image_url", image_url: { url: imageUrl, }, }, { type: "image_url", image_url: { url: imageUrl, }, }, ],});const response = await openAIModel.invoke([message]);console.log(response.content); Yes, the two images are the same.
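The two approaches can also be mixed: OpenAI's vision-capable models accept base64 data URLs as well as HTTP URLs, so the same local file can be sent to either provider. A minimal supplementary sketch (not from the original page), reusing `imageData`, `HumanMessage`, and `openAIModel` from above:

```typescript
// Supplementary sketch: the base64 data URL pattern from the Anthropic
// example, sent to the OpenAI model defined above instead.
const base64Message = new HumanMessage({
  content: [
    { type: "text", text: "what does this image contain?" },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const base64Response = await openAIModel.invoke([base64Message]);
console.log(base64Response.content);
```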
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to pass multimodal data to a model. Next, you can check out our guide on [multimodal tool calls](/v0.2/docs/how_to/tool_calls_multimodal).
https://js.langchain.com/v0.2/docs/how_to/custom_chat
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to create a custom chat model class On this page How to create a custom chat model class ======================================= Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) This notebook goes over how to create a custom chat model wrapper, in case you want to use your own chat model or a different wrapper than one that is directly supported in LangChain. There are a few required things that a chat model needs to implement after extending the [`SimpleChatModel` class](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.SimpleChatModel.html): * A `_call` method that takes in a list of messages and call options (which includes things like `stop` sequences), and returns a string. * A `_llmType` method that returns a string. Used for logging purposes only. You can also implement the following optional method: * A `_streamResponseChunks` method that returns an `AsyncGenerator` and yields [`ChatGenerationChunks`](https://v02.api.js.langchain.com/classes/langchain_core_outputs.ChatGenerationChunk.html). This allows the LLM to support streaming outputs. Let's implement a very simple custom chat model that just echoes back the first `n` characters of the input. import { SimpleChatModel, type BaseChatModelParams,} from "@langchain/core/language_models/chat_models";import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";import { AIMessageChunk, type BaseMessage } from "@langchain/core/messages";import { ChatGenerationChunk } from "@langchain/core/outputs";export interface CustomChatModelInput extends BaseChatModelParams { n: number;}export class CustomChatModel extends SimpleChatModel { n: number; constructor(fields: CustomChatModelInput) { super(fields); this.n = fields.n; } _llmType() { return "custom"; } async _call( messages: BaseMessage[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): Promise<string> { if (!messages.length) { throw new Error("No messages provided."); } // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); if (typeof messages[0].content !== "string") { throw new Error("Multimodal messages are not supported."); } return messages[0].content.slice(0, this.n); } async *_streamResponseChunks( messages: BaseMessage[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): AsyncGenerator<ChatGenerationChunk> { if (!messages.length) { throw new Error("No messages provided."); } if (typeof messages[0].content !== "string") { throw new Error("Multimodal messages are not supported."); } // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); for (const letter of messages[0].content.slice(0, this.n)) { yield new ChatGenerationChunk({ message: new AIMessageChunk({ content: letter, }), text: letter, }); // Trigger the appropriate callback for new chunks await runManager?.handleLLMNewToken(letter); } }} We can now use this as any other chat model: const chatModel = new CustomChatModel({ n: 4 });await chatModel.invoke([["human", "I am an LLM"]]); AIMessage { content: 'I am', additional_kwargs: {}} And support streaming: const stream = await chatModel.stream([["human", "I am an LLM"]]);for await (const chunk of stream) { console.log(chunk);} AIMessageChunk { content: 'I', 
additional_kwargs: {}}AIMessageChunk { content: ' ', additional_kwargs: {}}AIMessageChunk { content: 'a', additional_kwargs: {}}AIMessageChunk { content: 'm', additional_kwargs: {}} Richer outputs[​](#richer-outputs "Direct link to Richer outputs") ------------------------------------------------------------------ If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseChatModel`](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html) class and implement the lower level `_generate` method. It also takes a list of `BaseMessage`s as input, but requires you to construct and return a `ChatResult` object containing `ChatGeneration`s, which permits additional metadata. Here's an example: import { AIMessage, BaseMessage } from "@langchain/core/messages";import { ChatResult } from "@langchain/core/outputs";import { BaseChatModel, BaseChatModelCallOptions, BaseChatModelParams,} from "@langchain/core/language_models/chat_models";import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";export interface AdvancedCustomChatModelOptions extends BaseChatModelCallOptions {}export interface AdvancedCustomChatModelParams extends BaseChatModelParams { n: number;}export class AdvancedCustomChatModel extends BaseChatModel<AdvancedCustomChatModelOptions> { n: number; static lc_name(): string { return "AdvancedCustomChatModel"; } constructor(fields: AdvancedCustomChatModelParams) { super(fields); this.n = fields.n; } async _generate( messages: BaseMessage[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): Promise<ChatResult> { if (!messages.length) { throw new Error("No messages provided."); } if (typeof messages[0].content !== "string") { throw new Error("Multimodal messages are not supported."); } // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); const content = messages[0].content.slice(0, this.n); const tokenUsage = { usedTokens: this.n, }; return { generations: [{ message: new AIMessage({ content }), text: content }], llmOutput: { tokenUsage }, }; } _llmType(): string { return "advanced_custom_chat_model"; }} This will pass the additional returned information in callback events and in the `streamEvents` method: const chatModel = new AdvancedCustomChatModel({ n: 4 });const eventStream = await chatModel.streamEvents([["human", "I am an LLM"]], { version: "v1",});for await (const event of eventStream) { if (event.event === "on_llm_end") { console.log(JSON.stringify(event, null, 2)); }} { "event": "on_llm_end", "name": "AdvancedCustomChatModel", "run_id": "b500b98d-bee5-4805-9b92-532a491f5c70", "tags": [], "metadata": {}, "data": { "output": { "generations": [ [ { "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessage" ], "kwargs": { "content": "I am", "additional_kwargs": {} } }, "text": "I am" } ] ], "llmOutput": { "tokenUsage": { "usedTokens": 4 } } } }} Tracing (advanced)[​](#tracing-advanced "Direct link to Tracing (advanced)") ---------------------------------------------------------------------------- If you are implementing a custom chat model and want to use it with a tracing service like [LangSmith](https://smith.langchain.com/), you can automatically log params used for a given invocation by implementing the `invocationParams()` method on the model.
This method is purely optional, but anything it returns will be logged as metadata for the trace. Here's one pattern you might use: export interface CustomChatModelOptions extends BaseChatModelCallOptions { // Some required or optional inner args tools: Record<string, any>[];}export interface CustomChatModelParams extends BaseChatModelParams { temperature: number;}export class CustomChatModel extends BaseChatModel<CustomChatModelOptions> { temperature: number; static lc_name(): string { return "CustomChatModel"; } constructor(fields: CustomChatModelParams) { super(fields); this.temperature = fields.temperature; } // Anything returned in this method will be logged as metadata in the trace. // It is common to pass it any options used to invoke the function. invocationParams(options?: this["ParsedCallOptions"]) { return { tools: options?.tools, temperature: this.temperature, }; } async _generate( messages: BaseMessage[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): Promise<ChatResult> { if (!messages.length) { throw new Error("No messages provided."); } if (typeof messages[0].content !== "string") { throw new Error("Multimodal messages are not supported."); } const additionalParams = this.invocationParams(options); // `someAPIRequest` stands in for a call to your model's real API const content = await someAPIRequest(messages, additionalParams); return { generations: [{ message: new AIMessage({ content }), text: content }], llmOutput: {}, }; } _llmType(): string { return "custom_chat_model"; }}
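Note that `someAPIRequest` above is deliberately left undefined as a stand-in for your model's real API. As a purely hypothetical sketch (the stub, the tool shape, and the output are illustrative and not part of the original guide), you could stub it out to exercise the pattern end to end:

```typescript
// Purely hypothetical stub standing in for a real model API, so the
// tracing pattern above can be exercised end to end.
import { BaseMessage } from "@langchain/core/messages";

const someAPIRequest = async (
  messages: BaseMessage[],
  params: Record<string, any>
): Promise<string> => {
  return `Echo (temperature=${params.temperature}): ${messages[0].content}`;
};

const tracedModel = new CustomChatModel({ temperature: 0.1 });
// The `tools` option is picked up by `invocationParams()` and logged
// as trace metadata; the tool shape here is illustrative.
const res = await tracedModel.invoke([["human", "I am an LLM"]], {
  tools: [{ name: "my_tool", description: "An example tool" }],
});
console.log(res.content);
```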
https://js.langchain.com/v0.2/docs/how_to/qa_per_user
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to do per-user retrieval On this page How to do per-user retrieval ============================ Prerequisites This guide assumes familiarity with the following: * [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/) When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see each other’s data. This means that you need to be able to configure your retrieval chain to only retrieve certain information. This generally involves three steps. **Step 1: Make sure the retriever you are using supports multiple users** At the moment, there is no unified flag or filter for this in LangChain. Rather, each vectorstore and retriever may have their own, and may be called different things (namespaces, multi-tenancy, etc). For vectorstores, this is generally exposed as a keyword argument that is passed in during `similaritySearch`. By reading the documentation or source code, figure out whether the retriever you are using supports multiple users, and, if so, how to use it. **Step 2: Add that parameter as a configurable field for the chain** The LangChain `config` object is passed through to every Runnable. Here you can add any fields you’d like to the `configurable` object. Later, inside the chain we can extract these fields. **Step 3: Call the chain with that configurable field** Now, at runtime you can call this chain with that configurable field. Code Example[​](#code-example "Direct link to Code Example") ------------------------------------------------------------ Let’s see a concrete example of what this looks like in code. We will use Pinecone for this example. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core yarn add @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core pnpm add @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We’ll use OpenAI and Pinecone in this example: OPENAI_API_KEY=your-api-keyPINECONE_API_KEY=your-api-keyPINECONE_INDEX=your-index-name# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true import { OpenAIEmbeddings } from "@langchain/openai";import { PineconeStore } from "@langchain/pinecone";import { Pinecone } from "@pinecone-database/pinecone";import { Document } from "@langchain/core/documents";const embeddings = new OpenAIEmbeddings();const pinecone = new Pinecone();const pineconeIndex = pinecone.Index(Deno.env.get("PINECONE_INDEX"));const vectorStore = await PineconeStore.fromExistingIndex( new OpenAIEmbeddings(), { pineconeIndex });await vectorStore.addDocuments( [new Document({ pageContent: "i worked at kensho" })], { namespace: "harrison" });await vectorStore.addDocuments( [new Document({ pageContent: "i worked at facebook" })], { namespace: "ankush" }); [ "77b8f174-9d89-4c6c-b2ab-607fe3913b2d" ] The Pinecone `namespace` option can be used to separate documents: // This will only get documents for Ankushconst ankushRetriever = vectorStore.asRetriever({ filter: { namespace: "ankush", },});await ankushRetriever.invoke("where did i work?"); [ Document { pageContent: "i worked at facebook", metadata: {} } ] // This will only get documents for Harrisonconst harrisonRetriever = vectorStore.asRetriever({ filter: { namespace: "harrison", },});await harrisonRetriever.invoke("where did i work?"); [ Document { pageContent: "i worked at kensho", metadata: {} } ] We can now create the chain that we will use to perform question-answering. import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableBinding, RunnableLambda, RunnablePassthrough,} from "@langchain/core/runnables";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";const template = `Answer the question based only on the following context:{context}Question: {question}`;const prompt = ChatPromptTemplate.fromTemplate(template);const model = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0,}); We can now create the chain using our configurable retriever. It is configurable because we can define any object which will be passed to the chain. From there, we extract the configurable object and pass it to the vectorstore. import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";const chain = RunnableSequence.from([ RunnablePassthrough.assign({ context: async (input, config) => { if (!config || !("configurable" in config)) { throw new Error("No config"); } const { configurable } = config; const documents = await vectorStore .asRetriever(configurable) .invoke(input.question, config); return documents.map((doc) => doc.pageContent).join("\n\n"); }, }), prompt, model, new StringOutputParser(),]); We can now invoke the chain with configurable options. The `configurable` object is passed straight through to `asRetriever`, so the `filter` value (here, a Pinecone `namespace`) determines which documents are searched: await chain.invoke( { question: "where did the user work?"
}, { configurable: { filter: { namespace: "harrison" } } }); "The user worked at Kensho." await chain.invoke( { question: "where did the user work?" }, { configurable: { filter: { namespace: "ankush" } } }); "The user worked at Facebook." For more vector store implementations that can support multiple users, please refer to specific pages, such as [Milvus](/v0.2/docs/integrations/vectorstores/milvus). Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now seen one approach for supporting retrieval with data from multiple users. Next, check out some of the other how-to guides on RAG, such as [returning sources](/v0.2/docs/how_to/qa_sources).
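One hedged closing sketch (not from the original page): in a real application you would typically derive the namespace from your authentication layer and pin it server-side, so callers can never select another user's data. The `invokeForUser` helper below is illustrative and reuses the `chain` defined above:

```typescript
// Hedged sketch: derive the namespace server-side from your auth layer so
// callers can never select another user's data. Reuses `chain` from above.
const invokeForUser = async (question: string, userNamespace: string) => {
  return chain.invoke(
    { question },
    { configurable: { filter: { namespace: userNamespace } } }
  );
};

await invokeForUser("where did the user work?", "harrison");
// "The user worked at Kensho."
```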
https://js.langchain.com/v0.2/docs/community
Community navigator =================== Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain–we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more. Whether you're new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction. * **🦜 Contribute to LangChain** * **🌍 Meetups, Events, and Hackathons** * **📣 Help Us Amplify Your Work** * **💬 Stay in the loop** 🦜 Contribute to LangChain ========================== LangChain is the product of more than 5,000 contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved: * **[Open a pull request](https://github.com/langchain-ai/langchainjs/issues):** we'd appreciate all forms of contributions–new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you. * **[Read our contributor guidelines:](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions. * **Become an expert:** our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at [[email protected]](mailto:[email protected]) and we'll take it from there! * **Integrate with LangChain:** if your product integrates with LangChain–or aspires to–we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at [[email protected]](mailto:[email protected]) and tell us what you're working on. * **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at [[email protected]](mailto:[email protected]) if you'd like to explore this role. 🌍 Meetups, Events, and Hackathons ================================== One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible! * **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f). * **Submit an event to our calendar:** email us at [[email protected]](mailto:[email protected]) with a link to your event page! We can also help you spread the word with our local communities. * **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at [[email protected]](mailto:[email protected]) to tell us about your event!
* **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at [[email protected]](mailto:[email protected]) and we can share more about how it works! * **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you'd like to do that at an event, send us an email at [[email protected]](mailto:[email protected]) with more information about yourself, what you want to talk about, and what city you're based in, and we'll try to match you with an upcoming event! * **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at [[email protected]](mailto:[email protected]) and let us know how we can help. 📣 Help Us Amplify Your Work ============================ If you're working on something you're proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off. * **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we'll almost certainly see it and can show you some love. * **Publish something on our blog:** if you're writing about your experience building with LangChain, we'd love to post (or crosspost) it on our blog! E-mail [[email protected]](mailto:[email protected]) with a draft of your post, or even just an idea for something you want to write about. * **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out what those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at [[email protected]](mailto:[email protected]). 💬 Stay in the loop =================== Here's where our team hangs out, talks shop, spotlights cool work, and shares what we're up to. We'd love to see you there too. * **[Twitter](https://twitter.com/LangChainAI):** we post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love! * **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain * **[GitHub](https://github.com/langchain-ai/langchainjs):** open pull requests, join a discussion, and/or contribute code * **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit
https://js.langchain.com/v0.2/docs/contributing
* [](/v0.2/) * Contributing * Welcome Contributors On this page Welcome Contributors ==================== Hi there! Thank you for even being interested in contributing to LangChain. As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes. 🗺️ Guidelines[​](#️-guidelines "Direct link to 🗺️ Guidelines") ---------------------------------------------------------------- ### 👩‍💻 Ways to contribute[​](#-ways-to-contribute "Direct link to 👩‍💻 Ways to contribute") There are many ways to contribute to LangChain. Here are some common ways people contribute: * [**Documentation**](/v0.2/docs/contributing/documentation/style_guide): Help improve our docs, including this one! * [**Code**](/v0.2/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure. * [**Integrations**](/v0.2/docs/contributing/integrations): Help us integrate with your favorite vendors and tools. * [**Discussions**](https://github.com/langchain-ai/langchainjs/discussions): Help answer usage questions and discuss issues with users. ### 🚩 GitHub Issues[​](#-github-issues "Direct link to 🚩 GitHub Issues") Our [issues](https://github.com/langchain-ai/langchainjs/issues) page is kept up to date with bugs, improvements, and feature requests. There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues. If you start working on an issue, please assign it to yourself. If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature. If two issues are related, or blocking, please link them rather than combining them. We will try to keep these issues as up-to-date as possible, though with the rapid rate of development in this field some may get out of date. If you notice this happening, please let us know. ### 💭 GitHub Discussions[​](#-github-discussions "Direct link to 💭 GitHub Discussions") We have a [discussions](https://github.com/langchain-ai/langchainjs/discussions) page where users can ask usage questions, discuss design decisions, and propose new features. If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing. ### 🙋 Getting Help[​](#-getting-help "Direct link to 🙋 Getting Help") Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is smooth for future contributors. In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase. If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help - we do not want these to get in the way of getting good code into the codebase. 🌟 Recognition ============== If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)! If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.
https://js.langchain.com/v0.2/docs/additional_resources/tutorials
On this page Tutorials ========= Below are links to tutorials and courses on LangChain.js. For written guides on common use cases for LangChain.js, check out the [tutorials](/v0.2/docs/tutorials/) and [how to](/v0.2/docs/how_to/) sections. * * * Deeplearning.ai[​](#deeplearningai "Direct link to Deeplearning.ai") -------------------------------------------------------------------- We've partnered with [Deeplearning.ai](https://deeplearning.ai) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng) on a LangChain.js short course. It covers LCEL and other building blocks you can combine to build more complex chains, as well as fundamentals around loading data for retrieval augmented generation (RAG). Try it for free below: * [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js) Scrimba interactive guides[​](#scrimba-interactive-guides "Direct link to Scrimba interactive guides") ------------------------------------------------------------------------------------------------------ [Scrimba](https://scrimba.com) is a code-learning platform that allows you to interactively edit and run code while watching a video walkthrough. We've partnered with Scrimba on course materials (called "scrims") that teach the fundamentals of building with LangChain.js - check them out below, and check back for more as they become available! ### Learn LangChain.js[​](#learn-langchainjs "Direct link to Learn LangChain.js") * [Learn LangChain.js on Scrimba](https://scrimba.com/learn/langchain) A full end-to-end course that walks through how to build a chatbot that can answer questions about a provided document. A great introduction to LangChain and a great first project for learning how to use LangChain Expression Language primitives to perform retrieval! ### LangChain Expression Language (LCEL)[​](#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)") * [The basics (PromptTemplate + LLM)](https://scrimba.com/scrim/c6rD6Nt9) * [Adding an output parser](https://scrimba.com/scrim/co6ae44248eacc1abd87ae3dc) * [Attaching function calls to a model](https://scrimba.com/scrim/cof5449f5bc972f8c90be6a82) * [Composing multiple chains](https://scrimba.com/scrim/co14344c29595bfb29c41f12a) * [Retrieval chains](https://scrimba.com/scrim/co0e040d09941b4000244db46) * [Conversational retrieval chains ("Chat with Docs")](https://scrimba.com/scrim/co3ed4a9eb4c6c6d0361a507c) ### Deeper dives[​](#deeper-dives "Direct link to Deeper dives") * [Setting up a new `PromptTemplate`](https://scrimba.com/scrim/cbGwRwuV) * [Setting up `ChatOpenAI` parameters](https://scrimba.com/scrim/cEgbBBUw) * [Attaching stop sequences](https://scrimba.com/scrim/co9704e389428fe2193eb955c) Neo4j GraphAcademy[​](#neo4j-graphacademy "Direct link to Neo4j GraphAcademy") ------------------------------------------------------------------------------ [Neo4j](https://neo4j.com) has put together a hands-on, practical course that shows how to build a movie-recommending chatbot in Next.js. It covers retrieval-augmented generation (RAG), tracking history, and more. Check it out below: * [Build a Neo4j-backed Chatbot with TypeScript](https://graphacademy.neo4j.com/courses/llm-chatbot-typescript/?ref=langchainjs) LangChain.js x AI SDK[​](#langchainjs-x-ai-sdk "Direct link to LangChain.js x AI SDK") -------------------------------------------------------------------------------------- How to use LangChain.js with AI SDK and React Server Components.
* [Streaming agentic data to the client](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/README.md) * [Streaming tool responses to the client](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/README.md)
https://js.langchain.com/v0.2/docs/how_to/qa_streaming
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to stream from a question-answering chain On this page How to stream from a question-answering chain ============================================= Prerequisites This guide assumes familiarity with the following: * [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/) Often in Q&A applications it’s important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation. We’ll be using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng as the retrieval content in this guide. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Dependencies[​](#dependencies "Direct link to Dependencies") We’ll use an OpenAI chat model and embeddings and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers). We’ll use the following packages: npm install --save langchain @langchain/openai cheerio We need to set the `OPENAI_API_KEY` environment variable: export OPENAI_API_KEY=YOUR_KEY ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/). Note that LangSmith is not needed, but it is helpful.
If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=YOUR_KEY Chain with sources[​](#chain-with-sources "Direct link to Chain with sources") ------------------------------------------------------------------------------ Here is a Q&A app with sources that we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Returning sources](/v0.2/docs/how_to/qa_sources/) guide: import "cheerio";import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { formatDocumentsAsString } from "langchain/util/document";import { RunnableSequence, RunnablePassthrough, RunnableMap,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChainFromDocs = RunnableSequence.from([ RunnablePassthrough.assign({ context: (input) => formatDocumentsAsString(input.context), }), prompt, llm, new StringOutputParser(),]);let ragChainWithSource = new RunnableMap({ steps: { context: retriever, question: new RunnablePassthrough() },});ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });await ragChainWithSource.invoke("What is Task Decomposition"); { question: "What is Task Decomposition", context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + " \n" + " Component One: Planning\n" + " "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Resources:\n" + "1. Internet access for searches and information gathering.\n" + "2. Long Term memory management"... 456 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ], answer: "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps fo"... 230 more characters} Let’s see what this prompt actually looks like.
You can also view it [in the LangChain prompt hub](https://smith.langchain.com/hub/rlm/rag-prompt): console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n")); You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.Question: {question}Context: {context}Answer: Streaming final outputs[​](#streaming-final-outputs "Direct link to Streaming final outputs") --------------------------------------------------------------------------------------------- With [LCEL](/v0.2/docs/concepts#langchain-expression-language), we can stream outputs as they are generated: for await (const chunk of await ragChainWithSource.stream( "What is task decomposition?")) { console.log(chunk);} { question: "What is task decomposition?" }{ context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + " \n" + " Component One: Planning\n" + " "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "(3) Task execution: Expert models execute on the specific tasks and log results.\n" + "Instruction:\n" + "\n" + "With "... 539 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ]}{ answer: "" }{ answer: "Task" }{ answer: " decomposition" }{ answer: " is" }{ answer: " a" }{ answer: " technique" }{ answer: " used" }{ answer: " to" }{ answer: " break" }{ answer: " down" }{ answer: " complex" }{ answer: " tasks" }{ answer: " into" }{ answer: " smaller" }{ answer: " and" }{ answer: " simpler" }{ answer: " steps" }{ answer: "." }{ answer: " It" }{ answer: " can" }{ answer: " be" }{ answer: " done" }{ answer: " through" }{ answer: " various" }{ answer: " methods" }{ answer: " such" }{ answer: " as" }{ answer: " using" }{ answer: " prompting" }{ answer: " techniques" }{ answer: "," }{ answer: " task" }{ answer: "-specific" }{ answer: " instructions" }{ answer: "," }{ answer: " or" }{ answer: " human" }{ answer: " inputs" }{ answer: "." }{ answer: " Another" }{ answer: " approach" }{ answer: " involves" }{ answer: " outsourcing" }{ answer: " the" }{ answer: " planning" }{ answer: " step" }{ answer: " to" }{ answer: " an" }{ answer: " external" }{ answer: " classical" }{ answer: " planner" }{ answer: "." 
}{ answer: "" } We can add some logic to compile our stream as it’s being returned: const output = {};let currentKey: string | null = null;for await (const chunk of await ragChainWithSource.stream( "What is task decomposition?")) { for (const key of Object.keys(chunk)) { if (output[key] === undefined) { output[key] = chunk[key]; } else { output[key] += chunk[key]; } if (key !== currentKey) { console.log(`\n\n${key}: ${JSON.stringify(chunk[key])}`); } else { console.log(chunk[key]); } currentKey = key; }} question: "What is task decomposition?"context: [{"pageContent":"Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":176,"to":181}}}},{"pageContent":"Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. 
Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\nSelf-Reflection#","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":182,"to":184}}}},{"pageContent":"Agent System Overview\n \n Component One: Planning\n \n \n Task Decomposition\n \n Self-Reflection\n \n \n Component Two: Memory\n \n \n Types of Memory\n \n Maximum Inner Product Search (MIPS)\n \n \n Component Three: Tool Use\n \n Case Studies\n \n \n Scientific Discovery Agent\n \n Generative Agents Simulation\n \n Proof-of-Concept Examples\n \n \n Challenges\n \n Citation\n \n References","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":112,"to":146}}}},{"pageContent":"(3) Task execution: Expert models execute on the specific tasks and log results.\nInstruction:\n\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":277,"to":280}}}}]answer: ""Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done through various methods such as using prompting techniques, task-specific instructions, or human inputs. Another approach involves outsourcing the planning step to an external classical planner. "answer" Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to stream responses from a QA chain. A sketch for streaming only the final answer appears below. Next, check out some of the other how-to guides around RAG, such as [how to add chat history](/v0.2/docs/how_to/qa_chat_history_how_to).
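If you only want to stream the final answer rather than the full map of chunks, one simple option is to filter the streamed chunks for the `answer` key, since each chunk contains at most one key. A minimal sketch, reusing the `ragChainWithSource` chain defined above:

```typescript
// Stream the chain as before, but only surface chunks that carry
// a piece of the final answer.
for await (const chunk of await ragChainWithSource.stream(
  "What is task decomposition?"
)) {
  if (chunk.answer !== undefined) {
    process.stdout.write(chunk.answer);
  }
}
```

The `question` and `context` chunks are simply skipped, so only the generated answer tokens are printed.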
https://js.langchain.com/v0.2/docs/tutorials/
* [](/v0.2/) * Tutorials On this page Tutorials ========= New to LangChain or to LLM app development in general? Read this material to quickly get up and running. Basics[​](#basics "Direct link to Basics") ------------------------------------------ * [Build a Simple LLM Application with LCEL](/v0.2/docs/tutorials/llm_chain) * [Build a Chatbot](/v0.2/docs/tutorials/chatbot) * [Build an Agent](/v0.2/docs/tutorials/agents) Working with external knowledge[​](#working-with-external-knowledge "Direct link to Working with external knowledge") --------------------------------------------------------------------------------------------------------------------- * [Build a Retrieval Augmented Generation (RAG) Application](/v0.2/docs/tutorials/rag) * [Build a Conversational RAG Application](/v0.2/docs/tutorials/qa_chat_history) * [Build a Question/Answering system over SQL data](/v0.2/docs/tutorials/sql_qa) * [Build a Query Analysis System](/v0.2/docs/tutorials/query_analysis) * [Build a local RAG application](/v0.2/docs/tutorials/local_rag) * [Build a Question Answering application over a Graph Database](/v0.2/docs/tutorials/graph) * [Build a PDF ingestion and Question/Answering system](/v0.2/docs/tutorials/pdf_qa/) Specialized tasks[​](#specialized-tasks "Direct link to Specialized tasks") --------------------------------------------------------------------------- * [Build an Extraction Chain](/v0.2/docs/tutorials/extraction) * [Classify text into labels](/v0.2/docs/tutorials/classification) * [Summarize text](/v0.2/docs/tutorials/summarization) LangGraph.js[​](#langgraphjs "Direct link to LangGraph.js") ----------------------------------------------------------- LangGraph.js is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. LangGraph.js documentation is currently hosted on a separate site. You can peruse [LangGraph.js tutorials here](https://langchain-ai.github.io/langgraphjs/tutorials/). LangSmith[​](#langsmith "Direct link to LangSmith") --------------------------------------------------- LangSmith allows you to closely trace, monitor and evaluate your LLM application. It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build. LangSmith documentation is hosted on a separate site. You can peruse [LangSmith tutorials here](https://docs.smith.langchain.com/tutorials/). ### Evaluation[​](#evaluation "Direct link to Evaluation") LangSmith helps you evaluate the performance of your LLM applications. The below tutorial is a great way to get started: * [Evaluate your LLM application](https://docs.smith.langchain.com/tutorials/Developers/evaluation)
https://js.langchain.com/v0.2/docs/how_to/multiple_queries
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to generate multiple queries to retrieve data for On this page How to generate multiple queries to retrieve data for ===================================================== Prerequisites This guide assumes familiarity with the following concepts: * [Vector stores](/v0.2/docs/concepts/#vectorstores) * [Retrievers](/v0.2/docs/concepts/#retrievers) * [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag) Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on “distance”. But retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious. The [`MultiQueryRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can help overcome some of the limitations of the distance-based retrieval and get a richer set of results. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic @langchain/cohere yarn add @langchain/anthropic @langchain/cohere pnpm add @langchain/anthropic @langchain/cohere import { MemoryVectorStore } from "langchain/vectorstores/memory";import { CohereEmbeddings } from "@langchain/cohere";import { MultiQueryRetriever } from "langchain/retrievers/multi_query";import { ChatAnthropic } from "@langchain/anthropic";const embeddings = new CohereEmbeddings();const vectorstore = await MemoryVectorStore.fromTexts( [ "Buildings are made out of brick", "Buildings are made out of wood", "Buildings are made out of stone", "Cars are made out of metal", "Cars are made out of plastic", "mitochondria is the powerhouse of the cell", "mitochondria is made of lipids", ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], embeddings);const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229",});const retriever = MultiQueryRetriever.fromLLM({ llm: model, retriever: vectorstore.asRetriever(),});const query = "What are mitochondria made of?";const retrievedDocs = await retriever.invoke(query);/* Generated queries: What are the components of mitochondria?,What substances comprise the mitochondria organelle? 
,What is the molecular composition of mitochondria?*/console.log(retrievedDocs); [ Document { pageContent: "mitochondria is made of lipids", metadata: {} }, Document { pageContent: "mitochondria is the powerhouse of the cell", metadata: {} }, Document { pageContent: "Buildings are made out of brick", metadata: { id: 1 } }, Document { pageContent: "Buildings are made out of wood", metadata: { id: 2 } }] Customization[​](#customization "Direct link to Customization") --------------------------------------------------------------- You can also supply a custom prompt to tune what types of questions are generated. You can also pass a custom output parser to parse and split the results of the LLM call into a list of queries. import { LLMChain } from "langchain/chains";import { pull } from "langchain/hub";import { BaseOutputParser } from "@langchain/core/output_parsers";import { PromptTemplate } from "@langchain/core/prompts";type LineList = { lines: string[];};class LineListOutputParser extends BaseOutputParser<LineList> { static lc_name() { return "LineListOutputParser"; } lc_namespace = ["langchain", "retrievers", "multiquery"]; async parse(text: string): Promise<LineList> { const startKeyIndex = text.indexOf("<questions>"); const endKeyIndex = text.indexOf("</questions>"); const questionsStartIndex = startKeyIndex === -1 ? 0 : startKeyIndex + "<questions>".length; const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex; const lines = text .slice(questionsStartIndex, questionsEndIndex) .trim() .split("\n") .filter((line) => line.trim() !== ""); return { lines }; } getFormatInstructions(): string { throw new Error("Not implemented."); }}// Default prompt is available at: https://smith.langchain.com/hub/jacob/multi-vector-retriever-germanconst prompt: PromptTemplate = await pull( "jacob/multi-vector-retriever-german");const vectorstore = await MemoryVectorStore.fromTexts( [ "Gebäude werden aus Ziegelsteinen hergestellt", "Gebäude werden aus Holz hergestellt", "Gebäude werden aus Stein hergestellt", "Autos werden aus Metall hergestellt", "Autos werden aus Kunststoff hergestellt", "Mitochondrien sind die Energiekraftwerke der Zelle", "Mitochondrien bestehen aus Lipiden", ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], embeddings);const model = new ChatAnthropic({});const llmChain = new LLMChain({ llm: model, prompt, outputParser: new LineListOutputParser(),});const retriever = new MultiQueryRetriever({ retriever: vectorstore.asRetriever(), llmChain,});const query = "What are mitochondria made of?";const retrievedDocs = await retriever.invoke(query);/* Generated queries: Was besteht ein Mitochondrium?,Aus welchen Komponenten setzt sich ein Mitochondrium zusammen? 
,Welche Moleküle finden sich in einem Mitochondrium?*/console.log(retrievedDocs); [ Document { pageContent: "Mitochondrien bestehen aus Lipiden", metadata: {} }, Document { pageContent: "Mitochondrien sind die Energiekraftwerke der Zelle", metadata: {} }, Document { pageContent: "Gebäude werden aus Stein hergestellt", metadata: { id: 3 } }, Document { pageContent: "Autos werden aus Metall hergestellt", metadata: { id: 4 } }, Document { pageContent: "Gebäude werden aus Holz hergestellt", metadata: { id: 2 } }, Document { pageContent: "Gebäude werden aus Ziegelsteinen hergestellt", metadata: { id: 1 } }] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to use the `MultiQueryRetriever` to query a vector store with automatically generated queries; a sketch showing how to plug this retriever into a full RAG chain appears below. See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
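To make use of the documents the `MultiQueryRetriever` returns, you can drop it into a RAG chain like any other retriever. Here is a minimal sketch, reusing the `retriever` and `model` from the first example above together with the same LCEL pattern used elsewhere in these guides:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const ragPrompt = ChatPromptTemplate.fromTemplate(
  `Answer the question based only on the following context:

{context}

Question: {question}`
);

const ragChain = RunnableSequence.from([
  {
    // The MultiQueryRetriever fans the question out into several
    // generated queries and returns the deduplicated union of results.
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  model,
  new StringOutputParser(),
]);

await ragChain.invoke("What are mitochondria made of?");
```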
https://js.langchain.com/v0.2/docs/how_to/chat_token_usage_tracking
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to track token usage On this page How to track token usage ======================== Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) This guide goes over how to track your token usage for specific calls. Using `AIMessage.usage_metadata`[​](#using-aimessageusage_metadata "Direct link to using-aimessageusage_metadata") ------------------------------------------------------------------------------------------------------------------ A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model. LangChain `AIMessage` objects include a [`usage_metadata`](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html#usage_metadata) attribute for supported providers. When populated, this attribute will be an object with standard keys (e.g., "input_tokens" and "output_tokens"). #### OpenAI[​](#openai "Direct link to OpenAI") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";const chatModel = new ChatOpenAI({ model: "gpt-3.5-turbo-0125",});const res = await chatModel.invoke("Tell me a joke.");console.log(res.usage_metadata);/* { input_tokens: 12, output_tokens: 17, total_tokens: 29 }*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` #### Anthropic[​](#anthropic "Direct link to Anthropic") * npm * Yarn * pnpm npm install @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic import { ChatAnthropic } from "@langchain/anthropic";const chatModel = new ChatAnthropic({ model: "claude-3-haiku-20240307",});const res = await chatModel.invoke("Tell me a joke.");console.log(res.usage_metadata);/* { input_tokens: 12, output_tokens: 98, total_tokens: 110 }*/ #### API Reference: * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Using `AIMessage.response_metadata`[​](#using-aimessageresponse_metadata "Direct link to using-aimessageresponse_metadata") --------------------------------------------------------------------------------------------------------------------------- A number of model providers return token usage information as part of the chat generation response. When available, this is included in the `AIMessage.response_metadata` field.
#### OpenAI[​](#openai-1 "Direct link to OpenAI") import { ChatOpenAI } from "@langchain/openai";const chatModel = new ChatOpenAI({ model: "gpt-4-turbo",});const res = await chatModel.invoke("Tell me a joke.");console.log(res.response_metadata);/* { tokenUsage: { completionTokens: 15, promptTokens: 12, totalTokens: 27 }, finish_reason: 'stop' }*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` #### Anthropic[​](#anthropic-1 "Direct link to Anthropic") import { ChatAnthropic } from "@langchain/anthropic";const chatModel = new ChatAnthropic({ model: "claude-3-sonnet-20240229",});const res = await chatModel.invoke("Tell me a joke.");console.log(res.response_metadata);/* { id: 'msg_017Mgz6HdgNbi3cwL1LNB9Dw', model: 'claude-3-sonnet-20240229', stop_sequence: null, usage: { input_tokens: 12, output_tokens: 30 }, stop_reason: 'end_turn' }*/ #### API Reference: * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- Some providers support token count metadata in a streaming context. #### OpenAI[​](#openai-2 "Direct link to OpenAI") For example, OpenAI will return a message chunk at the end of a stream with token usage information. This behavior is supported by `@langchain/openai` >= 0.1.0 and can be enabled by passing a `stream_options` parameter when making your call. info By default, the last message chunk in a stream will include a `finish_reason` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `finish_reason` appears on the second to last message chunk. import type { AIMessageChunk } from "@langchain/core/messages";import { ChatOpenAI } from "@langchain/openai";import { concat } from "@langchain/core/utils/stream";// Instantiate the modelconst model = new ChatOpenAI();const response = await model.stream("Hello, how are you?", { // Pass the stream options stream_options: { include_usage: true, },});// Iterate over the response, only saving the last chunklet finalResult: AIMessageChunk | undefined;for await (const chunk of response) { if (finalResult) { finalResult = concat(finalResult, chunk); } else { finalResult = chunk; }}console.log(finalResult?.usage_metadata);/* { input_tokens: 13, output_tokens: 30, total_tokens: 43 }*/ #### API Reference: * [AIMessageChunk](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html) from `@langchain/core/messages` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [concat](https://v02.api.js.langchain.com/functions/langchain_core_utils_stream.concat.html) from `@langchain/core/utils/stream` Using callbacks[​](#using-callbacks "Direct link to Using callbacks") --------------------------------------------------------------------- You can also use the `handleLLMEnd` callback to get the full output from the LLM, including token usage for supported models. 
Here's an example of how you could do that: import { ChatOpenAI } from "@langchain/openai";const chatModel = new ChatOpenAI({ model: "gpt-4-turbo", callbacks: [ { handleLLMEnd(output) { console.log(JSON.stringify(output, null, 2)); }, }, ],});await chatModel.invoke("Tell me a joke.");/* { "generations": [ [ { "text": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessage" ], "kwargs": { "content": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!", "tool_calls": [], "invalid_tool_calls": [], "additional_kwargs": {}, "response_metadata": { "tokenUsage": { "completionTokens": 17, "promptTokens": 12, "totalTokens": 29 }, "finish_reason": "stop" } } }, "generationInfo": { "finish_reason": "stop" } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 17, "promptTokens": 12, "totalTokens": 29 } } }*/ #### API Reference: * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now seen a few examples of how to track chat model token usage for supported providers. Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to add caching to your chat models](/v0.2/docs/how_to/chat_model_caching).
https://js.langchain.com/v0.2/docs/how_to/llm_token_usage_tracking
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to track token usage On this page How to track token usage ======================== Prerequisites This guide assumes familiarity with the following concepts: * [LLMs](/v0.2/docs/concepts/#llms) This guide goes over how to track your token usage for specific LLM calls. This is only implemented by some providers, including OpenAI. Here's an example of tracking token usage for a single LLM call via a callback: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct", callbacks: [ { handleLLMEnd(output) { console.log(JSON.stringify(output, null, 2)); }, }, ],});await llm.invoke("Tell me a joke.");/* { "generations": [ [ { "text": "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything.", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 14, "promptTokens": 5, "totalTokens": 19 } } }*/ #### API Reference: * [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` If this model is passed to a chain or agent that calls it multiple times, it will log an output each time. One way to aggregate usage across those calls is sketched at the end of this page. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now seen how to get token usage for supported LLM providers. Next, check out the other how-to guides in this section, like [how to implement your own custom LLM](/v0.2/docs/how_to/custom_llm).
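As noted above, a model wired into a chain or agent will fire one `handleLLMEnd` callback per call. One possible way to keep a running total is to accumulate the token counts inside the handler. A minimal sketch, assuming the provider populates `llmOutput.tokenUsage` as in the example above (the `totals` accumulator is a plain object for illustration, not a LangChain API):

```typescript
import { OpenAI } from "@langchain/openai";

// Plain accumulator object for token usage across calls (illustrative).
const totals = { completionTokens: 0, promptTokens: 0, totalTokens: 0 };

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        // Not every provider populates this field, so guard against undefined.
        const usage = output.llmOutput?.tokenUsage;
        if (usage) {
          totals.completionTokens += usage.completionTokens ?? 0;
          totals.promptTokens += usage.promptTokens ?? 0;
          totals.totalTokens += usage.totalTokens ?? 0;
        }
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");
await llm.invoke("Tell me another joke.");

console.log(totals);
// e.g. { completionTokens: 28, promptTokens: 10, totalTokens: 38 }
```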
https://js.langchain.com/v0.2/docs/how_to/query_constructing_filters
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to construct filters On this page How to construct filters ======================== Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) We may want to do query analysis to extract filters to pass into retrievers. One way we ask the LLM to represent these filters is as a Zod schema. There is then the issue of converting that Zod schema into a filter that can be passed into a retriever. This can be done manually, but LangChain also provides some “Translators” that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") * npm * yarn * pnpm npm i zod yarn add zod pnpm add zod In this example, `startYear` and `author` are both attributes to filter on. import { z } from "zod";const searchSchema = z.object({ query: z.string(), startYear: z.number().optional(), author: z.string().optional(),});const searchQuery: z.infer<typeof searchSchema> = { query: "RAG", startYear: 2022, author: "LangChain",}; import { Comparison, Comparator } from "langchain/chains/query_constructor/ir";function constructComparisons( query: z.infer<typeof searchSchema>): Comparison[] { const comparisons: Comparison[] = []; if (query.startYear !== undefined) { comparisons.push( new Comparison("gt" as Comparator, "start_year", query.startYear) ); } if (query.author !== undefined) { comparisons.push( new Comparison("eq" as Comparator, "author", query.author) ); } return comparisons;}const comparisons = constructComparisons(searchQuery); import { Operation, Operator } from "langchain/chains/query_constructor/ir";const _filter = new Operation("and" as Operator, comparisons); import { ChromaTranslator } from "langchain/retrievers/self_query/chroma";new ChromaTranslator().visitOperation(_filter); { "$and": [ { start_year: { "$gt": 2022 } }, { author: { "$eq": "LangChain" } } ]} A helper combining these steps end-to-end is sketched at the end of this page. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to create a specific filter from an arbitrary query. Next, check out some of the other query analysis guides in this section, like [how to use few-shotting to improve performance](/v0.2/docs/how_to/query_no_queries).
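Putting the pieces from this page together, the whole query-to-filter conversion can be wrapped in a single helper. A minimal sketch using only the classes shown above (`searchSchema` and `constructComparisons` are the ones defined earlier on this page):

```typescript
import { z } from "zod";
import { Operation, Operator } from "langchain/chains/query_constructor/ir";
import { ChromaTranslator } from "langchain/retrievers/self_query/chroma";

// Convert a parsed search query into a Chroma-style metadata filter.
// Assumes at least one filterable attribute is set on the query.
function toChromaFilter(query: z.infer<typeof searchSchema>) {
  const comparisons = constructComparisons(query);
  const operation = new Operation("and" as Operator, comparisons);
  return new ChromaTranslator().visitOperation(operation);
}

toChromaFilter({ query: "RAG", startYear: 2022, author: "LangChain" });
// { "$and": [ { start_year: { "$gt": 2022 } }, { author: { "$eq": "LangChain" } } ] }
```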
https://js.langchain.com/v0.2/docs/how_to/output_parser_fixing
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to try to fix errors in output parsing How to try to fix errors in output parsing ========================================== Prerequisites This guide assumes familiarity with the following concepts: - [Chat models](/v0.2/docs/concepts/#chat-models) - [Output parsers](/v0.2/docs/concepts/#output-parsers) - [Prompt templates](/v0.2/docs/concepts/#prompt-templates) - [Chaining runnables together](/v0.2/docs/how_to/sequence/) LLMs aren’t perfect, and sometimes fail to produce output that perfectly matches the desired format. To help handle errors, we can use the [`OutputFixingParser`](https://api.js.langchain.com/classes/langchain_output_parsers.OutputFixingParser.html). This output parser wraps another output parser, and in the event that the first one fails, it calls out to another LLM in an attempt to fix any errors. Specifically, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it. For this example, we’ll use the [`StructuredOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html), which can validate output according to a Zod schema. Here’s what happens if we pass it a result that does not comply with the schema: import { z } from "zod";import { RunnableSequence } from "@langchain/core/runnables";import { StructuredOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";const zodSchema = z.object({ name: z.string().describe("name of an actor"), film_names: z .array(z.string()) .describe("list of names of films they starred in"),});const parser = StructuredOutputParser.fromZodSchema(zodSchema);const misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}";await parser.parse(misformatted); Error: Failed to parse. Text: "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}". Error: SyntaxError: Expected property name or '}' in JSON at position 1 (line 1 column 2) Now we can construct and use an `OutputFixingParser`. This output parser takes another output parser as an argument, as well as an LLM with which to try to correct any formatting mistakes. import { ChatAnthropic } from "@langchain/anthropic";import { OutputFixingParser } from "langchain/output_parsers";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", maxTokens: 512, temperature: 0.1,});const parserWithFix = OutputFixingParser.fromLLM(model, parser);await parserWithFix.parse(misformatted); { name: "Tom Hanks", film_names: [ "Forrest Gump", "Saving Private Ryan", "Cast Away", "Catch Me If You Can" ]} For more about different parameters and options, check out our [API reference docs](https://api.js.langchain.com/classes/langchain_output_parsers.OutputFixingParser.html).
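The fixing parser can also be used as the final step of a chain, so that malformed model output is repaired automatically before being returned. A minimal sketch, assuming a prompt that injects the wrapped parser's format instructions (the prompt text here is illustrative, and `model`, `parser`, and `parserWithFix` are the objects defined above):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const fixingPrompt = ChatPromptTemplate.fromTemplate(
  `Answer the user query. Format your response per the instructions below:

{format_instructions}

Query: {query}`
);

// If the model's raw output fails to parse, the fixing parser
// makes one more LLM call to try to repair it before returning.
const chain = RunnableSequence.from([fixingPrompt, model, parserWithFix]);

await chain.invoke({
  format_instructions: parser.getFormatInstructions(),
  query: "Tell me about the actor Tom Hanks.",
});
```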
https://js.langchain.com/v0.2/docs/how_to/passthrough
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to pass through arguments from one step to the next On this page How to pass through arguments from one step to the next ======================================================= Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Calling runnables in parallel](/v0.2/docs/how_to/parallel/) * [Custom functions](/v0.2/docs/how_to/functions/) When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) class allows you to do just this, and is typically used in conjunction with a [RunnableParallel](/v0.2/docs/how_to/parallel/) to pass data through to a later step in your constructed chains. Let’s look at an example: import { RunnableParallel, RunnablePassthrough,} from "@langchain/core/runnables";const runnable = RunnableParallel.from({ passed: new RunnablePassthrough(), modified: (input) => input.num + 1,});await runnable.invoke({ num: 1 }); { passed: { num: 1 }, modified: 2 } As seen above, the `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. We also set a second key in the map with `modified`. This uses a function to add 1 to `num`, which resulted in the `modified` key having a value of `2`. Retrieval Example[​](#retrieval-example "Direct link to Retrieval Example") --------------------------------------------------------------------------- In the example below, we see a more real-world use case where we use `RunnablePassthrough` along with `RunnableParallel` in a chain to properly format inputs to a prompt: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";const vectorstore = await MemoryVectorStore.fromDocuments( [{ pageContent: "harrison worked at kensho", metadata: {} }], new OpenAIEmbeddings());const retriever = vectorstore.asRetriever();const template = `Answer the question based only on the following context:{context}Question: {question}`;const prompt = ChatPromptTemplate.fromTemplate(template);const model = new ChatOpenAI({ model: "gpt-4o" });const retrievalChain = RunnableSequence.from([ { context: retriever.pipe((docs) => docs[0].pageContent), question: new RunnablePassthrough(), }, prompt, model, new StringOutputParser(),]);await retrievalChain.invoke("where did harrison work?"); "Harrison worked at Kensho." Here the input to the prompt is expected to be a map with keys `"context"` and `"question"`. The user input is just the question. So we need to get the context using our retriever and pass through the user input under the `"question"` key. The `RunnablePassthrough` allows us to pass on the user’s question to the prompt and model.
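A related pattern is `RunnablePassthrough.assign`, which passes the entire input object through unchanged while adding new keys to it (this is the same `assign` used in the chain-with-sources example in the streaming guide). A minimal sketch mirroring the first example above:

```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

// Pass the whole input through, adding a `modified` key alongside it.
const runnableAssign = RunnablePassthrough.assign({
  modified: (input: { num: number }) => input.num + 1,
});

await runnableAssign.invoke({ num: 1 });
// { num: 1, modified: 2 }
```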
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you’ve learned how to pass data through your chains to help format the data flowing through them. To learn more, see the other how-to guides on runnables in this section.
https://js.langchain.com/v0.2/docs/how_to/output_parser_json
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to parse JSON output On this page How to parse JSON output ======================== While some model providers support [built-in ways to return structured output](/v0.2/docs/how_to/structured_output), not all do. We can use an output parser to help users specify an arbitrary JSON schema via the prompt, query a model for outputs that conform to that schema, and finally parse the model’s output as JSON. note Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed JSON. Prerequisites This guide assumes familiarity with the following concepts: * [Chat models](/v0.2/docs/concepts/#chat-models) * [Output parsers](/v0.2/docs/concepts/#output-parsers) * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Structured output](/v0.2/docs/how_to/structured_output) * [Chaining runnables together](/v0.2/docs/how_to/sequence/) The [`JsonOutputParser`](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) is one built-in option for prompting for and then parsing JSON output. ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0,});import { JsonOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";// Define your desired data structure. Only used for typing the parser output.interface Joke { setup: string; punchline: string;}// A query and format instructions used to prompt a language model.const jokeQuery = "Tell me a joke.";const formatInstructions = "Respond with a valid JSON object, containing two fields: 'setup' and 'punchline'.";// Set up a parser + inject instructions into the prompt template.const parser = new JsonOutputParser<Joke>();const prompt = ChatPromptTemplate.fromTemplate( "Answer the user query.\n{format_instructions}\n{query}\n");const partialedPrompt = await prompt.partial({ format_instructions: formatInstructions,});const chain = partialedPrompt.pipe(model).pipe(parser);await chain.invoke({ query: jokeQuery }); { setup: "Why don't scientists trust atoms?", punchline: "Because they make up everything!"} Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- The `JsonOutputParser` also supports streaming partial chunks. This is useful when the model returns partial JSON output in multiple chunks. The parser will keep track of the partial chunks and return the final JSON output when the model finishes generating the output. for await (const s of await chain.stream({ query: jokeQuery })) { console.log(s);} {}{ setup: "" }{ setup: "Why" }{ setup: "Why don't" }{ setup: "Why don't scientists" }{ setup: "Why don't scientists trust" }{ setup: "Why don't scientists trust atoms" }{ setup: "Why don't scientists trust atoms?", punchline: "" }{ setup: "Why don't scientists trust atoms?", punchline: "Because" }{ setup: "Why don't scientists trust atoms?", punchline: "Because they"}{ setup: "Why don't scientists trust atoms?", punchline: "Because they make"}{ setup: "Why don't scientists trust atoms?", punchline: "Because they make up"}{ setup: "Why don't scientists trust atoms?", punchline: "Because they make up everything"}{ setup: "Why don't scientists trust atoms?", punchline: "Because they make up everything!"}
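The parser can also be invoked on its own, outside of a chain, by calling its `parse` method on raw model text. A quick sketch (the joke content here is illustrative):

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const standaloneParser = new JsonOutputParser();

// Parse a raw string containing JSON into a JavaScript object.
await standaloneParser.parse('{"setup": "Why?", "punchline": "Because."}');
// { setup: "Why?", punchline: "Because." }
```

In our experience the parser will also generally tolerate JSON wrapped in a markdown code fence, which chat models often emit alongside prose.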
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned one way to prompt a model to return structured JSON. Next, check out the [broader guide on obtaining structured output](/v0.2/docs/how_to/structured_output) for other techniques.
https://js.langchain.com/v0.2/docs/how_to/query_few_shot
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add examples to the prompt On this page How to add examples to the prompt ================================= Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM. Let’s take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [query analysis tutorial](/v0.2/docs/tutorials/query_analysis). Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i zod uuid yarn add zod uuid pnpm add zod uuid ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") # Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Query schema[​](#query-schema "Direct link to Query schema") ------------------------------------------------------------ We’ll define a query schema that we want our model to output. To make our query analysis a bit more interesting, we’ll add a `subQueries` field that contains more narrow questions derived from the top level question. import { z } from "zod";const subQueriesDescription = `If the original question contains multiple distinct sub-questions,or if there are more generic questions that would be helpful to answer inorder to answer the original question, write a list of all relevant sub-questions.Make sure this list is comprehensive and covers all parts of the original question.It's ok if there's redundancy in the sub-questions, it's better to cover all the bases than to miss some.Make sure the sub-questions are as narrowly focused as possible in order to get the most relevant results.`;const searchSchema = z.object({ query: z .string() .describe("Primary similarity search query applied to video transcripts."), subQueries: z.array(z.string()).optional().describe(subQueriesDescription), publishYear: z.number().optional().describe("Year video was published"),}); Query generation[​](#query-generation "Direct link to Query generation") ------------------------------------------------------------------------ ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llm = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";const system = `You are an expert at converting user questions into database queries.You have access to a database of tutorial videos about a software library for building LLM-powered applications.Given a question, return a list of database queries optimized to retrieve the most relevant results.If there are acronyms or words you are not familiar with, do not try to rephrase them.`;const prompt = ChatPromptTemplate.fromMessages([ ["system", system], ["placeholder", "{examples}"], ["human", "{question}"],]);const llmWithTools = llm.withStructuredOutput(searchSchema, { name: "Search",});const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); Let’s try out our query analyzer without any examples in the prompt: await queryAnalyzer.invoke( "what's the difference between web voyager and reflection agents? 
do both use langgraph?"); { query: "difference between Web Voyager and Reflection Agents", subQueries: [ "Do Web Voyager and Reflection Agents use LangGraph?" ]} Adding examples and tuning the prompt[​](#adding-examples-and-tuning-the-prompt "Direct link to Adding examples and tuning the prompt") --------------------------------------------------------------------------------------------------------------------------------------- This works pretty well, but we probably want it to decompose the question even further to separate the queries about Web Voyager and Reflection Agents. To tune our query generation results, we can add some examples of input questions and gold standard output queries to our prompt. const examples = []; const question = "What's chat langchain, is it a langchain template?";const query = { query: "What is chat langchain and is it a langchain template?", subQueries: ["What is chat langchain", "What is a langchain template"],};examples.push({ input: question, toolCalls: [query] }); 1 const question = "How to build multi-agent system and stream intermediate steps from it";const query = { query: "How to build multi-agent system and stream intermediate steps from it", subQueries: [ "How to build multi-agent system", "How to stream intermediate steps from multi-agent system", "How to stream intermediate steps", ],};examples.push({ input: question, toolCalls: [query] }); 2 const question = "LangChain agents vs LangGraph?";const query = { query: "What's the difference between LangChain agents and LangGraph? How do you deploy them?", subQueries: [ "What are LangChain agents", "What is LangGraph", "How do you deploy LangChain agents", "How do you deploy LangGraph", ],};examples.push({ input: question, toolCalls: [query] }); 3 Now we need to update our prompt template and chain so that the examples are included in each prompt. Since we’re working with LLM function-calling, we’ll need to do a bit of extra structuring to send example inputs and outputs to the model. We’ll create a `toolExampleToMessages` helper function to handle this for us: import { AIMessage, BaseMessage, HumanMessage, SystemMessage, ToolMessage,} from "@langchain/core/messages";import { v4 as uuidV4 } from "uuid";const toolExampleToMessages = ( example: Record<string, any>): Array<BaseMessage> => { const messages: Array<BaseMessage> = [ new HumanMessage({ content: example.input }), ]; const openaiToolCalls = example.toolCalls.map((toolCall) => { return { id: uuidV4(), type: "function" as const, function: { name: "search", arguments: JSON.stringify(toolCall), }, }; }); messages.push( new AIMessage({ content: "", additional_kwargs: { tool_calls: openaiToolCalls }, }) ); const toolOutputs = "toolOutputs" in example ? example.toolOutputs : Array(openaiToolCalls.length).fill( "You have correctly called this tool." ); toolOutputs.forEach((output, index) => { messages.push( new ToolMessage({ content: output, tool_call_id: openaiToolCalls[index].id, }) ); }); return messages;};const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat(); import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { RunnableSequence } from "@langchain/core/runnables";const queryAnalyzerWithExamples = RunnableSequence.from([ { question: new RunnablePassthrough(), examples: () => exampleMessages, }, prompt, llmWithTools,]); await queryAnalyzerWithExamples.invoke( "what's the difference between web voyager and reflection agents?
do both use langgraph?"); { query: "Difference between Web Voyager and Reflection agents, do they both use LangGraph?", subQueries: [ "Difference between Web Voyager and Reflection agents", "Do Web Voyager and Reflection agents use LangGraph" ]} Thanks to our examples we get a slightly more decomposed search query. With some more prompt engineering and tuning of our examples we could improve query generation even more. You can see that the examples are passed to the model as messages in the [LangSmith trace](https://smith.langchain.com/public/102829c3-69fc-4cb7-b28b-399ae2c9c008/r). Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned some techniques for combining few-shotting with query analysis. Next, check out some of the other query analysis guides in this section, like [how to deal with high cardinality data](/v0.2/docs/how_to/query_high_cardinality). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to construct filters ](/v0.2/docs/how_to/query_constructing_filters)[ Next How to deal with high cardinality categorical variables ](/v0.2/docs/how_to/query_high_cardinality) * [Setup](#setup) * [Install dependencies](#install-dependencies) * [Set environment variables](#set-environment-variables) * [Query schema](#query-schema) * [Query generation](#query-generation) * [Adding examples and tuning the prompt](#adding-examples-and-tuning-the-prompt) * [Next steps](#next-steps)
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned some techniques for combining few-shotting with query analysis. Next, check out some of the other query analysis guides in this section, like [how to deal with high cardinality data](/v0.2/docs/how_to/query_high_cardinality).
https://js.langchain.com/v0.2/docs/how_to/query_high_cardinality
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to deal with high cardinality categorical variables On this page How to deal with high cardinality categorical variables ======================================================= Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) High cardinality data refers to columns in a dataset that contain a large number of unique values. This guide demonstrates some techniques for dealing with these inputs. For example, you may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value, which means you need to make sure the LLM generates that value exactly. This can be done relatively easily with prompting when there are only a few valid values. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to. In this guide we take a look at how to approach this. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community zod @faker-js/faker yarn add @langchain/community zod @faker-js/faker pnpm add @langchain/community zod @faker-js/faker ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") # Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true #### Set up data[​](#set-up-data "Direct link to Set up data") We will generate a bunch of fake names import { faker } from "@faker-js/faker";const names = Array.from({ length: 10000 }, () => faker.person.fullName()); Let’s look at some of the names names[0]; "Rolando Wilkinson" names[567]; "Homer Harber" Query Analysis[​](#query-analysis "Direct link to Query Analysis") ------------------------------------------------------------------ We can now set up a baseline query analysis import { z } from "zod";const searchSchema = z.object({ query: z.string(), author: z.string(),}); ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llm = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";const system = `Generate a relevant search query for a library system`;const prompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const llmWithTools = llm.withStructuredOutput(searchSchema, { name: "Search",});const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); We can see that if we spell the name exactly correctly, it knows how to handle it await queryAnalyzer.invoke("what are books about aliens by Jesse Knight"); { query: "aliens", author: "Jesse Knight" } The issue is that the values you want to filter on may NOT be spelled exactly correctly await queryAnalyzer.invoke("what are books about aliens by jess knight"); { query: "books about aliens", author: "jess knight" } ### Add in all values[​](#add-in-all-values "Direct link to Add in all values") One way around this is to add ALL possible values to the prompt. 
That will generally guide the query in the right direction const system = `Generate a relevant search query for a library system using the 'search' tool.The 'author' you return to the user MUST be one of the following authors:{authors}Do NOT hallucinate author name!`;const basePrompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const prompt = await basePrompt.partial({ authors: names.join(", ") });const queryAnalyzerAll = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); However… if the list of categoricals is long enough, it may error! try { const res = await queryAnalyzerAll.invoke( "what are books about aliens by jess knight" );} catch (e) { console.error(e);} Error: 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens (50167 in the messages, 30 in the functions). Please reduce the length of the messages or functions. at Function.generate (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/error.mjs:41:20) at OpenAI.makeStatusError (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:256:25) at OpenAI.makeRequest (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:299:30) at eventLoopTick (ext:core/01_core.js:63:7) at async file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/@langchain/openai/0.0.31/dist/chat_models.js:756:29 at async RetryOperation._fn (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/p-retry/4.6.2/index.js:50:12) { status: 400, headers: { "alt-svc": 'h3=":443"; ma=86400', "cf-cache-status": "DYNAMIC", "cf-ray": "885f794b3df4fa52-SJC", "content-length": "340", "content-type": "application/json", date: "Sat, 18 May 2024 23:02:16 GMT", "openai-organization": "langchain", "openai-processing-ms": "230", "openai-version": "2020-10-01", server: "cloudflare", "set-cookie": "_cfuvid=F_c9lnRuQDUhKiUE2eR2PlsxHPldf1OAVMonLlHTjzM-1716073336256-0.0.1.1-604800000; path=/; domain="... 48 more characters, "strict-transport-security": "max-age=15724800; includeSubDomains", "x-ratelimit-limit-requests": "10000", "x-ratelimit-limit-tokens": "2000000", "x-ratelimit-remaining-requests": "9999", "x-ratelimit-remaining-tokens": "1958402", "x-ratelimit-reset-requests": "6ms", "x-ratelimit-reset-tokens": "1.247s", "x-request-id": "req_7b88677d6883fac1520e44543f68c839" }, request_id: "req_7b88677d6883fac1520e44543f68c839", error: { message: "This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens"... 101 more characters, type: "invalid_request_error", param: "messages", code: "context_length_exceeded" }, code: "context_length_exceeded", param: "messages", type: "invalid_request_error", attemptNumber: 1, retriesLeft: 6} We can try to use a longer context window… but with so much information in there, it is not guaranteed to pick it up reliably ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llmLong = new ChatOpenAI({ model: "gpt-4-turbo-preview" }); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llmLong = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llmLong = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llmLong = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llmLong = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llmLong = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); const structuredLlmLong = llmLong.withStructuredOutput(searchSchema, { name: "Search",});const queryAnalyzerAll = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, structuredLlmLong,]); await queryAnalyzerAll.invoke("what are books about aliens by jess knight"); { query: "aliens", author: "jess knight" } ### Find all relevant values[​](#find-and-all-relevant-values "Direct link to Find all relevant values") Instead, what we can do is create a [vector store index](/v0.2/docs/concepts#vectorstores) over the relevant values and then query that for the N most relevant values. import { OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small",});const vectorstore = await MemoryVectorStore.fromTexts(names, {}, embeddings);const selectNames = async (question: string) => { const _docs = await vectorstore.similaritySearch(question, 10); const _names = _docs.map((d) => d.pageContent); return _names.join(", ");};const createPrompt = RunnableSequence.from([ { question: new RunnablePassthrough(), authors: selectNames, }, basePrompt,]);await createPrompt.invoke("what are books by jess knight"); ChatPromptValue { lc_serializable: true, lc_kwargs: { messages: [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Generate a relevant search query for a library system using the 'search' tool.\n" + "\n" + "The 'author' you ret"... 243 more characters, additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Generate a relevant search query for a library system using the 'search' tool.\n" + "\n" + "The 'author' you ret"... 243 more characters, name: undefined, additional_kwargs: {}, response_metadata: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "what are books by jess knight", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what are books by jess knight", name: undefined, additional_kwargs: {}, response_metadata: {} } ] }, lc_namespace: [ "langchain_core", "prompt_values" ], messages: [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Generate a relevant search query for a library system using the 'search' tool.\n" + "\n" + "The 'author' you ret"... 243 more characters, additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Generate a relevant search query for a library system using the 'search' tool.\n" + "\n" + "The 'author' you ret"... 243 more characters, name: undefined, additional_kwargs: {}, response_metadata: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "what are books by jess knight", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what are books by jess knight", name: undefined, additional_kwargs: {}, response_metadata: {} } ]} const queryAnalyzerSelect = createPrompt.pipe(llmWithTools);await queryAnalyzerSelect.invoke("what are books about aliens by jess knight"); { query: "aliens", author: "Jess Knight" }
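It can also be useful to inspect which candidate names the similarity search actually surfaces for a misspelled query. A quick sketch using the `selectNames` helper from above (the exact results depend on the randomly generated names):

```typescript
// The ten names most similar to the question, as one comma-separated string.
console.log(await selectNames("what are books by jess knight"));
// A comma-separated string of the 10 closest names (varies run to run).
```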
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to deal with high cardinality data when constructing queries. Next, check out some of the other query analysis guides in this section, like [how to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries).
https://js.langchain.com/v0.2/docs/tutorials/graph
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build a Question Answering application over a Graph Database On this page Build a Question Answering application over a Graph Database ============================================================ In this guide we’ll go over the basic ways to create a Q&A chain over a graph database. These systems will allow us to ask a question about the data in a graph database and get back a natural language answer. ⚠️ Security note ⚠️[​](#security-note "Direct link to ⚠️ Security note ⚠️") --------------------------------------------------------------------------- Building Q&A systems over graph databases requires executing model-generated graph queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent’s needs. This will mitigate though not eliminate the risks of building a model-driven system. For more on general security best practices, [see here](/v0.2/docs/security). Architecture[​](#architecture "Direct link to Architecture") ------------------------------------------------------------ At a high level, the steps of most graph chains are: 1. **Convert question to a graph database query**: Model converts user input to a graph database query (e.g. Cypher). 2. **Execute graph database query**: Execute the graph database query. 3. **Answer the question**: Model responds to user input using the query results. ![graph_usecase.png](/v0.2/assets/images/graph_usecase-34d891523e6284bb6230b38c5f8392e5.png) Setup[​](#setup "Direct link to Setup") --------------------------------------- #### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i langchain @langchain/community @langchain/openai neo4j-driver yarn add langchain @langchain/community @langchain/openai neo4j-driver pnpm add langchain @langchain/community @langchain/openai neo4j-driver #### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") We’ll use OpenAI in this example: OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database. NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password" The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery); Schema refreshed successfully. [] Graph schema[​](#graph-schema "Direct link to Graph schema") ------------------------------------------------------------ In order for an LLM to be able to generate a Cypher statement, it needs information about the graph schema. When you instantiate a graph object, it retrieves the information about the graph schema. If you later make any changes to the graph, you can run the `refreshSchema` method to refresh the schema information. await graph.refreshSchema();console.log(graph.getSchema()); Node properties are the following:Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}Relationship properties are the following:The relationships are the following:(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie) Great! We’ve got a graph database that we can query. Now let’s try hooking it up to an LLM. Chain[​](#chain "Direct link to Chain") --------------------------------------- Let’s use a simple chain that takes a question, turns it into a Cypher query, executes the query, and uses the result to answer the original question. ![graph_chain.webp](/v0.2/assets/images/graph_chain-6379941793e0fa985e51e4bda0329403.webp) LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](https://python.langchain.com/docs/use_cases/graph/graph_cypher_qa) import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const chain = GraphCypherQAChain.fromLLM({ llm, graph,});const response = await chain.invoke({ query: "What was the cast of the Casino?",});console.log(response); { result: "James Woods, Joe Pesci, Robert De Niro, Sharon Stone" } ### Next steps[​](#next-steps "Direct link to Next steps") For more complex query-generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more check out: * [Prompting strategies](/v0.2/docs/how_to/graph_prompting): Advanced prompt engineering techniques. * [Mapping values](/v0.2/docs/how_to/graph_mapping/): Techniques for mapping values from questions to database. * [Semantic layer](/v0.2/docs/how_to/graph_semantic): Techniques for working implementing semantic layers. * [Constructing graphs](/v0.2/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs. * * * #### Was this page helpful? 
### Next steps[​](#next-steps "Direct link to Next steps") For more complex query-generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more check out: * [Prompting strategies](/v0.2/docs/how_to/graph_prompting): Advanced prompt engineering techniques. * [Mapping values](/v0.2/docs/how_to/graph_mapping/): Techniques for mapping values from questions to the database. * [Semantic layer](/v0.2/docs/how_to/graph_semantic): Techniques for implementing semantic layers. * [Constructing graphs](/v0.2/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs.
https://js.langchain.com/v0.2/docs/how_to/query_multiple_queries
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle multiple queries On this page How to handle multiple queries ============================== Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all queries and then to combine the results. We will show a simple example (using mock data) of how to do that. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community @langchain/openai zod chromadb yarn add @langchain/community @langchain/openai zod chromadb pnpm add @langchain/community @langchain/openai zod chromadb ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true ### Create Index[​](#create-index "Direct link to Create Index") We will create a vectorstore over fake information. import { Chroma } from "@langchain/community/vectorstores/chroma";import { OpenAIEmbeddings } from "@langchain/openai";import "chromadb";const texts = ["Harrison worked at Kensho", "Ankush worked at Facebook"];const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, { collectionName: "multi_query",});const retriever = vectorstore.asRetriever(1); Query analysis[​](#query-analysis "Direct link to Query analysis") ------------------------------------------------------------------ We will use function calling to structure the output. We will let it return multiple queries. import { z } from "zod";const searchSchema = z .object({ queries: z.array(z.string()).describe("Distinct queries to search for"), }) .describe("Search over a database of job records."); ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llm = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";const system = `You have the ability to issue search queries to get information to help answer user information.If you need to look up two distinct pieces of information, you are allowed to do that!`;const prompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const llmWithTools = llm.withStructuredOutput(searchSchema, { name: "Search",});const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); We can see that this allows for creating multiple queries await queryAnalyzer.invoke("where did Harrison Work"); { queries: [ "Harrison" ] } await queryAnalyzer.invoke("where did Harrison and ankush Work"); { queries: [ "Harrison work", "Ankush work" ] } Retrieval with query analysis[​](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis") --------------------------------------------------------------------------------------------------------------- So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously - this will let us loop over the queries and not get blocked on the response time.
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";const chain = async (question: string, config?: RunnableConfig) => { const response = await queryAnalyzer.invoke(question, config); const docs = []; for (const query of response.queries) { const newDocs = await retriever.invoke(query, config); docs.push(...newDocs); } // You probably want to think about reranking or deduplicating documents here // But that is a separate topic return docs;};const customChain = new RunnableLambda({ func: chain }); await customChain.invoke("where did Harrison Work"); [ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ] await customChain.invoke("where did Harrison and ankush Work"); [ Document { pageContent: "Harrison worked at Kensho", metadata: {} }, Document { pageContent: "Ankush worked at Facebook", metadata: {} }]
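The loop above still awaits each query in sequence. Since the retriever calls are independent, you can also fire them concurrently with `Promise.all`. A sketch using the same `queryAnalyzer` and `retriever` as above:

```typescript
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

// Issue all retrieval calls at once instead of one at a time.
const parallelChain = new RunnableLambda({
  func: async (question: string, config?: RunnableConfig) => {
    const { queries } = await queryAnalyzer.invoke(question, config);
    const results = await Promise.all(
      queries.map((query: string) => retriever.invoke(query, config))
    );
    // Flatten the per-query document lists into a single list.
    return results.flat();
  },
});

await parallelChain.invoke("where did Harrison and ankush Work");
```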
Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned some techniques for handling multiple queries in a query analysis system. Next, check out some of the other query analysis guides in this section, like [how to deal with cases where no query is generated](/v0.2/docs/how_to/query_no_queries).
https://js.langchain.com/v0.2/docs/how_to/prompts_composition
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to compose prompts together On this page How to compose prompts together =============================== Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) LangChain provides a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components. String prompt composition[​](#string-prompt-composition "Direct link to String prompt composition") --------------------------------------------------------------------------------------------------- When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt). import { PromptTemplate } from "@langchain/core/prompts";const prompt = PromptTemplate.fromTemplate( `Tell me a joke about {topic}, make it funny and in {language}`);prompt; PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [ "topic", "language" ], templateFormat: "f-string", template: "Tell me a joke about {topic}, make it funny and in {language}" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "topic", "language" ], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "Tell me a joke about {topic}, make it funny and in {language}", validateTemplate: true} await prompt.format({ topic: "sports", language: "spanish" }); "Tell me a joke about sports, make it funny and in spanish" Chat prompt composition[​](#chat-prompt-composition "Direct link to Chat prompt composition") --------------------------------------------------------------------------------------------- A chat prompt is made up of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt. First, let’s initialize a [`SystemMessage`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) that we’ll compose into a [`ChatPromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html). import { AIMessage, HumanMessage, SystemMessage,} from "@langchain/core/messages";const prompt = new SystemMessage("You are a nice pirate"); You can then easily create a pipeline combining it with other messages _or_ message templates. Use a `BaseMessage` when there are no variables to be formatted, and a message prompt template (such as a `HumanMessagePromptTemplate`) when there are. You can also use just a string (note: this will automatically get inferred as a [`HumanMessagePromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html).) import { HumanMessagePromptTemplate } from "@langchain/core/prompts";const newPrompt = HumanMessagePromptTemplate.fromTemplate([ prompt, new HumanMessage("Hi"), new AIMessage("what?"), "{input}",]); Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before! await newPrompt.formatMessages({ input: "i said hi" }); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: [ { type: "text", text: "You are a nice pirate" }, { type: "text", text: "Hi" }, { type: "text", text: "what?"
}, { type: "text", text: "i said hi" } ], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ { type: "text", text: "You are a nice pirate" }, { type: "text", text: "Hi" }, { type: "text", text: "what?" }, { type: "text", text: "i said hi" } ], name: undefined, additional_kwargs: {}, response_metadata: {} }] Using PipelinePrompt[​](#using-pipelineprompt "Direct link to Using PipelinePrompt") ------------------------------------------------------------------------------------ LangChain includes a class called [`PipelinePromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html), which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts: * Final prompt: The final prompt that is returned * Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name. import { PromptTemplate, PipelinePromptTemplate,} from "@langchain/core/prompts";const fullPrompt = PromptTemplate.fromTemplate(`{introduction}{example}{start}`);const introductionPrompt = PromptTemplate.fromTemplate( `You are impersonating {person}.`);const examplePrompt = PromptTemplate.fromTemplate(`Here's an example of an interaction:Q: {example_q}A: {example_a}`);const startPrompt = PromptTemplate.fromTemplate(`Now, do this for real!Q: {input}A:`);const composedPrompt = new PipelinePromptTemplate({ pipelinePrompts: [ { name: "introduction", prompt: introductionPrompt, }, { name: "example", prompt: examplePrompt, }, { name: "start", prompt: startPrompt, }, ], finalPrompt: fullPrompt,}); const formattedPrompt = await composedPrompt.format({ person: "Elon Musk", example_q: `What's your favorite car?`, example_a: "Telsa", input: `What's your favorite social media site?`,});console.log(formattedPrompt); You are impersonating Elon Musk.Here's an example of an interaction:Q: What's your favorite car?A: TelsaNow, do this for real!Q: What's your favorite social media site?A: Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to compose prompts together. Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to pass through arguments from one step to the next ](/v0.2/docs/how_to/passthrough)[ Next How to use legacy LangChain Agents (AgentExecutor) ](/v0.2/docs/how_to/agent_executor) * [String prompt composition](#string-prompt-composition) * [Chat prompt composition](#chat-prompt-composition) * [Using PipelinePrompt](#using-pipelineprompt) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/how_to/query_multiple_retrievers
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle multiple retrievers On this page How to handle multiple retrievers ================================= Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select which retriever to use. We will show a simple example (using mock data) of how to do that. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community @langchain/openai zod chromadb yarn add @langchain/community @langchain/openai zod chromadb pnpm add @langchain/community @langchain/openai zod chromadb ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") OPENAI_API_KEY=your-api-key# Optional, use LangSmith for best-in-class observabilityLANGSMITH_API_KEY=your-api-keyLANGCHAIN_TRACING_V2=true ### Create Index[​](#create-index "Direct link to Create Index") We will create a vectorstore over fake information. import { Chroma } from "@langchain/community/vectorstores/chroma";import { OpenAIEmbeddings } from "@langchain/openai";import "chromadb";const harrisonTexts = ["Harrison worked at Kensho"];const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });const harrisonVectorstore = await Chroma.fromTexts(harrisonTexts, {}, embeddings, { collectionName: "harrison",});const retrieverHarrison = harrisonVectorstore.asRetriever(1);const ankushTexts = ["Ankush worked at Facebook"];const ankushVectorstore = await Chroma.fromTexts(ankushTexts, {}, embeddings, { collectionName: "ankush",});const retrieverAnkush = ankushVectorstore.asRetriever(1); Query analysis[​](#query-analysis "Direct link to Query analysis") ------------------------------------------------------------------ We will use function calling to structure the output. We will have it return a search query along with the person it applies to, so we know which retriever to route to. import { z } from "zod";const searchSchema = z.object({ query: z.string().describe("Query to look up"), person: z .string() .describe( "Person to look things up for. Should be `HARRISON` or `ANKUSH`." ),});
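Since `person` can only take one of two values, you could optionally tighten the schema with a Zod enum so that invalid values fail fast instead of slipping through as arbitrary strings. A minimal alternative sketch (same shape, stricter typing):

const searchSchemaStrict = z.object({
  query: z.string().describe("Query to look up"),
  // z.enum constrains the model's output to exactly these two values.
  person: z.enum(["HARRISON", "ANKUSH"]).describe("Person to look things up for."),
});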
### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llm = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";const system = `You have the ability to issue search queries to get information to help answer user questions.`;const prompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const llmWithTools = llm.withStructuredOutput(searchSchema, { name: "Search",});const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); We can see that this allows for routing between retrievers: await queryAnalyzer.invoke("where did Harrison Work"); { query: "workplace of Harrison", person: "HARRISON" } await queryAnalyzer.invoke("where did ankush Work"); { query: "Workplace of Ankush", person: "ANKUSH" }
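Because `withStructuredOutput` was given a Zod schema, you can also derive a TypeScript type from it and keep any downstream routing code type-safe. A quick sketch using Zod's built-in type inference (reusing the `z` import from above):

// `z.infer` turns the runtime schema into a compile-time type.
type Search = z.infer<typeof searchSchema>;

const analyzed: Search = await queryAnalyzer.invoke("where did Harrison Work");
console.log(analyzed.person); // "HARRISON"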
Retrieval with query analysis[​](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis") --------------------------------------------------------------------------------------------------------------- So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query: const retrievers = { HARRISON: retrieverHarrison, ANKUSH: retrieverAnkush,}; import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";const chain = async (question: string, config?: RunnableConfig) => { const response = await queryAnalyzer.invoke(question, config); const retriever = retrievers[response.person as keyof typeof retrievers]; return retriever.invoke(response.query, config);};const customChain = new RunnableLambda({ func: chain }); await customChain.invoke("where did Harrison Work"); [ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ] await customChain.invoke("where did ankush Work"); [ Document { pageContent: "Ankush worked at Facebook", metadata: {} } ] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned some techniques for handling multiple retrievers in a query analysis system. Next, check out some of the other query analysis guides in this section, like [how to deal with cases where no query is generated](/v0.2/docs/how_to/query_no_queries). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to handle multiple queries ](/v0.2/docs/how_to/query_multiple_queries)[ Next How to handle cases where no queries are generated ](/v0.2/docs/how_to/query_no_queries) * [Setup](#setup) * [Install dependencies](#install-dependencies) * [Set environment variables](#set-environment-variables) * [Create Index](#create-index) * [Query analysis](#query-analysis) * [Retrieval with query analysis](#retrieval-with-query-analysis) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/how_to/agent_executor
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to use legacy LangChain Agents (AgentExecutor) On this page How to use legacy LangChain Agents (AgentExecutor) ================================================== Prerequisites This guide assumes familiarity with the following concepts: * [Tools](/v0.2/docs/concepts#tools) By themselves, language models can’t take actions - they just output text. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it can then determine whether more actions are needed, or whether it is okay to finish. In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it. info This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph). Concepts[​](#concepts "Direct link to Concepts") ------------------------------------------------ Concepts we will cover are: - Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability - Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent - Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online - [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to “remember” past interactions and take them into account when responding to followup questions. - Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith) Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Jupyter Notebook[​](#jupyter-notebook "Direct link to Jupyter Notebook") This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is using one as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc.), and going through guides in an interactive environment is a great way to better understand them. This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install. ### Installation[​](#installation "Direct link to Installation") To install LangChain (and `cheerio` for the web loader) run: * npm * yarn * pnpm npm i langchain cheerio yarn add langchain cheerio pnpm add langchain cheerio For more details, see our [Installation guide](/v0.2/docs/how_to/installation/). ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..."
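If you're running this as a standalone Node.js script rather than from a shell, you can set the same variables in code before invoking any chains. A minimal sketch (the key value is a placeholder):

// Equivalent to the shell exports above; set these before any traced calls run.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "your-api-key";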
Define tools[​](#define-tools "Direct link to Define tools") ------------------------------------------------------------ We first need to create the tools we want to use. We will use two tools: [Tavily](/v0.2/docs/integrations/tools/tavily_search) (to search online) and then a retriever over a local index we will create. ### [Tavily](/v0.2/docs/integrations/tools/tavily_search)[​](#tavily "Direct link to tavily") We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires an API key - they have a free tier, but if you don’t have one or don’t want to create one, you can always ignore this step. Once you create your API key, you will need to export that as: export TAVILY_API_KEY="..." import "cheerio"; // This is required in notebooks to use the `CheerioWebBaseLoader`import { TavilySearchResults } from "@langchain/community/tools/tavily_search";const search = new TavilySearchResults({ maxResults: 2,});await search.invoke("what is the weather in SF"); `[{"title":"Weather in San Francisco","url":"https://www.weatherapi.com/","content":"{'location': {'n`... 1347 more characters ### Retriever[​](#retriever "Direct link to Retriever") We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/v0.2/docs/tutorials/rag). import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";const loader = new CheerioWebBaseLoader( "https://docs.smith.langchain.com/overview");const docs = await loader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const documents = await splitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( documents, new OpenAIEmbeddings());const retriever = vectorStore.asRetriever();(await retriever.invoke("how to upload a dataset"))[0]; Document { pageContent: 'description="A sample dataset in LangSmith.")client.create_examples( inputs=[ {"postfix": '... 891 more characters, metadata: { source: "https://docs.smith.langchain.com/overview", loc: { lines: { from: 4, to: 4 } } }} Now that we have populated our index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it). import { createRetrieverTool } from "langchain/tools/retriever";const retrieverTool = await createRetrieverTool(retriever, { name: "langsmith_search", description: "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",}); ### Tools[​](#tools "Direct link to Tools") Now that we have created both, we can create a list of tools that we will use downstream. const tools = [search, retrieverTool]; Using Language Models[​](#using-language-models "Direct link to Using Language Models") --------------------------------------------------------------------------------------- Next, let’s learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4" }); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); You can call the language model by passing in a list of messages. By default, the response is a `content` string. import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4", temperature: 0 });import { HumanMessage } from "@langchain/core/messages";const response = await model.invoke([new HumanMessage("hi!")]);response.content; "Hello! How can I assist you today?"
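As a convenience, chat models also generally accept a plain string as shorthand for a single human message, so the call above can be shortened. A quick sketch of the equivalent call:

// Equivalent to passing [new HumanMessage("hi!")].
const shorthandResponse = await model.invoke("hi!");
console.log(shorthandResponse.content);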
We can now see what it is like to enable this model to do tool calling. In order to enable that, we use `.bindTools` to give the language model knowledge of these tools: const modelWithTools = model.bindTools(tools); We can now call the model. Let’s first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field. const response = await modelWithTools.invoke([new HumanMessage("Hi!")]);console.log(`Content: ${response.content}`);console.log(`Tool calls: ${response.tool_calls}`); Content: Hello! How can I assist you today?Tool calls: Now, let’s try calling it with some input that would expect a tool to be called. const response = await modelWithTools.invoke([ new HumanMessage("What's the weather in SF?"),]);console.log(`Content: ${response.content}`);console.log(`Tool calls: ${JSON.stringify(response.tool_calls, null, 2)}`); Content:Tool calls: [ { "name": "tavily_search_results_json", "args": { "input": "current weather in San Francisco" }, "id": "call_VcSjZAZkEOx9lcHNZNXAjXkm" }] We can see that there’s now no content, but there is a tool call! It wants us to call the Tavily Search tool. This isn’t calling that tool yet - it’s just telling us to. In order to actually call it, we’ll want to create our agent. Create the agent[​](#create-the-agent "Direct link to Create the agent") ------------------------------------------------------------------------ Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/v0.2/docs/concepts/#agent_types/). We can first choose the prompt we want to use to guide the agent: import { ChatPromptTemplate } from "@langchain/core/prompts";const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"],]);console.log(prompt.promptMessages); [ SystemMessagePromptTemplate { lc_serializable: true, lc_kwargs: { prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [], templateFormat: "f-string", template: "You are a helpful assistant" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "You are a helpful assistant", validateTemplate: true } }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], inputVariables: [], additionalOptions: {}, prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [], templateFormat: "f-string", template: "You are a helpful assistant" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "You are a helpful assistant", validateTemplate: true }, messageClass: undefined, chatMessageClass: undefined }, MessagesPlaceholder { lc_serializable: true, lc_kwargs: { variableName: "chat_history", optional: true }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], variableName: "chat_history", optional: true }, HumanMessagePromptTemplate { lc_serializable: true, lc_kwargs: { prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [Array], templateFormat: "f-string", template:
"{input}" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "input" ], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "{input}", validateTemplate: true } }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], inputVariables: [ "input" ], additionalOptions: {}, prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [ "input" ], templateFormat: "f-string", template: "{input}" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "input" ], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "{input}", validateTemplate: true }, messageClass: undefined, chatMessageClass: undefined }, MessagesPlaceholder { lc_serializable: true, lc_kwargs: { variableName: "agent_scratchpad", optional: true }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], variableName: "agent_scratchpad", optional: true }] Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts/#agents). Note that we are passing in the `model`, not `modelWithTools`. That is because `createToolCallingAgent` will call `.bind` for us under the hood. import { createToolCallingAgent } from "langchain/agents";const agent = await createToolCallingAgent({ llm: model, tools, prompt }); Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). import { AgentExecutor } from "langchain/agents";const agentExecutor = new AgentExecutor({ agent, tools,}); Run the agent[​](#run-the-agent "Direct link to Run the agent") --------------------------------------------------------------- We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won’t remember previous interactions). First up, let’s how it responds when there’s no need to call a tool: await agentExecutor.invoke({ input: "hi!" }); { input: "hi!", output: "Hello! How can I assist you today?" } In order to see exactly what is happening under the hood (and to make sure it’s not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/b8051e80-14fd-4931-be0f-6416280bc500/r) Let’s now try it out on an example where it should be invoking the retriever await agentExecutor.invoke({ input: "how can langsmith help with testing?" }); { input: "how can langsmith help with testing?", output: "LangSmith can be a valuable tool for testing in several ways:\n" + "\n" + "1. **Logging Traces**: LangSmith prov"... 960 more characters} Let’s take a look at the [LangSmith trace](https://smith.langchain.com/public/35bd4f0f-aa2f-4ac2-b9a9-89ce0ca306ca/r) to make sure it’s actually calling that. Now let’s try one where it needs to call the search tool: await agentExecutor.invoke({ input: "whats the weather in sf?" }); { input: "whats the weather in sf?", output: "The current weather in San Francisco, California is partly cloudy with a temperature of 12.2°C (54.0"... 
Run the agent[​](#run-the-agent "Direct link to Run the agent") --------------------------------------------------------------- We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won’t remember previous interactions). First up, let’s see how it responds when there’s no need to call a tool: await agentExecutor.invoke({ input: "hi!" }); { input: "hi!", output: "Hello! How can I assist you today?" } In order to see exactly what is happening under the hood (and to make sure it’s not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/b8051e80-14fd-4931-be0f-6416280bc500/r) Let’s now try it out on an example where it should be invoking the retriever: await agentExecutor.invoke({ input: "how can langsmith help with testing?" }); { input: "how can langsmith help with testing?", output: "LangSmith can be a valuable tool for testing in several ways:\n" + "\n" + "1. **Logging Traces**: LangSmith prov"... 960 more characters} Let’s take a look at the [LangSmith trace](https://smith.langchain.com/public/35bd4f0f-aa2f-4ac2-b9a9-89ce0ca306ca/r) to make sure it’s actually calling that. Now let’s try one where it needs to call the search tool: await agentExecutor.invoke({ input: "whats the weather in sf?" }); { input: "whats the weather in sf?", output: "The current weather in San Francisco, California is partly cloudy with a temperature of 12.2°C (54.0"... 176 more characters} We can check out the [LangSmith trace](https://smith.langchain.com/public/dfde6f46-0e7b-4dfe-813c-87d7bfb2ade5/r) to make sure it’s calling the search tool effectively. Adding in memory[​](#adding-in-memory "Direct link to Adding in memory") ------------------------------------------------------------------------ As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`. **Note**: The input variable needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name. // Here we pass in an empty list of messages for chat_history because it is the first message in the chatawait agentExecutor.invoke({ input: "hi! my name is bob", chat_history: [] }); { input: "hi! my name is bob", chat_history: [], output: "Hello Bob! How can I assist you today?"} import { AIMessage, HumanMessage } from "@langchain/core/messages";await agentExecutor.invoke({ chat_history: [ new HumanMessage("hi! my name is bob"), new AIMessage("Hello Bob! How can I assist you today?"), ], input: "what's my name?",}); { chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! my name is bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! my name is bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Bob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], input: "what's my name?", output: "Your name is Bob. How can I assist you further?"} If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. Because we have multiple inputs, we need to specify two things: * `inputMessagesKey`: The input key to use to add to the conversation history. * `historyMessagesKey`: The key to add the loaded messages into. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history). import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";import { BaseChatMessageHistory } from "@langchain/core/chat_history";import { RunnableWithMessageHistory } from "@langchain/core/runnables";const store: Record<string, BaseChatMessageHistory> = {};function getMessageHistory(sessionId: string): BaseChatMessageHistory { if (!(sessionId in store)) { store[sessionId] = new ChatMessageHistory(); } return store[sessionId];}const agentWithChatHistory = new RunnableWithMessageHistory({ runnable: agentExecutor, getMessageHistory, inputMessagesKey: "input", historyMessagesKey: "chat_history",});await agentWithChatHistory.invoke( { input: "hi! I'm bob" }, { configurable: { sessionId: "<foo>" } }); { input: "hi! I'm bob", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! I'm bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! I'm bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello, Bob!
How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello, Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Hello, Bob! How can I assist you today?"} await agentWithChatHistory.invoke( { input: "what's my name?" }, { configurable: { sessionId: "<foo>" } }); { input: "what's my name?", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! I'm bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! I'm bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello, Bob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello, Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "what's my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what's my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Your name is Bob. How can I assist you further?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Your name is Bob. How can I assist you further?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Your name is Bob. How can I assist you further?"} Example LangSmith trace: [https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r](https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r) Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ That’s a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there’s lot to learn! info This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph). You can also see [this guide to help migrate to LangGraph](/v0.2/docs/how_to/migrate_agent). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). 
[ Previous How to compose prompts together ](/v0.2/docs/how_to/prompts_composition)[ Next How to add values to a chain's state ](/v0.2/docs/how_to/assign) * [Concepts](#concepts) * [Setup](#setup) * [Jupyter Notebook](#jupyter-notebook) * [Installation](#installation) * [LangSmith](#langsmith) * [Define tools](#define-tools) * [Tavily](#tavily) * [Retriever](#retriever) * [Tools](#tools) * [Using Language Models](#using-language-models) * [Create the agent](#create-the-agent) * [Run the agent](#run-the-agent) * [Adding in memory](#adding-in-memory) * [Next steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/tutorials/chatbot
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build a Chatbot On this page Build a Chatbot =============== Overview[​](#overview "Direct link to Overview") ------------------------------------------------ Prerequisites This guide assumes familiarity with the following concepts: * [Chat Models](/v0.2/docs/concepts/#chat-models) * [Prompt Templates](/v0.2/docs/concepts/#prompt-templates) * [Chat History](/v0.2/docs/concepts/#chat-history) We’ll go over an example of how to design and implement an LLM-powered chatbot. This chatbot will be able to have a conversation and remember previous interactions. Note that the chatbot we build will only use the language model to have a conversation. There are several other related concepts that you may be looking for: * [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history): Enable a chatbot experience over an external source of data * [Agents](/v0.2/docs/tutorials/agents): Build a chatbot that can take actions This tutorial will cover the basics which will be helpful for those two more advanced topics, but feel free to skip directly to them should you choose. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Installation[​](#installation "Direct link to Installation") To install LangChain run: * npm * yarn * pnpm npm i langchain yarn add langchain pnpm add langchain For more details, see our [Installation guide](/v0.2/docs/how_to/installation). ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..." Quickstart[​](#quickstart "Direct link to Quickstart") ------------------------------------------------------ First up, let’s learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below! ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-3.5-turbo" }); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); Let’s first use the model directly. `ChatModel`s are instances of LangChain “Runnables”, which means they expose a standard interface for interacting with them. To just simply call the model, we can pass in a list of messages to the `.invoke` method. import { HumanMessage } from "@langchain/core/messages";await model.invoke([new HumanMessage({ content: "Hi! I'm Bob" })]); AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Bob, it's nice to meet you! I'm an AI assistant created by Anthropic. How are you doing today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_015Qvu91azZviks5VzGvYT7z", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 30 }, stop_reason: "end_turn" }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Bob, it's nice to meet you! I'm an AI assistant created by Anthropic. How are you doing today?", name: undefined, additional_kwargs: { id: "msg_015Qvu91azZviks5VzGvYT7z", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 30 }, stop_reason: "end_turn" }, response_metadata: { id: "msg_015Qvu91azZviks5VzGvYT7z", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 30 }, stop_reason: "end_turn" }, tool_calls: [], invalid_tool_calls: []} The model on its own does not have any concept of state. For example, if you ask a followup question: await model.invoke([new HumanMessage({ content: "What's my name?" 
})]); AIMessage { lc_serializable: true, lc_kwargs: { content: "I'm afraid I don't actually know your name. I'm Claude, an AI assistant created by Anthropic.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01TNDCwsU7ruVoqJwjKqNrzJ", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 27 }, stop_reason: "end_turn" }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm afraid I don't actually know your name. I'm Claude, an AI assistant created by Anthropic.", name: undefined, additional_kwargs: { id: "msg_01TNDCwsU7ruVoqJwjKqNrzJ", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 27 }, stop_reason: "end_turn" }, response_metadata: { id: "msg_01TNDCwsU7ruVoqJwjKqNrzJ", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 12, output_tokens: 27 }, stop_reason: "end_turn" }, tool_calls: [], invalid_tool_calls: []} Let’s take a look at the example [LangSmith trace](https://smith.langchain.com/public/e5a0ae1b-32b9-4beb-836d-38f40bfa6762/r). We can see that it doesn’t take the previous conversation turn into context, and cannot answer the question. This makes for a terrible chatbot experience! To get around this, we need to pass the entire conversation history into the model. Let’s see what happens when we do that: import { AIMessage } from "@langchain/core/messages";await model.invoke([ new HumanMessage({ content: "Hi! I'm Bob" }), new AIMessage({ content: "Hello Bob! How can I assist you today?" }), new HumanMessage({ content: "What's my name?" }),]); AIMessage { lc_serializable: true, lc_kwargs: { content: "You said your name is Bob.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { id: "msg_01AEQMme3Z1MFKHW8PeDBJ7g", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 33, output_tokens: 10 }, stop_reason: "end_turn" }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "You said your name is Bob.", name: undefined, additional_kwargs: { id: "msg_01AEQMme3Z1MFKHW8PeDBJ7g", type: "message", role: "assistant", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 33, output_tokens: 10 }, stop_reason: "end_turn" }, response_metadata: { id: "msg_01AEQMme3Z1MFKHW8PeDBJ7g", model: "claude-3-sonnet-20240229", stop_sequence: null, usage: { input_tokens: 33, output_tokens: 10 }, stop_reason: "end_turn" }, tool_calls: [], invalid_tool_calls: []} And now we can see that we get a good response! This is the basic idea underpinning a chatbot’s ability to interact conversationally. So how do we best implement this? Message History[​](#message-history "Direct link to Message History") --------------------------------------------------------------------- We can use a Message History class to wrap our model and make it stateful. This will keep track of inputs and outputs of the model, and store them in some datastore. Future interactions will then load those messages and pass them into the chain as part of the input. Let’s see how to use this! We import the relevant classes and set up our chain which wraps the model and adds in this message history. A key part here is the function we pass in as `getMessageHistory`. This function is expected to take in a `sessionId` and return a Message History object.
This `sessionId` is used to distinguish between separate conversations, and should be passed in as part of the config when calling the new chain. Let’s also create a simple chain by adding a prompt to help with formatting: // We use an ephemeral, in-memory chat history for this demo.import { InMemoryChatMessageHistory } from "@langchain/core/chat_history";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableWithMessageHistory } from "@langchain/core/runnables";const messageHistories: Record<string, InMemoryChatMessageHistory> = {};const prompt = ChatPromptTemplate.fromMessages([ [ "system", `You are a helpful assistant who remembers all details the user shares with you.`, ], ["placeholder", "{chat_history}"], ["human", "{input}"],]);const chain = prompt.pipe(model);const withMessageHistory = new RunnableWithMessageHistory({ runnable: chain, getMessageHistory: async (sessionId) => { if (messageHistories[sessionId] === undefined) { messageHistories[sessionId] = new InMemoryChatMessageHistory(); } return messageHistories[sessionId]; }, inputMessagesKey: "input", historyMessagesKey: "chat_history",}); We now need to create a `config` that we pass into the runnable every time. This config contains information that is not part of the input directly, but is still useful. In this case, we want to include a `sessionId`. This should look like: const config = { configurable: { sessionId: "abc2", },};const response = await withMessageHistory.invoke( { input: "Hi! I'm Bob", }, config);response.content; "Hi Bob, nice to meet you! I'm an AI assistant. I'll remember that your name is Bob as we continue ou"... 110 more characters const followupResponse = await withMessageHistory.invoke( { input: "What's my name?", }, config);followupResponse.content; "Your name is Bob. You introduced yourself as Bob at the start of our conversation." Great! Our chatbot now remembers things about us. If we change the config to reference a different `sessionId`, we can see that it starts the conversation fresh. const config = { configurable: { sessionId: "abc3", },};const response = await withMessageHistory.invoke( { input: "What's my name?", }, config);response.content; "I'm afraid I don't actually know your name. As an AI assistant without any prior context about you, "... 61 more characters However, we can always go back to the original conversation (since we are persisting it in our message store) const config = { configurable: { sessionId: "abc2", },};const response = await withMessageHistory.invoke( { input: "What's my name?", }, config);response.content; `Your name is Bob. I clearly remember you telling me "Hi! I'm Bob" when we started talking.` This is how we can support a chatbot having conversations with many users! Managing Conversation History[​](#managing-conversation-history "Direct link to Managing Conversation History") --------------------------------------------------------------------------------------------------------------- One important concept to understand when building chatbots is how to manage conversation history. If left unmanaged, the list of messages will grow unbounded and potentially overflow the context window of the LLM. Therefore, it is important to add a step that limits the size of the messages you are passing in.
**Importantly, you will want to do this BEFORE the prompt template but AFTER you load previous messages from Message History.** We can do this by adding a simple step in front of the prompt that modifies the `chat_history` key appropriately, and then wrap that new chain in the Message History class. First, let’s define a function that will modify the messages passed in. Let’s make it so that it selects the 10 most recent messages. We can then create a new chain by adding that at the start. import type { BaseMessage } from "@langchain/core/messages";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";const filterMessages = ({ chat_history }: { chat_history: BaseMessage[] }) => { return chat_history.slice(-10);};const chain = RunnableSequence.from([ RunnablePassthrough.assign({ chat_history: filterMessages, }), prompt, model,]); Let’s now try it out! If we create a list of messages more than 10 messages long, we can see that it no longer remembers information in the early messages. const messages = [ new HumanMessage({ content: "hi! I'm bob" }), new AIMessage({ content: "hi!" }), new HumanMessage({ content: "I like vanilla ice cream" }), new AIMessage({ content: "nice" }), new HumanMessage({ content: "whats 2 + 2" }), new AIMessage({ content: "4" }), new HumanMessage({ content: "thanks" }), new AIMessage({ content: "No problem!" }), new HumanMessage({ content: "having fun?" }), new AIMessage({ content: "yes!" }), new HumanMessage({ content: "That's great!" }), new AIMessage({ content: "yes it is!" }),]; const response = await chain.invoke({ chat_history: messages, input: "what's my name?",});response.content; "I'm afraid I don't actually know your name. You haven't provided that detail to me yet." But if we ask about information that is within the last ten messages, it still remembers it. const response = await chain.invoke({ chat_history: messages, input: "what's my fav ice cream",});response.content; "You said earlier that you like vanilla ice cream." Let’s now wrap this chain in a `RunnableWithMessageHistory` constructor. For demo purposes, we will also slightly modify our `getMessageHistory()` method to always start new sessions with the previously declared list of messages to simulate several conversation turns: const messageHistories: Record<string, InMemoryChatMessageHistory> = {};const withMessageHistory = new RunnableWithMessageHistory({ runnable: chain, getMessageHistory: async (sessionId) => { if (messageHistories[sessionId] === undefined) { const messageHistory = new InMemoryChatMessageHistory(); await messageHistory.addMessages(messages); messageHistories[sessionId] = messageHistory; } return messageHistories[sessionId]; }, inputMessagesKey: "input", historyMessagesKey: "chat_history",});const config = { configurable: { sessionId: "abc4", },};const response = await withMessageHistory.invoke( { input: "whats my name?", }, config);response.content; "I'm afraid I don't actually know your name since you haven't provided it to me yet. I don't have pe"... 66 more characters There are now two new messages in the chat history. This means that even more information that used to be accessible in our conversation history is no longer available! const response = await withMessageHistory.invoke( { input: "whats my favorite ice cream?", }, config);response.content; "I'm sorry, I don't have any information about your favorite ice cream flavor since you haven't share"... 167 more characters
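Trimming to the last 10 messages is a simple heuristic; in practice you may prefer to trim by an approximate token budget instead, since messages vary widely in length. A rough sketch of a drop-in replacement for `filterMessages`, using message length as a crude stand-in for real token counting (an assumption, not an accurate tokenizer), reusing the `BaseMessage` import from above:

const filterByBudget = ({ chat_history }: { chat_history: BaseMessage[] }) => {
  const budget = 1000; // rough character budget; swap in a real token counter as needed
  const kept: BaseMessage[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping messages until the budget is spent.
  for (const message of [...chat_history].reverse()) {
    const length = typeof message.content === "string" ? message.content.length : 0;
    if (used + length > budget) break;
    kept.unshift(message);
    used += length;
  }
  return kept;
};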
If you take a look at LangSmith, you can see exactly what is happening under the hood in the [LangSmith trace](https://smith.langchain.com/public/ebc2e1e7-0703-43f7-a476-8cb8cbd7f61a/r). Navigate to the chat model call to see exactly which messages are getting filtered out. Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- Now we’ve got a functional chatbot. However, one _really_ important UX consideration for chatbot applications is streaming. LLMs can sometimes take a while to respond, and so in order to improve the user experience one thing that most applications do is stream back each token as it is generated. This allows the user to see progress. It’s actually super easy to do this! All chains expose a `.stream()` method, and ones that use message history are no different. We can simply use that method to get back a streaming response. const config = { configurable: { sessionId: "abc6", },};const stream = await withMessageHistory.stream( { input: "hi! I'm todd. tell me a joke", }, config);for await (const chunk of stream) { console.log("|", chunk.content);} || Hi| Tod| d!| Here| 's| a| silly| joke| for| you| :|Why| di| d the| tom| ato| turn| re| d?| Because| it| saw| the| sal| a| d| dressing| !|| Next Steps[​](#next-steps "Direct link to Next Steps") ------------------------------------------------------ Now that you understand the basics of how to create a chatbot in LangChain, some more advanced tutorials you may be interested in are: * [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history): Enable a chatbot experience over an external source of data * [Agents](/v0.2/docs/tutorials/agents): Build a chatbot that can take actions If you want to dive deeper on specifics, some things worth checking out are: * [Streaming](/v0.2/docs/how_to/streaming): streaming is _crucial_ for chat applications * [How to add message history](/v0.2/docs/how_to/message_history): for a deeper dive into all things related to message history * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Build a Simple LLM Application with LCEL ](/v0.2/docs/tutorials/llm_chain)[ Next Build an Agent ](/v0.2/docs/tutorials/agents) * [Overview](#overview) * [Setup](#setup) * [Installation](#installation) * [LangSmith](#langsmith) * [Quickstart](#quickstart) * [Message History](#message-history) * [Managing Conversation History](#managing-conversation-history) * [Streaming](#streaming) * [Next Steps](#next-steps)
null
https://js.langchain.com/v0.2/docs/tutorials/llm_chain
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build a Simple LLM Application with LCEL On this page Build a Simple LLM Application with LCEL ======================================== In this quickstart we’ll show you how to build a simple LLM application with LangChain. This application will translate text from English into another language. This is a relatively simple LLM application - it’s just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain - a lot of features can be built with just some prompting and an LLM call! After reading this tutorial, you’ll have a high level overview of: * Using [language models](/v0.2/docs/concepts/#chat-models) * Using [PromptTemplates](/v0.2/docs/concepts/#prompt-templates) and [OutputParsers](/v0.2/docs/concepts/#output-parsers) * Using [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) to chain components together * Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith) Let’s dive in! Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Installation[​](#installation "Direct link to Installation") To install LangChain run: * npm * yarn * pnpm npm i langchain yarn add langchain pnpm add langchain For more details, see our [Installation guide](/v0.2/docs/how_to/installation/). ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..." Using Language Models[​](#using-language-models "Direct link to Using Language Models") --------------------------------------------------------------------------------------- First up, let’s learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below! ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const model = new ChatOpenAI({ model: "gpt-4" }); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const model = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); Let’s first use the model directly. `ChatModel`s are instances of LangChain “Runnables”, which means they expose a standard interface for interacting with them. To simply call the model, we can pass in a list of messages to the `.invoke` method. import { HumanMessage, SystemMessage } from "@langchain/core/messages";const messages = [ new SystemMessage("Translate the following from English into Italian"), new HumanMessage("hi!"),];await model.invoke(messages); AIMessage { lc_serializable: true, lc_kwargs: { content: "ciao!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "ciao!", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 3, promptTokens: 20, totalTokens: 23 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []} If we’ve enabled LangSmith, we can see that this run is logged to LangSmith, and can view the [LangSmith trace](https://smith.langchain.com/public/45f1a650-38fb-41e1-9b61-becc0684f2ce/r). OutputParsers[​](#outputparsers "Direct link to OutputParsers") --------------------------------------------------------------- Notice that the response from the model is an `AIMessage`. This contains a string response along with other metadata about the response. Oftentimes we may just want to work with the string response. We can parse out just this response by using a simple output parser. We first import the simple output parser. 
import { StringOutputParser } from "@langchain/core/output_parsers";const parser = new StringOutputParser(); One way to use it is by itself. For example, we could save the result of the language model call and then pass it to the parser. const result = await model.invoke(messages); await parser.invoke(result); "ciao!" Chaining together components with LCEL[​](#chaining-together-components-with-lcel "Direct link to Chaining together components with LCEL") ------------------------------------------------------------------------------------------------------------------------------------------ We can also “chain” the model to the output parser. This means this output parser will get called with the output from the model. This chain takes on the input type of the language model (string or list of messages) and returns the output type of the output parser (string). We can create the chain using the `.pipe()` method. The `.pipe()` method is used in LangChain to combine two elements together. const chain = model.pipe(parser); await chain.invoke(messages); "Ciao!" This is a simple example of using [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) to chain together LangChain modules. There are several benefits to this approach, including optimized streaming and tracing support. If we now look at LangSmith, we can see that the chain has two steps: first the language model is called, then the result of that is passed to the output parser. We can see the [LangSmith trace](https://smith.langchain.com/public/05bec1c1-fc51-4b2c-ab3b-4b63709e4462/r). Prompt Templates[​](#prompt-templates "Direct link to Prompt Templates") ------------------------------------------------------------------------ Right now we are passing a list of messages directly into the language model. Where does this list of messages come from? Usually it is constructed from a combination of user input and application logic. This application logic usually takes the raw user input and transforms it into a list of messages ready to pass to the language model. Common transformations include adding a system message or formatting a template with the user input. PromptTemplates are a concept in LangChain designed to assist with this transformation. They take in raw user input and return data (a prompt) that is ready to pass into a language model. Let’s create a PromptTemplate here. It will take in two user variables: * `language`: The language to translate text into * `text`: The text to translate import { ChatPromptTemplate } from "@langchain/core/prompts"; First, let’s create a string that we will format to be the system message: const systemTemplate = "Translate the following into {language}:"; Next, we can create the PromptTemplate. This will be a combination of the `systemTemplate` as well as a simpler template for where to put the text: const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", systemTemplate], ["user", "{text}"],]); The input to this prompt template is a dictionary. 
We can play around with this prompt template by itself to see what it does: const result = await promptTemplate.invoke({ language: "italian", text: "hi" });result; ChatPromptValue { lc_serializable: true, lc_kwargs: { messages: [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Translate the following into italian:", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Translate the following into italian:", name: undefined, additional_kwargs: {}, response_metadata: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi", name: undefined, additional_kwargs: {}, response_metadata: {} } ] }, lc_namespace: [ "langchain_core", "prompt_values" ], messages: [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Translate the following into italian:", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Translate the following into italian:", name: undefined, additional_kwargs: {}, response_metadata: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi", name: undefined, additional_kwargs: {}, response_metadata: {} } ]} We can see that it returns a `ChatPromptValue` that consists of two messages. If we want to access the messages directly we do: result.toChatMessages(); [ SystemMessage { lc_serializable: true, lc_kwargs: { content: "Translate the following into italian:", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Translate the following into italian:", name: undefined, additional_kwargs: {}, response_metadata: {} }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi", name: undefined, additional_kwargs: {}, response_metadata: {} }] We can now combine this with the model and the output parser from above. This will chain all three components together. const chain = promptTemplate.pipe(model).pipe(parser); await chain.invoke({ language: "italian", text: "hi" }); "ciao" If we take a look at the LangSmith trace, we can see all three components show up in the [LangSmith trace](https://smith.langchain.com/public/cef6edcd-39ed-4c1e-86f7-491a1b611aeb/r). Conclusion[​](#conclusion "Direct link to Conclusion") ------------------------------------------------------ That’s it! In this tutorial you’ve learned how to create your first simple LLM application. You’ve learned how to work with language models, how to parse their outputs, how to create a prompt template, how to chain them together with LCEL, and how to get great observability into chains you create with LangSmith. This just scratches the surface of what you will want to learn to become a proficient AI Engineer. Luckily - we’ve got a lot of other resources! For further reading on the core concepts of LangChain, we’ve got detailed [Conceptual Guides](/v0.2/docs/concepts). 
If you have more specific questions on these concepts, check out the following sections of the how-to guides: * [LangChain Expression Language (LCEL)](/v0.2/docs/how_to/#langchain-expression-language) * [Prompt templates](/v0.2/docs/how_to/#prompt-templates) * [Chat models](/v0.2/docs/how_to/#chat-models) * [Output parsers](/v0.2/docs/how_to/#output-parsers) And the LangSmith docs: * [LangSmith](https://docs.smith.langchain.com)
https://js.langchain.com/v0.2/docs/how_to/query_no_queries
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to handle cases where no queries are generated On this page How to handle cases where no queries are generated ================================================== Prerequisites This guide assumes familiarity with the following: * [Query analysis](/v0.2/docs/tutorials/query_analysis) Sometimes, a query analysis technique may allow for any number of queries to be generated - including no queries! In this case, our overall chain will need to inspect the result of the query analysis before deciding whether to call the retriever or not. We will use mock data for this example. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install dependencies[​](#install-dependencies "Direct link to Install dependencies") tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community @langchain/openai zod chromadb yarn add @langchain/community @langchain/openai zod chromadb pnpm add @langchain/community @langchain/openai zod chromadb ### Set environment variables[​](#set-environment-variables "Direct link to Set environment variables") OPENAI_API_KEY=your-api-key # Optional, use LangSmith for best-in-class observability LANGSMITH_API_KEY=your-api-key LANGCHAIN_TRACING_V2=true ### Create Index[​](#create-index "Direct link to Create Index") We will create a vectorstore over fake information. import { Chroma } from "@langchain/community/vectorstores/chroma";import { OpenAIEmbeddings } from "@langchain/openai";import "chromadb";const texts = ["Harrison worked at Kensho"];const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, { collectionName: "harrison",});const retriever = vectorstore.asRetriever(1); Query analysis[​](#query-analysis "Direct link to Query analysis") ------------------------------------------------------------------ We will use function calling to structure the output. However, we will configure the LLM such that it doesn’t NEED to call the function representing a search query (should it decide not to). We will also then use a prompt to do query analysis that explicitly lays out when it should and shouldn’t make a search. import { z } from "zod";const searchSchema = z.object({ query: z.string().describe("Similarity search query applied to job record."),}); ### Pick your chat model: * OpenAI * Anthropic * FireworksAI * MistralAI * Groq * VertexAI #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai #### Add environment variables OPENAI_API_KEY=your-api-key #### Instantiate the model import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic #### Add environment variables ANTHROPIC_API_KEY=your-api-key #### Instantiate the model import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/community yarn add @langchain/community pnpm add @langchain/community #### Add environment variables FIREWORKS_API_KEY=your-api-key #### Instantiate the model import { ChatFireworks } from "@langchain/community/chat_models/fireworks";const llm = new ChatFireworks({ model: "accounts/fireworks/models/firefunction-v1", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/mistralai yarn add @langchain/mistralai pnpm add @langchain/mistralai #### Add environment variables MISTRAL_API_KEY=your-api-key #### Instantiate the model import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/groq yarn add @langchain/groq pnpm add @langchain/groq #### Add environment variables GROQ_API_KEY=your-api-key #### Instantiate the model import { ChatGroq } from "@langchain/groq";const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0}); #### Install dependencies tip See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/google-vertexai yarn add @langchain/google-vertexai pnpm add @langchain/google-vertexai #### Add environment variables GOOGLE_APPLICATION_CREDENTIALS=credentials.json #### Instantiate the model import { ChatVertexAI } from "@langchain/google-vertexai";const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0}); import { zodToJsonSchema } from "zod-to-json-schema";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";const system = `You have the ability to issue search queries to get information to help answer user questions. You do not NEED to look things up. If you don't need to, then just respond normally.`;const prompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const llmWithTools = llm.bind({ tools: [ { type: "function" as const, function: { name: "search", description: "Search over a database of job records.", parameters: zodToJsonSchema(searchSchema), }, }, ],});const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools,]); We can see that by invoking this we get a message that sometimes - but not always - returns a tool call. 
await queryAnalyzer.invoke("where did Harrison work"); AIMessage { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: { function_call: undefined, tool_calls: [ { id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd", type: "function", function: [Object] } ] } }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: [ { id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd", type: "function", function: { name: "search", arguments: '{"query":"Harrison"}' } } ] }} await queryAnalyzer.invoke("hi!"); AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello! How can I assist you today?", additional_kwargs: { function_call: undefined, tool_calls: undefined } }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello! How can I assist you today?", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }} Retrieval with query analysis[​](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis") --------------------------------------------------------------------------------------------------------------- So how would we include this in a chain? Let’s look at an example below. import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";const outputParser = new JsonOutputKeyToolsParser({ keyName: "search",}); import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";const chain = async (question: string, config?: RunnableConfig) => { const response = await queryAnalyzer.invoke(question, config); if ( "tool_calls" in response.additional_kwargs && response.additional_kwargs.tool_calls !== undefined ) { const query = await outputParser.invoke(response, config); return retriever.invoke(query[0].query, config); } else { return response; }};const customChain = new RunnableLambda({ func: chain }); await customChain.invoke("where did Harrison Work"); [ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ] await customChain.invoke("hi!"); AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello! How can I assist you today?", additional_kwargs: { function_call: undefined, tool_calls: undefined } }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello! How can I assist you today?", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned some techniques for handling irrelevant questions in query analysis systems. Next, check out some of the other query analysis guides in this section, like [how to use few-shot examples](/v0.2/docs/how_to/query_few_shot).
https://js.langchain.com/v0.2/docs/how_to/parallel
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to invoke runnables in parallel On this page How to invoke runnables in parallel =================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) The [`RunnableParallel`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableParallel.html) (also known as a `RunnableMap`) primitive is an object whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the initial input to the `RunnableParallel`. The final return value is an object with the results of each value under its appropriate key. Formatting with `RunnableParallels`[​](#formatting-with-runnableparallels "Direct link to formatting-with-runnableparallels") ----------------------------------------------------------------------------------------------------------------------------- `RunnableParallels` are useful for parallelizing operations, but can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. You can use them to split or fork the chain so that multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:

        Input
         / \
        /   \
   Branch1 Branch2
        \   /
         \ /
       Combine

Below, the input to each chain in the `RunnableParallel` is expected to be an object with a key for `"topic"`. We can satisfy that requirement by invoking our chain with an object matching that structure. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). 
* npm * Yarn * pnpm npm install @langchain/anthropic @langchain/cohere yarn add @langchain/anthropic @langchain/cohere pnpm add @langchain/anthropic @langchain/cohere import { PromptTemplate } from "@langchain/core/prompts";import { RunnableMap } from "@langchain/core/runnables";import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({});const jokeChain = PromptTemplate.fromTemplate( "Tell me a joke about {topic}").pipe(model);const poemChain = PromptTemplate.fromTemplate( "write a 2-line poem about {topic}").pipe(model);const mapChain = RunnableMap.from({ joke: jokeChain, poem: poemChain,});const result = await mapChain.invoke({ topic: "bear" });console.log(result);/* { joke: AIMessage { content: " Here's a silly joke about a bear:\n" + '\n' + 'What do you call a bear with no teeth?\n' + 'A gummy bear!', additional_kwargs: {} }, poem: AIMessage { content: ' Here is a 2-line poem about a bear:\n' + '\n' + 'Furry and wild, the bear roams free \n' + 'Foraging the forest, strong as can be', additional_kwargs: {} } }*/ #### API Reference: * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [RunnableMap](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableMap.html) from `@langchain/core/runnables` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Manipulating outputs/inputs[​](#manipulating-outputsinputs "Direct link to Manipulating outputs/inputs") -------------------------------------------------------------------------------------------------------- Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. Note below that the object within the `RunnableSequence.from()` call is automatically coerced into a runnable map. All keys of the object must have values that are runnables or can be themselves coerced to runnables (functions to `RunnableLambda`s or objects to `RunnableMap`s). This coercion will also occur when composing chains via the `.pipe()` method. 
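To make that coercion concrete before the full retrieval example below, here is a minimal sketch (not from the original page; the runnable and key names are illustrative) of piping into a plain object literal:

import { RunnableLambda } from "@langchain/core/runnables";

// A plain function passed to .pipe() is coerced into a RunnableLambda,
// and an object literal is coerced into a RunnableMap whose values all
// receive the same input and run in parallel.
const double = RunnableLambda.from((x: number) => x * 2);

const forked = double.pipe({
  original: (x: number) => x, // coerced to a RunnableLambda
  plusOne: (x: number) => x + 1, // coerced to a RunnableLambda
});

// Both branches receive the output of `double`, so this logs:
// { original: 10, plusOne: 11 }
console.log(await forked.invoke(5));

The retrieval example below relies on the same coercion: the object passed to `RunnableSequence.from()` becomes a `RunnableMap`.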
import { CohereEmbeddings } from "@langchain/cohere";import { PromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { Document } from "@langchain/core/documents";import { ChatAnthropic } from "@langchain/anthropic";import { MemoryVectorStore } from "langchain/vectorstores/memory";const model = new ChatAnthropic();const vectorstore = await MemoryVectorStore.fromDocuments( [{ pageContent: "mitochondria is the powerhouse of the cell", metadata: {} }], new CohereEmbeddings());const retriever = vectorstore.asRetriever();const template = `Answer the question based only on the following context:{context}Question: {question}`;const prompt = PromptTemplate.fromTemplate(template);const formatDocs = (docs: Document[]) => docs.map((doc) => doc.pageContent);const retrievalChain = RunnableSequence.from([ { context: retriever.pipe(formatDocs), question: new RunnablePassthrough() }, prompt, model, new StringOutputParser(),]);const result = await retrievalChain.invoke( "what is the powerhouse of the cell?");console.log(result);/* Based on the given context, the powerhouse of the cell is mitochondria.*/ #### API Reference: * [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` Here the input to the prompt is expected to be a map with keys "context" and "question". The user input is just the question. So we need to get the context using our retriever and pass through the user input under the "question" key. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You now know some ways to format and parallelize chain steps with `RunnableParallel`. Next, you might be interested in [using custom logic](/v0.2/docs/how_to/functions/) in your chains.
https://js.langchain.com/v0.2/docs/tutorials/agents
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build an Agent On this page Build an Agent ============== Prerequisites This guide assumes familiarity with the following concepts: * [Chat Models](/v0.2/docs/concepts/#chat-models) * [Tools](/v0.2/docs/concepts/#tools) * [Agents](/v0.2/docs/concepts/#agents) By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it can determine whether more actions are needed, or whether it is okay to finish. In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it. Setup: LangSmith[​](#setup-langsmith "Direct link to Setup: LangSmith") ----------------------------------------------------------------------- By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important. [LangSmith](https://smith.langchain.com) is especially useful for such cases. When building with LangChain, all steps will automatically be traced in LangSmith. To set up LangSmith we just need to set the following environment variables: export LANGCHAIN_TRACING_V2="true" export LANGCHAIN_API_KEY="<your-api-key>" Define tools[​](#define-tools "Direct link to Define tools") ------------------------------------------------------------ We first need to create the tools we want to use. We will use two tools: [Tavily](https://app.tavily.com) (to search online) and then a retriever over a local index we will create. ### [Tavily](https://app.tavily.com)[​](#tavily "Direct link to tavily") We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires a Tavily API key set as an environment variable named `TAVILY_API_KEY` - they have a free tier, but if you don’t have one or don’t want to create one, you can always ignore this step. import { TavilySearchResults } from "@langchain/community/tools/tavily_search";const searchTool = new TavilySearchResults();const toolResult = await searchTool.invoke("what is the weather in SF?");console.log(toolResult);/* [{"title":"Weather in December 2023 in San Francisco, California, USA","url":"https://www.timeanddate.com/weather/@5391959/historic?month=12&year=2023","content":"Currently: 52 °F. Broken clouds. (Weather station: San Francisco International Airport, USA). See more current weather Select month: December 2023 Weather in San Francisco — Graph °F Sun, Dec 17 Lo:55 6 pm Hi:57 4 Mon, Dec 18 Lo:54 12 am Hi:55 7 Lo:54 6 am Hi:55 10 Lo:57 12 pm Hi:64 9 Lo:63 6 pm Hi:64 14 Tue, Dec 19 Lo:61","score":0.96006},...]*/ ### Retriever[​](#retriever "Direct link to Retriever") We will also create a retriever over some data of our own. For a deeper explanation of each step here, see our [how to guides](/v0.2/docs/how_to/). 
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";const loader = new CheerioWebBaseLoader( "https://docs.smith.langchain.com/user_guide");const rawDocs = await loader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const docs = await splitter.splitDocuments(rawDocs);const vectorstore = await MemoryVectorStore.fromDocuments( docs, new OpenAIEmbeddings());const retriever = vectorstore.asRetriever();const retrieverResult = await retriever.invoke("how to upload a dataset");console.log(retrieverResult[0]);/* Document { pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to", metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: { lines: [Object] } } }*/ Now that we have populated the index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it): import { createRetrieverTool } from "langchain/tools/retriever";const retrieverTool = createRetrieverTool(retriever, { name: "langsmith_search", description: "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",}); ### Tools[​](#tools "Direct link to Tools") Now that we have created both, we can create a list of tools that we will use downstream: const tools = [searchTool, retrieverTool]; Create the agent[​](#create-the-agent "Direct link to Create the agent") ------------------------------------------------------------------------ Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](https://js.langchain.com/v0.1/docs/modules/agents/agent_types/). First, we choose the LLM that will guide the agent. 
import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,}); Next, we choose the prompt we want to use to guide the agent: import type { ChatPromptTemplate } from "@langchain/core/prompts";import { pull } from "langchain/hub";// Get the prompt to use - you can modify this!// If you want to see the prompt in full, you can view it at:// https://smith.langchain.com/hub/hwchase17/openai-functions-agentconst prompt = await pull<ChatPromptTemplate>( "hwchase17/openai-functions-agent"); Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts#agents). import { createOpenAIFunctionsAgent } from "langchain/agents";const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt,}); Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts#agents). import { AgentExecutor } from "langchain/agents";const agentExecutor = new AgentExecutor({ agent, tools,}); Run the agent[​](#run-the-agent "Direct link to Run the agent") --------------------------------------------------------------- We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won’t remember previous interactions). const result1 = await agentExecutor.invoke({ input: "hi!",});console.log(result1);/* [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "hi!" } [chain/end] [1:chain:AgentExecutor] [1.36s] Exiting Chain run with output: { "output": "Hello! How can I assist you today?" } { input: 'hi!', output: 'Hello! How can I assist you today?' }*/ const result2 = await agentExecutor.invoke({ input: "how can langsmith help with testing?",});console.log(result2);/* [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "how can langsmith help with testing?" } [chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:parser:OpenAIFunctionsAgentOutputParser] [66ms] Exiting Chain run with output: { "tool": "langsmith_search", "toolInput": { "query": "how can LangSmith help with testing?" }, "log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "langsmith_search", "arguments": "{\"query\":\"how can LangSmith help with testing?\"}" } } } } ] } [tool/start] [1:chain:AgentExecutor > 8:tool:langsmith_search] Entering Tool run with input: "{"query":"how can LangSmith help with testing?"}" [retriever/start] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] Entering Retriever run with input: { "query": "how can LangSmith help with testing?" 
} [retriever/end] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] [294ms] Exiting Retriever run with output: { "documents": [ { "pageContent": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the", "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 11, "to": 11 } } } }, { "pageContent": "the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string", "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 3, "to": 3 } } } }, { "pageContent": "inputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies", "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 4, "to": 7 } } } }, { "pageContent": "feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! 
If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime", "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 11, "to": 11 } } } } ] } [chain/start] [1:chain:AgentExecutor > 10:chain:RunnableAgent] Entering Chain run with input: { "input": "how can langsmith help with testing?", "steps": [ { "action": { "tool": "langsmith_search", "toolInput": { "query": "how can LangSmith help with testing?" }, "log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "langsmith_search", "arguments": "{\"query\":\"how can LangSmith help with testing?\"}" } } } } ] }, "observation": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the\n\nthe time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string\n\ninputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). 
Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies\n\nfeedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime" } ] } [chain/end] [1:chain:AgentExecutor] [5.83s] Exiting Chain run with output: { "input": "how can langsmith help with testing?", "output": "LangSmith can help with testing in several ways:\n\n1. Debugging: LangSmith can be used to debug unexpected end results, agent loops, slow chains, and token usage. It helps in pinpointing underperforming data points and tracking performance over time.\n\n2. Monitoring: LangSmith can monitor applications by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for associating feedback programmatically with runs, which can be used to track performance over time.\n\n3. Exporting Datasets: LangSmith makes it easy to curate datasets, which can be exported for use in other contexts such as OpenAI Evals or fine-tuning with FireworksAI.\n\nOverall, LangSmith simplifies the process of testing changes, constructing datasets, and extracting insights from logged runs, making it a valuable tool for testing and evaluation." } { input: 'how can langsmith help with testing?', output: 'LangSmith can help with testing in several ways:\n' + '\n' + '1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' + '\n' + "2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" + '\n' + '3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' + '\n' + '4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' + '\n' + '5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. 
It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' + '\n' + 'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.' }*/ Adding in memory[​](#adding-in-memory "Direct link to Adding in memory") ------------------------------------------------------------------------ As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`. **Note:** the input variable below needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name. const result3 = await agentExecutor.invoke({ input: "hi! my name is cob.", chat_history: [],});console.log(result3);/* { input: 'hi! my name is cob.', chat_history: [], output: "Hello Cob! It's nice to meet you. How can I assist you today?" }*/ import { HumanMessage, AIMessage } from "@langchain/core/messages";const result4 = await agentExecutor.invoke({ input: "what's my name?", chat_history: [ new HumanMessage("hi! my name is cob."), new AIMessage("Hello Cob! How can I assist you today?"), ],});console.log(result4);/* { input: "what's my name?", chat_history: [ HumanMessage { content: 'hi! my name is cob.', additional_kwargs: {} }, AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} } ], output: 'Your name is Cob. How can I assist you today, Cob?' }*/ If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history/). import { ChatMessageHistory } from "langchain/stores/message/in_memory";import { RunnableWithMessageHistory } from "@langchain/core/runnables";const messageHistory = new ChatMessageHistory();const agentWithChatHistory = new RunnableWithMessageHistory({ runnable: agentExecutor, // This is needed because in most real world scenarios, a session id is needed per user. // It isn't really used here because we are using a simple in memory ChatMessageHistory. getMessageHistory: (_sessionId) => messageHistory, inputMessagesKey: "input", historyMessagesKey: "chat_history",});const result5 = await agentWithChatHistory.invoke( { input: "hi! i'm cob", }, { // This is needed because in most real world scenarios, a session id is needed per user. // It isn't really used here because we are using a simple in memory ChatMessageHistory. configurable: { sessionId: "foo", }, });console.log(result5);/* { input: "hi! i'm cob", chat_history: [ HumanMessage { content: "hi! i'm cob", additional_kwargs: {} }, AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} } ], output: 'Hello Cob! How can I assist you today?' }*/ const result6 = await agentWithChatHistory.invoke( { input: "what's my name?", }, { // This is needed because in most real world scenarios, a session id is needed per user. // It isn't really used here because we are using a simple in memory ChatMessageHistory. configurable: { sessionId: "foo", }, });console.log(result6);/* { input: "what's my name?", chat_history: [ HumanMessage { content: "hi! i'm cob", additional_kwargs: {} }, AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }, HumanMessage { content: "what's my name?", additional_kwargs: {} }, AIMessage { content: 'Your name is Cob. 
How can I assist you today, Cob?', additional_kwargs: {} } ], output: 'Your name is Cob. How can I assist you today, Cob?' }*/ Conclusion[​](#conclusion "Direct link to Conclusion") ------------------------------------------------------ That’s a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there’s a lot to learn! Head back to the [main agent page](/v0.2/docs/how_to/agent_executor/) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!
https://js.langchain.com/v0.2/docs/how_to/binding
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to attach runtime arguments to a Runnable On this page How to attach runtime arguments to a Runnable ============================================= Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Tool calling](/v0.2/docs/how_to/tool_calling/) Sometimes we want to invoke a [`Runnable`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) within a [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#bind) method to set these arguments ahead of time. Binding stop sequences[​](#binding-stop-sequences "Direct link to Binding stop sequences") ------------------------------------------------------------------------------------------ Suppose we have a simple prompt + model chain: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";const prompt = ChatPromptTemplate.fromMessages([ [ "system", "Write out the following equation using algebraic symbols then solve it. Use the format\n\nEQUATION:...\nSOLUTION:...\n\n", ], ["human", "{equation_statement}"],]);const model = new ChatOpenAI({ temperature: 0 });const runnable = prompt.pipe(model).pipe(new StringOutputParser());const res = await runnable.invoke({ equation_statement: "x raised to the third plus seven equals 12",});console.log(res); EQUATION: x^3 + 7 = 12SOLUTION:Subtract 7 from both sides:x^3 = 5Take the cube root of both sides:x = ∛5 and want to call the model with certain `stop` words so that we shorten the output, which is useful in certain types of prompting techniques. While we can pass some arguments into the constructor, for other runtime args we can use the `.bind()` method as follows: const runnable = prompt .pipe(model.bind({ stop: "SOLUTION" })) .pipe(new StringOutputParser());const res = await runnable.invoke({ equation_statement: "x raised to the third plus seven equals 12",});console.log(res); EQUATION: x^3 + 7 = 12 What you can bind to a Runnable will depend on the extra parameters you can pass when invoking it. Attaching OpenAI tools[​](#attaching-openai-tools "Direct link to Attaching OpenAI tools") ------------------------------------------------------------------------------------------ Another common use-case is tool calling. While you should generally use the [`.bindTools()`](/v0.2/docs/how_to/tool_calling/) method for tool-calling models, you can also bind provider-specific args directly if you want lower level control: const tools = [ { type: "function", function: { name: "get_current_weather", description: "Get the current weather in a given location", parameters: { type: "object", properties: { location: { type: "string", description: "The city and state, e.g. 
San Francisco, CA", }, unit: { type: "string", enum: ["celsius", "fahrenheit"] }, }, required: ["location"], }, }, },];const model = new ChatOpenAI({ model: "gpt-4o" }).bind({ tools });await model.invoke("What's the weather in SF, NYC and LA?"); AIMessage { lc_serializable: true, lc_kwargs: { content: "", tool_calls: [ { name: "get_current_weather", args: { location: "San Francisco, CA" }, id: "call_iDKz4zU8PKBaaIT052fJkMMF" }, { name: "get_current_weather", args: { location: "New York, NY" }, id: "call_niQwZDOqO6OJTBiDBFG8FODc" }, { name: "get_current_weather", args: { location: "Los Angeles, CA" }, id: "call_zLXH2cDVQy0nAVC0ViWuEP4m" } ], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: [ { id: "call_iDKz4zU8PKBaaIT052fJkMMF", type: "function", function: [Object] }, { id: "call_niQwZDOqO6OJTBiDBFG8FODc", type: "function", function: [Object] }, { id: "call_zLXH2cDVQy0nAVC0ViWuEP4m", type: "function", function: [Object] } ] }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: [ { id: "call_iDKz4zU8PKBaaIT052fJkMMF", type: "function", function: { name: "get_current_weather", arguments: '{"location": "San Francisco, CA"}' } }, { id: "call_niQwZDOqO6OJTBiDBFG8FODc", type: "function", function: { name: "get_current_weather", arguments: '{"location": "New York, NY"}' } }, { id: "call_zLXH2cDVQy0nAVC0ViWuEP4m", type: "function", function: { name: "get_current_weather", arguments: '{"location": "Los Angeles, CA"}' } } ] }, response_metadata: { tokenUsage: { completionTokens: 70, promptTokens: 82, totalTokens: 152 }, finish_reason: "tool_calls" }, tool_calls: [ { name: "get_current_weather", args: { location: "San Francisco, CA" }, id: "call_iDKz4zU8PKBaaIT052fJkMMF" }, { name: "get_current_weather", args: { location: "New York, NY" }, id: "call_niQwZDOqO6OJTBiDBFG8FODc" }, { name: "get_current_weather", args: { location: "Los Angeles, CA" }, id: "call_zLXH2cDVQy0nAVC0ViWuEP4m" } ], invalid_tool_calls: []} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You now know how to bind runtime arguments to a Runnable. Next, you might be interested in our how-to guides on [passing data through a chain](/v0.2/docs/how_to/passthrough/). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to add values to a chain's state ](/v0.2/docs/how_to/assign)[ Next How to cache embedding results ](/v0.2/docs/how_to/caching_embeddings) * [Binding stop sequences](#binding-stop-sequences) * [Attaching OpenAI tools](#attaching-openai-tools) * [Next steps](#next-steps)
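One more detail worth noting, shown in the short sketch below (a hypothetical snippet, not from the original page; it assumes an `OPENAI_API_KEY` is set): `.bind()` does not mutate the runnable it is called on. It returns a new runnable, so a single base model can be shared by several chains that each bind different arguments.

```typescript
import { ChatOpenAI } from "@langchain/openai";

const baseModel = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// Binding returns a *new* runnable; `baseModel` itself is unchanged.
const cappedModel = baseModel.bind({ stop: ["\n"] });

const full = await baseModel.invoke("Count from 1 to 5, one number per line.");
const capped = await cappedModel.invoke("Count from 1 to 5, one number per line.");

console.log(full.content); // all five lines
console.log(capped.content); // generation stops at the first newline
```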
https://js.langchain.com/v0.2/docs/how_to/prompts_partial
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to partially format prompt templates On this page How to partially format prompt templates ======================================== Prerequisites This guide assumes familiarity with the following concepts: * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) Like partially binding arguments to a function, it can make sense to "partial" a prompt template - i.e., pass in a subset of the required values to create a new prompt template which expects only the remaining subset of values. LangChain supports this in two ways: 1. Partial formatting with string values. 2. Partial formatting with functions that return string values. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain. Partial with strings[​](#partial-with-strings "Direct link to Partial with strings") ------------------------------------------------------------------------------------ One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `bar`. If you get the `foo` value early on in your chain, but the `bar` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this: import { PromptTemplate } from "@langchain/core/prompts";const prompt = new PromptTemplate({ template: "{foo}{bar}", inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: "foo",});const formattedPrompt = await partialPrompt.format({ bar: "baz",});console.log(formattedPrompt);// foobaz You can also just initialize the prompt with the partialed variables. const prompt = new PromptTemplate({ template: "{foo}{bar}", inputVariables: ["bar"], partialVariables: { foo: "foo", },});const formattedPrompt = await prompt.format({ bar: "baz",});console.log(formattedPrompt);// foobaz Partial With Functions[​](#partial-with-functions "Direct link to Partial With Functions") ------------------------------------------------------------------------------------------ You can also partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date. 
const getCurrentDate = () => { return new Date().toISOString();};const prompt = new PromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective", "date"],});const partialPrompt = await prompt.partial({ date: getCurrentDate,});const formattedPrompt = await partialPrompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z You can also just initialize the prompt with the partialed variables: const prompt = new PromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective"], partialVariables: { date: getCurrentDate, },});const formattedPrompt = await prompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to partially apply variables to your prompt templates. Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to retrieve the whole document for a chunk ](/v0.2/docs/how_to/parent_document_retriever)[ Next How to add chat history to a question-answering chain ](/v0.2/docs/how_to/qa_chat_history_how_to) * [Partial with strings](#partial-with-strings) * [Partial With Functions](#partial-with-functions) * [Next steps](#next-steps)
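Partialing is not limited to string prompt templates. Below is a brief, hedged sketch (illustrative names; it assumes `ChatPromptTemplate.partial()` behaves like `PromptTemplate.partial()`, accepting both string and function values) showing the same pattern with a chat prompt template:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a {role}. Today's date is {date}."],
  ["human", "{question}"],
]);

// Partial the values we already know; only `question` remains to be supplied.
const partialedChatPrompt = await chatPrompt.partial({
  role: "concise assistant",
  date: () => new Date().toDateString(),
});

const messages = await partialedChatPrompt.formatMessages({
  question: "What should I focus on today?",
});
console.log(messages);
```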
https://js.langchain.com/v0.2/docs/how_to/assign
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add values to a chain's state On this page How to add values to a chain's state ==================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Calling runnables in parallel](/v0.2/docs/how_to/parallel/) * [Custom functions](/v0.2/docs/how_to/functions/) * [Passing data through](/v0.2/docs/how_to/passthrough) An alternate way of [passing data through](/v0.2/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html#assign-2) static method takes an input value and adds the extra arguments passed to the assign function. This is useful in the common [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step. Here’s an example: import { RunnableParallel, RunnablePassthrough,} from "@langchain/core/runnables";const runnable = RunnableParallel.from({ extra: RunnablePassthrough.assign({ mult: (input: { num: number }) => input.num * 3, modified: (input: { num: number }) => input.num + 1, }),});await runnable.invoke({ num: 1 }); { extra: { num: 1, mult: 3, modified: 2 } } Let’s break down what’s happening here. * The input to the chain is `{"num": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input. * The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{"num": 1}`) and assigns two new keys: `mult`, computed by `(input) => input.num * 3`, which returns `3`, and `modified`, computed by `(input) => input.num + 1`, which returns `2`. * The resulting `{"num": 1, "mult": 3, "modified": 2}` is returned to the `RunnableParallel` call and set as the value of the `extra` key. Thus, the final result is `{ extra: { num: 1, mult: 3, modified: 2 } }`. Streaming[​](#streaming "Direct link to Streaming") --------------------------------------------------- One convenient feature of this method is that it allows values to pass through as soon as they are available. To show this off, we’ll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). 
* npm * yarn * pnpm npm i @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";const vectorstore = await MemoryVectorStore.fromDocuments( [{ pageContent: "harrison worked at kensho", metadata: {} }], new OpenAIEmbeddings());const retriever = vectorstore.asRetriever();const template = `Answer the question based only on the following context:{context}Question: {question}`;const prompt = ChatPromptTemplate.fromTemplate(template);const model = new ChatOpenAI({ model: "gpt-4o" });const generationChain = prompt.pipe(model).pipe(new StringOutputParser());const retrievalChain = RunnableSequence.from([ { context: retriever.pipe((docs) => docs[0].pageContent), question: new RunnablePassthrough(), }, RunnablePassthrough.assign({ output: generationChain }),]);const stream = await retrievalChain.stream("where did harrison work?");for await (const chunk of stream) { console.log(chunk);} { question: "where did harrison work?" }{ context: "harrison worked at kensho" }{ output: "" }{ output: "H" }{ output: "arrison" }{ output: " worked" }{ output: " at" }{ output: " Kens" }{ output: "ho" }{ output: "." }{ output: "" } We can see that the first chunk contains the original `"question"` since that is immediately available. The second chunk contains `"context"` since the retriever finishes second. Finally, the output from the `generationChain` streams in chunks as soon as it is available. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now you’ve learned how to pass data through your chains to help format the data flowing through them. To learn more, see the other how-to guides on runnables in this section. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to use legacy LangChain Agents (AgentExecutor) ](/v0.2/docs/how_to/agent_executor)[ Next How to attach runtime arguments to a Runnable ](/v0.2/docs/how_to/binding) * [Streaming](#streaming) * [Next steps](#next-steps)
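Because `RunnablePassthrough.assign()` returns an ordinary runnable, calls to it can also be chained, letting later steps read keys added by earlier ones. Here is a minimal sketch of that pattern (hypothetical variable names, not from the original page):

```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

// Each `.assign()` layers a new key onto the existing state,
// so the second step can read the `doubled` key added by the first.
const chain = RunnablePassthrough.assign({
  doubled: (input: { num: number }) => input.num * 2,
}).pipe(
  RunnablePassthrough.assign({
    quadrupled: (input: { num: number; doubled: number }) => input.doubled * 2,
  })
);

await chain.invoke({ num: 3 });
// { num: 3, doubled: 6, quadrupled: 12 }
```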
https://js.langchain.com/v0.2/docs/tutorials/extraction
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build an Extraction Chain On this page Build an Extraction Chain ========================= Prerequisites This guide assumes familiarity with the following concepts: * [Chat Models](/v0.2/docs/concepts/#chat-models) * [Tools](/v0.2/docs/concepts/#tools) * [Tool calling](/v0.2/docs/concepts/#function-tool-calling) In this tutorial, we will build a chain to extract structured information from unstructured text. info This tutorial will only work with models that support **function/tool calling** Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Installation[​](#installation "Direct link to Installation") To install LangChain run: * npm * yarn * pnpm npm i langchain yarn add langchain pnpm add langchain For more details, see our [Installation guide](/v0.2/docs/how_to/installation/). ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com). After you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..." The Schema[​](#the-schema "Direct link to The Schema") ------------------------------------------------------ First, we need to describe what information we want to extract from the text. We’ll use [Zod](https://zod.dev) to define an example schema that extracts personal information. * npm * yarn * pnpm npm i zod @langchain/core yarn add zod @langchain/core pnpm add zod @langchain/core import { z } from "zod";const personSchema = z.object({ name: z.string().nullish().describe("The name of the person"), hair_color: z .string() .nullish() .describe("The color of the person's hair if known"), height_in_meters: z.string().nullish().describe("Height measured in meters"),}); There are two best practices when defining a schema: 1. Document the **attributes** and the **schema** itself: This information is sent to the LLM and is used to improve the quality of information extraction. 2. Do not force the LLM to make up information! Above we used `.nullish()` for the attributes, allowing the LLM to output `null` or `undefined` if it doesn’t know the answer. info For best performance, document the schema well and make sure the model isn’t forced to return results if there’s no information to be extracted from the text. The Extractor[​](#the-extractor "Direct link to The Extractor") --------------------------------------------------------------- Let’s create an information extractor using the schema we defined above. 
import { ChatPromptTemplate } from "@langchain/core/prompts";// Define a custom prompt to provide instructions and any additional context.// 1) You can add examples into the prompt template to improve extraction quality// 2) Introduce additional parameters to take context into account (e.g., include metadata// about the document from which the text was extracted).const prompt = ChatPromptTemplate.fromMessages([ [ "system", `You are an expert extraction algorithm.Only extract relevant information from the text.If you do not know the value of an attribute asked to extract,return null for the attribute's value.`, ], // Please see the how-to about improving performance with // reference examples. // ["placeholder", "{examples}"], ["human", "{text}"],]); We need to use a model that supports function/tool calling. Please review [the documentation](/v0.2/docs/concepts#function-tool-calling) for a list of some models that can be used with this API. import { ChatAnthropic } from "@langchain/anthropic";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0,});const runnable = prompt.pipe(llm.withStructuredOutput(personSchema));const text = "Alan Smith is 6 feet tall and has blond hair.";await runnable.invoke({ text }); { name: "Alan Smith", hair_color: "blond", height_in_meters: "1.83" } info Extraction is Generative 🤯 LLMs are generative models, so they can do some pretty cool things like correctly extract the height of the person in meters even though it was provided in feet! We can see the LangSmith trace [here](https://smith.langchain.com/public/3d44b7e8-e7ca-4e02-951d-3290ccc89d64/r). Even though we defined our schema with the variable name `personSchema`, Zod is unable to infer this name and therefore does not pass it along to the model. To help give the LLM more clues as to what your provided schema represents, you can also give the schema you pass to `withStructuredOutput()` a name: const runnable = prompt.pipe( llm.withStructuredOutput(personSchema, { name: "person" }));const text = "Alan Smith is 6 feet tall and has blond hair.";await runnable.invoke({ text }); { name: "Alan Smith", hair_color: "blond", height_in_meters: "1.83" } This can improve performance in many cases. Multiple Entities[​](#multiple-entities "Direct link to Multiple Entities") --------------------------------------------------------------------------- In **most cases**, you should be extracting a list of entities rather than a single entity. This can be easily achieved using Zod by nesting schemas inside one another. import { z } from "zod";const personSchema = z.object({ name: z.string().nullish().describe("The name of the person"), hair_color: z .string() .nullish() .describe("The color of the person's hair if known"), height_in_meters: z.number().nullish().describe("Height measured in meters"),});const dataSchema = z.object({ people: z.array(personSchema).describe("Extracted data about people"),}); info Extraction might not be perfect here. Please continue to see how to use **Reference Examples** to improve the quality of extraction, and see the **guidelines** section! const runnable = prompt.pipe(llm.withStructuredOutput(dataSchema));const text = "My name is Jeff, my hair is black and i am 6 feet tall. 
Anna has the same color hair as me.";await runnable.invoke({ text }); { people: [ { name: "Jeff", hair_color: "black", height_in_meters: 1.83 }, { name: "Anna", hair_color: "black", height_in_meters: null } ]} tip When the schema accommodates the extraction of **multiple entities**, it also allows the model to extract **no entities** if no relevant information is in the text by providing an empty list. This is usually a **good** thing! It allows specifying **required** attributes on an entity without necessarily forcing the model to detect this entity. We can see the LangSmith trace [here](https://smith.langchain.com/public/272096ab-9ac5-43f9-aa00-3b8443477d17/r) Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ Now that you understand the basics of extraction with LangChain, you’re ready to proceed to the rest of the how-to guides: * [Add Examples](/v0.2/docs/how_to/extraction_examples): Learn how to use **reference examples** to improve performance. * [Handle Long Text](/v0.2/docs/how_to/extraction_long_text): What should you do if the text does not fit into the context window of the LLM? * [Use a Parsing Approach](/v0.2/docs/how_to/extraction_parse): Use a prompt based approach to extract with models that do not support **tool/function calling**. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Build an Agent ](/v0.2/docs/tutorials/agents)[ Next Summarize Text ](/v0.2/docs/tutorials/summarization) * [Setup](#setup) * [Installation](#installation) * [LangSmith](#langsmith) * [The Schema](#the-schema) * [The Extractor](#the-extractor) * [Multiple Entities](#multiple-entities) * [Next steps](#next-steps)
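To make the tip above concrete, here is a small, hedged sketch (reusing the `runnable` and array-based `dataSchema` from the Multiple Entities section; the input text is illustrative) showing that the schema lets the model return no entities at all:

```typescript
// Text that mentions no people; a well-behaved model should return an empty list.
const noPeopleText = "The ocean is vast and deep, and it covers most of the planet.";
await runnable.invoke({ text: noPeopleText });
// Expected shape: { people: [] }
```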
https://js.langchain.com/v0.2/docs/how_to/output_parser_xml
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to parse XML output On this page How to parse XML output ======================= Prerequisites This guide assumes familiarity with the following concepts: - [Chat models](/v0.2/docs/concepts/#chat-models) - [Output parsers](/v0.2/docs/concepts/#output-parsers) - [Prompt templates](/v0.2/docs/concepts/#prompt-templates) - [Structured output](/v0.2/docs/how_to/structured_output) - [Chaining runnables together](/v0.2/docs/how_to/sequence/) LLMs from different providers often have different strengths depending on the specific data they are trained on. This also means that some may be “better” and more reliable at generating output in formats other than JSON. This guide shows you how to use the [`XMLOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) to prompt models for XML output, and then parse that output into a usable format. note Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed XML. In the following examples, we use Anthropic’s Claude ([https://docs.anthropic.com/claude/docs](https://docs.anthropic.com/claude/docs)), which is one such model that is optimized for XML tags. tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * yarn * pnpm npm i @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic Let’s start with a simple request to the model. import { ChatAnthropic } from "@langchain/anthropic";const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", maxTokens: 512, temperature: 0.1,});const query = `Generate the shortened filmograph for Tom Hanks.`;const result = await model.invoke( query + ` Please enclose the movies in "movie" tags.`);console.log(result.content); Here is the shortened filmography for Tom Hanks, with movies enclosed in "movie" tags:<movie>Forrest Gump</movie><movie>Saving Private Ryan</movie><movie>Cast Away</movie><movie>Apollo 13</movie><movie>Catch Me If You Can</movie><movie>The Green Mile</movie><movie>Toy Story</movie><movie>Toy Story 2</movie><movie>Toy Story 3</movie><movie>Toy Story 4</movie><movie>Philadelphia</movie><movie>Big</movie><movie>Sleepless in Seattle</movie><movie>You've Got Mail</movie><movie>The Terminal</movie> This actually worked pretty well! But it would be nice to parse that XML into a more easily usable format. We can use the `XMLOutputParser` to both add default format instructions to the prompt and parse outputted XML into an object: import { XMLOutputParser } from "@langchain/core/output_parsers";// We will add these instructions to the prompt belowconst parser = new XMLOutputParser();parser.getFormatInstructions(); "The output should be formatted as a XML file.\n" + "1. Output should conform to the tags below. \n" + "2. If tag"... 
434 more characters import { ChatPromptTemplate } from "@langchain/core/prompts";const prompt = ChatPromptTemplate.fromTemplate( `{query}\n{format_instructions}`);const partialedPrompt = await prompt.partial({ format_instructions: parser.getFormatInstructions(),});const chain = partialedPrompt.pipe(model).pipe(parser);const output = await chain.invoke({ query: "Generate the shortened filmograph for Tom Hanks.",});console.log(JSON.stringify(output, null, 2)); { "filmography": [ { "actor": [ { "name": "Tom Hanks" }, { "films": [ { "film": [ { "title": "Forrest Gump" }, { "year": "1994" }, { "role": "Forrest Gump" } ] }, { "film": [ { "title": "Saving Private Ryan" }, { "year": "1998" }, { "role": "Captain Miller" } ] }, { "film": [ { "title": "Cast Away" }, { "year": "2000" }, { "role": "Chuck Noland" } ] }, { "film": [ { "title": "Catch Me If You Can" }, { "year": "2002" }, { "role": "Carl Hanratty" } ] }, { "film": [ { "title": "The Terminal" }, { "year": "2004" }, { "role": "Viktor Navorski" } ] } ] } ] } ]} You’ll notice above that our output is no longer just between `movie` tags. We can also add some tags to tailor the output to our needs: const parser = new XMLOutputParser({ tags: ["movies", "actor", "film", "name", "genre"],});// We will add these instructions to the prompt belowparser.getFormatInstructions(); "The output should be formatted as a XML file.\n" + "1. Output should conform to the tags below. \n" + "2. If tag"... 460 more characters You can and should experiment with adding your own formatting hints in the other parts of your prompt to either augment or replace the default instructions. Here’s the result when we invoke it: import { ChatPromptTemplate } from "@langchain/core/prompts";const prompt = ChatPromptTemplate.fromTemplate( `{query}\n{format_instructions}`);const partialedPrompt = await prompt.partial({ format_instructions: parser.getFormatInstructions(),});const chain = partialedPrompt.pipe(model).pipe(parser);const output = await chain.invoke({ query: "Generate the shortened filmograph for Tom Hanks.",});console.log(JSON.stringify(output, null, 2)); { "movies": [ { "actor": [ { "film": [ { "name": "Forrest Gump" }, { "genre": "Drama" } ] }, { "film": [ { "name": "Saving Private Ryan" }, { "genre": "War" } ] }, { "film": [ { "name": "Cast Away" }, { "genre": "Drama" } ] }, { "film": [ { "name": "Catch Me If You Can" }, { "genre": "Biography" } ] }, { "film": [ { "name": "The Terminal" }, { "genre": "Comedy-drama" } ] } ] } ]} Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned how to prompt a model to return XML. Next, check out the [broader guide on obtaining structured output](/v0.2/docs/how_to/structured_output) for other related techniques. * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to parse JSON output ](/v0.2/docs/how_to/output_parser_json)[ Next How to invoke runnables in parallel ](/v0.2/docs/how_to/parallel) * [Next steps](#next-steps)
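Like other cumulative output parsers, this chain can also be streamed. A sketch of what that might look like (assuming the `chain` from the last example above, and that `XMLOutputParser` emits progressively more complete partial objects while streaming, as the JSON parser does):

```typescript
const stream = await chain.stream({
  query: "Generate the shortened filmograph for Tom Hanks.",
});

// Each chunk should be a progressively more complete object parsed from the XML so far.
for await (const chunk of stream) {
  console.log(JSON.stringify(chunk));
}
```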
https://js.langchain.com/v0.2/docs/how_to/parent_document_retriever
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to retrieve the whole document for a chunk On this page How to retrieve the whole document for a chunk ============================================== Prerequisites This guide assumes familiarity with the following concepts: * [Retrievers](/v0.2/docs/concepts/#retrievers) * [Text splitters](/v0.2/docs/concepts/#text-splitters) * [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag) When splitting documents for retrieval, there are often conflicting desires: 1. You may want to have small documents, so that their embeddings can most accurately reflect their meaning. If documents are too long, then the embeddings can lose meaning. 2. You want to have long enough documents that the context of each chunk is retained. The [`ParentDocumentRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) strikes that balance by splitting and storing small chunks of data. During retrieval, it first fetches the small chunks but then looks up the parent ids for those chunks and returns those larger documents. Note that "parent document" refers to the document that a small chunk originated from. This can either be the whole raw document OR a larger chunk. This is a more specific form of [generating multiple embeddings per document](/v0.2/docs/how_to/multi_vector). Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { InMemoryStore } from "@langchain/core/stores";import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { TextLoader } from "langchain/document_loaders/fs/text";const vectorstore = new MemoryVectorStore(new OpenAIEmbeddings());const docstore = new InMemoryStore();const retriever = new ParentDocumentRetriever({ vectorstore, docstore, // Optional, not required if you're already passing in split documents parentSplitter: new RecursiveCharacterTextSplitter({ chunkOverlap: 0, chunkSize: 500, }), childSplitter: new RecursiveCharacterTextSplitter({ chunkOverlap: 0, chunkSize: 50, }), // Optional `k` parameter to search for more child documents in VectorStore. // Note that this does not exactly correspond to the number of final (parent) documents // retrieved, as multiple child documents can point to the same parent. childK: 20, // Optional `k` parameter to limit number of final, parent documents returned from this // retriever and sent to LLM. This is an upper-bound, and the final count may be lower than this. parentK: 5,});const textLoader = new TextLoader("../examples/state_of_the_union.txt");const parentDocuments = await textLoader.load();// We must add the parent documents via the retriever's addDocuments methodawait retriever.addDocuments(parentDocuments);const retrievedDocs = await retriever.invoke("justice breyer");// Retrieved chunks are the larger parent chunksconsole.log(retrievedDocs);/* [ Document { pageContent: 'Tonight, I call on the Senate to pass — pass the Freedom to Vote Act. Pass the John Lewis Act — Voting Rights Act. 
And while you’re at it, pass the DISCLOSE Act so Americans know who is funding our elections.\n' + '\n' + 'Look, tonight, I’d — I’d like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army veteran, Constitutional scholar, retiring Justice of the United States Supreme Court.', metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] } }, Document { pageContent: 'As I did four days ago, I’ve nominated a Circuit Court of Appeals — Ketanji Brown Jackson. One of our nation’s top legal minds who will continue in just Brey- — Justice Breyer’s legacy of excellence. A former top litigator in private practice, a former federal public defender from a family of public-school educators and police officers — she’s a consensus builder.', metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] } }, Document { pageContent: 'Justice Breyer, thank you for your service. Thank you, thank you, thank you. I mean it. Get up. Stand — let me see you. Thank you.\n' + '\n' + 'And we all know — no matter what your ideology, we all know one of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] } } ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` With Score Threshold[​](#with-score-threshold "Direct link to With Score Threshold") ------------------------------------------------------------------------------------ By setting the options in `scoreThresholdOptions` we can force the `ParentDocumentRetriever` to use the `ScoreThresholdRetriever` under the hood. This sets the vector store inside `ScoreThresholdRetriever` as the one we passed when initializing `ParentDocumentRetriever`, while also allowing us to set a score threshold for the retriever. This can be helpful when you're not sure how many documents you want (or if you are sure, just set the `maxK` option), but you want to make sure that the documents you do get are within a certain relevancy threshold. Note: if a retriever is passed, `ParentDocumentRetriever` will use it by default for retrieving small chunks, as well as adding documents via the `addDocuments` method. 
import { OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { InMemoryStore } from "@langchain/core/stores";import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { TextLoader } from "langchain/document_loaders/fs/text";import { ScoreThresholdRetriever } from "langchain/retrievers/score_threshold";const vectorstore = new MemoryVectorStore(new OpenAIEmbeddings());const docstore = new InMemoryStore();const childDocumentRetriever = ScoreThresholdRetriever.fromVectorStore( vectorstore, { minSimilarityScore: 0.01, // Essentially no threshold maxK: 1, // Only return the top result });const retriever = new ParentDocumentRetriever({ vectorstore, docstore, childDocumentRetriever, // Optional, not required if you're already passing in split documents parentSplitter: new RecursiveCharacterTextSplitter({ chunkOverlap: 0, chunkSize: 500, }), childSplitter: new RecursiveCharacterTextSplitter({ chunkOverlap: 0, chunkSize: 50, }),});const textLoader = new TextLoader("../examples/state_of_the_union.txt");const parentDocuments = await textLoader.load();// We must add the parent documents via the retriever's addDocuments methodawait retriever.addDocuments(parentDocuments);const retrievedDocs = await retriever.invoke("justice breyer");// Retrieved chunk is the larger parent chunkconsole.log(retrievedDocs);/* [ Document { pageContent: 'Tonight, I call on the Senate to pass — pass the Freedom to Vote Act. Pass the John Lewis Act — Voting Rights Act. And while you’re at it, pass the DISCLOSE Act so Americans know who is funding our elections.\n' + '\n' + 'Look, tonight, I’d — I’d like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army veteran, Constitutional scholar, retiring Justice of the United States Supreme Court.', metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] } }, ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` * [ScoreThresholdRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_score_threshold.ScoreThresholdRetriever.html) from `langchain/retrievers/score_threshold` With Contextual chunk headers[​](#with-contextual-chunk-headers "Direct link to With Contextual chunk headers") --------------------------------------------------------------------------------------------------------------- Consider a scenario where you want to store a collection of documents in a vector store and perform Q&A tasks on them. 
Simply splitting documents with overlapping text may not provide sufficient context for LLMs to determine if multiple chunks are referencing the same information, or how to resolve information from contradictory sources. Tagging each document with metadata is a solution if you know what to filter against, but you may not know ahead of time exactly what kind of queries your vector store will be expected to handle. Including additional contextual information directly in each chunk in the form of headers can help deal with arbitrary queries. This is particularly important if you have several fine-grained child chunks that need to be correctly retrieved from the vector store. import { OpenAIEmbeddings } from "@langchain/openai";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { InMemoryStore } from "@langchain/core/stores";import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1500, chunkOverlap: 0,});const jimDocs = await splitter.createDocuments([`My favorite color is blue.`]);const jimChunkHeaderOptions = { chunkHeader: "DOC NAME: Jim Interview\n---\n", appendChunkOverlapHeader: true,};const pamDocs = await splitter.createDocuments([`My favorite color is red.`]);const pamChunkHeaderOptions = { chunkHeader: "DOC NAME: Pam Interview\n---\n", appendChunkOverlapHeader: true,};const vectorstore = await HNSWLib.fromDocuments([], new OpenAIEmbeddings());const docstore = new InMemoryStore();const retriever = new ParentDocumentRetriever({ vectorstore, docstore, // Very small chunks for demo purposes. // Use a bigger chunk size for serious use-cases. childSplitter: new RecursiveCharacterTextSplitter({ chunkSize: 10, chunkOverlap: 0, }), childK: 50, parentK: 5,});// We pass additional option `childDocChunkHeaderOptions`// that will add the chunk header to child documentsawait retriever.addDocuments(jimDocs, { childDocChunkHeaderOptions: jimChunkHeaderOptions,});await retriever.addDocuments(pamDocs, { childDocChunkHeaderOptions: pamChunkHeaderOptions,});// This will search child documents in vector store with the help of chunk header,// returning the unmodified parent documentsconst retrievedDocs = await retriever.invoke("What is Pam's favorite color?");// Pam's favorite color is returned first!console.log(JSON.stringify(retrievedDocs, null, 2));/* [ { "pageContent": "My favorite color is red.", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } } } }, { "pageContent": "My favorite color is blue.", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } } } } ]*/const rawDocs = await vectorstore.similaritySearch( "What is Pam's favorite color?");// Raw docs in vectorstore are short but have chunk headersconsole.log(JSON.stringify(rawDocs, null, 2));/* [ { "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) color is", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } }, "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa" } }, { "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) favorite", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } }, "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa" } }, { "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) red.", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } }, "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa" } }, { "pageContent": "DOC NAME: Pam Interview\n---\nMy", "metadata": { "loc": { "lines": { "from": 1, "to": 1 } }, "doc_id": 
"affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa" } } ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` With Reranking[​](#with-reranking "Direct link to With Reranking") ------------------------------------------------------------------ With many documents from the vector store that are passed to LLM, final answers sometimes consist of information from irrelevant chunks, making it less precise and sometimes incorrect. Also, passing multiple irrelevant documents makes it more expensive. So there are two reasons to use rerank - precision and costs. import { OpenAIEmbeddings } from "@langchain/openai";import { CohereRerank } from "@langchain/cohere";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { InMemoryStore } from "@langchain/core/stores";import { ParentDocumentRetriever, type SubDocs,} from "langchain/retrievers/parent_document";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";// init Cohere Rerank. Remember to add COHERE_API_KEY to your .envconst reranker = new CohereRerank({ topN: 50, model: "rerank-multilingual-v2.0",});export function documentCompressorFiltering({ relevanceScore,}: { relevanceScore?: number } = {}) { return (docs: SubDocs) => { let outputDocs = docs; if (relevanceScore) { const docsRelevanceScoreValues = docs.map( (doc) => doc?.metadata?.relevanceScore ); outputDocs = docs.filter( (_doc, index) => (docsRelevanceScoreValues?.[index] || 1) >= relevanceScore ); } return outputDocs; };}const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 0,});const jimDocs = await splitter.createDocuments([`Jim favorite color is blue.`]);const pamDocs = await splitter.createDocuments([`Pam favorite color is red.`]);const vectorstore = await HNSWLib.fromDocuments([], new OpenAIEmbeddings());const docstore = new InMemoryStore();const retriever = new ParentDocumentRetriever({ vectorstore, docstore, // Very small chunks for demo purposes. // Use a bigger chunk size for serious use-cases. 
childSplitter: new RecursiveCharacterTextSplitter({ chunkSize: 10, chunkOverlap: 0, }), childK: 50, parentK: 5, // We add Reranker documentCompressor: reranker, documentCompressorFilteringFn: documentCompressorFiltering({ relevanceScore: 0.3, }),});const docs = jimDocs.concat(pamDocs);await retriever.addDocuments(docs);// This will search the vector store and return documents to the LLM already reranked,// sorted, and filtered by the minimum relevance scoreconst retrievedDocs = await retriever.invoke("What is Pam's favorite color?");// Pam's favorite color is returned first!console.log(JSON.stringify(retrievedDocs, null, 2));/* [ { "pageContent": "My favorite color is red.", "metadata": { "relevanceScore": 0.9, "loc": { "lines": { "from": 1, "to": 1 } } } } ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [CohereRerank](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereRerank.html) from `@langchain/cohere` * [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document` * [SubDocs](https://v02.api.js.langchain.com/types/langchain_retrievers_parent_document.SubDocs.html) from `langchain/retrievers/parent_document` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You've now learned how to use the `ParentDocumentRetriever`. Next, check out the more general form of [generating multiple embeddings per document](/v0.2/docs/how_to/multi_vector), the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to invoke runnables in parallel ](/v0.2/docs/how_to/parallel)[ Next How to partially format prompt templates ](/v0.2/docs/how_to/prompts_partial) * [Usage](#usage) * [With Score Threshold](#with-score-threshold) * [With Contextual chunk headers](#with-contextual-chunk-headers) * [With Reranking](#with-reranking) * [Next steps](#next-steps)
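Once configured, a `ParentDocumentRetriever` drops into a chain anywhere an ordinary retriever fits. Below is a hedged sketch of wiring it into a simple RAG chain (reusing the `retriever` from the Usage section above; the prompt wording is illustrative):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { formatDocumentsAsString } from "langchain/util/document";

const ragPrompt = ChatPromptTemplate.fromTemplate(
  `Answer the question based only on the following context:\n{context}\n\nQuestion: {question}`
);

const ragChain = RunnableSequence.from([
  {
    // The retriever returns the larger parent documents, joined into one string.
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  new ChatOpenAI({ model: "gpt-4o" }),
  new StringOutputParser(),
]);

await ragChain.invoke("What did the president say about Justice Breyer?");
```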
https://js.langchain.com/v0.2/docs/how_to/recursive_text_splitter
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to recursively split text by characters On this page How to recursively split text by characters =========================================== Prerequisites This guide assumes familiarity with the following concepts: * [Text splitters](/v0.2/docs/concepts#text-splitters) This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `["\n\n", "\n", " ", ""]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text. 1. How the text is split: by list of characters. 2. How the chunk size is measured: by number of characters. Below we show example usage. To obtain the string content directly, use `.splitText`. To create LangChain [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) objects (e.g., for use in downstream tasks), use `.createDocuments`. import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10, chunkOverlap: 1,});const output = await splitter.createDocuments([text]);console.log(output.slice(0, 3)); [ Document { pageContent: "Hi.", metadata: { loc: { lines: { from: 1, to: 1 } } } }, Document { pageContent: "I'm", metadata: { loc: { lines: { from: 3, to: 3 } } } }, Document { pageContent: "Harrison.", metadata: { loc: { lines: { from: 3, to: 3 } } } }] You’ll note that in the above example we are splitting a raw text string and getting back a list of documents. We can also split documents directly. import { Document } from "@langchain/core/documents";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10, chunkOverlap: 1,});const docOutput = await splitter.splitDocuments([ new Document({ pageContent: text }),]);console.log(docOutput.slice(0, 3)); [ Document { pageContent: "Hi.", metadata: { loc: { lines: { from: 1, to: 1 } } } }, Document { pageContent: "I'm", metadata: { loc: { lines: { from: 3, to: 3 } } } }, Document { pageContent: "Harrison.", metadata: { loc: { lines: { from: 3, to: 3 } } } }] You can customize the `RecursiveCharacterTextSplitter` with arbitrary separators by passing a `separators` parameter like this: import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { Document } from "@langchain/core/documents";const text = `Some other considerations include:- Do you deploy your backend and frontend together, or separately?- Do you deploy your backend co-located with your database, or separately?**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.## Deployment OptionsSee below for a list of deployment options for your LangChain app. 
If you don't see your preferred option, please get in touch and we can add it to this list.`;const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 50, chunkOverlap: 1, separators: ["|", "##", ">", "-"],});const docOutput = await splitter.splitDocuments([ new Document({ pageContent: text }),]);console.log(docOutput.slice(0, 3)); [ Document { pageContent: "Some other considerations include:", metadata: { loc: { lines: { from: 1, to: 1 } } } }, Document { pageContent: "- Do you deploy your backend and frontend together", metadata: { loc: { lines: { from: 3, to: 3 } } } }, Document { pageContent: "r, or separately?", metadata: { loc: { lines: { from: 3, to: 3 } } } }] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned a method for splitting text by character. Next, check out [specific techniques for splitting on code](/v0.2/docs/how_to/code_splitter) or the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag). * * * #### Was this page helpful? #### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous How to handle cases where no queries are generated ](/v0.2/docs/how_to/query_no_queries)[ Next How to reduce retrieval latency ](/v0.2/docs/how_to/reduce_retrieval_latency) * [Next steps](#next-steps)
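As mentioned near the top of this page, `.splitText` returns the raw string chunks directly when you don't need `Document` objects. A quick sketch, reusing the `splitter` and `text` from the last example:

```typescript
// Returns plain string chunks rather than Document objects.
const chunks = await splitter.splitText(text);
console.log(chunks.slice(0, 3));
// e.g. [ "Some other considerations include:", ... ]
```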
https://js.langchain.com/v0.2/docs/how_to/qa_chat_history_how_to
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to add chat history to a question-answering chain On this page How to add chat history to a question-answering chain ===================================================== Prerequisites This guide assumes familiarity with the following concepts: * [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/) In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of “memory” of past questions and answers, and some logic for incorporating those into its current thinking. In this guide we focus on **adding logic for incorporating historical messages, and NOT on chat history management.** Chat history management is [covered here](/v0.2/docs/how_to/message_history). We’ll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng. We’ll need to update two things about our existing app: 1. **Prompt**: Update our prompt to support historical messages as an input. 2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This is needed in case the latest question references some context from past messages. For example, if a user asks a follow-up question like “Can you elaborate on the second point?”, this cannot be understood without the context of the previous message. Therefore we can’t effectively perform retrieval with a question like this. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Dependencies[​](#dependencies "Direct link to Dependencies") We’ll use an OpenAI chat model, OpenAI embeddings, and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers). We’ll use the following packages: npm install --save langchain @langchain/openai cheerio We need to set the environment variable `OPENAI_API_KEY`: export OPENAI_API_KEY=YOUR_KEY ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://docs.smith.langchain.com). Note that LangSmith is not needed, but it is helpful. 
If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=YOUR_KEY ### Initial setup[​](#initial-setup "Direct link to Initial setup") import "cheerio";import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();// Tip - you can edit this!const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = await createStuffDocumentsChain({ llm, prompt, outputParser: new StringOutputParser(),}); Let’s see what this prompt actually looks like console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n")); You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.Question: {question}Context: {context}Answer: await ragChain.invoke({ context: await retriever.invoke("What is Task Decomposition?"), question: "What is Task Decomposition?",}); "Task Decomposition involves breaking down complex tasks into smaller and simpler steps to make them "... 243 more characters Contextualizing the question[​](#contextualizing-the-question "Direct link to Contextualizing the question") ------------------------------------------------------------------------------------------------------------ First we’ll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the historical information. We’ll use a prompt that includes a `MessagesPlaceholder` variable under the name “chat\_history”. This allows us to pass in a list of Messages to the prompt using the “chat\_history” input key, and these messages will be inserted after the system message and before the human message containing the latest question. import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";const contextualizeQSystemPrompt = `Given a chat history and the latest user questionwhich might reference context in the chat history, formulate a standalone questionwhich can be understood without the chat history. 
Do NOT answer the question,just reformulate it if needed and otherwise return it as is.`;const contextualizeQPrompt = ChatPromptTemplate.fromMessages([ ["system", contextualizeQSystemPrompt], new MessagesPlaceholder("chat_history"), ["human", "{question}"],]);const contextualizeQChain = contextualizeQPrompt .pipe(llm) .pipe(new StringOutputParser()); Using this chain we can ask follow-up questions that reference past messages and have them reformulated into standalone questions: import { AIMessage, HumanMessage } from "@langchain/core/messages";await contextualizeQChain.invoke({ chat_history: [ new HumanMessage("What does LLM stand for?"), new AIMessage("Large language model"), ], question: "What is meant by large",}); 'What is the definition of "large" in this context?' Chain with chat history[​](#chain-with-chat-history "Direct link to Chain with chat history") --------------------------------------------------------------------------------------------- And now we can build our full QA chain. Notice we add some routing functionality to only run the “condense question chain” when our chat history isn’t empty. Here we’re taking advantage of the fact that if a function in an LCEL chain returns another chain, that chain will itself be invoked. import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { formatDocumentsAsString } from "langchain/util/document";const qaSystemPrompt = `You are an assistant for question-answering tasks.Use the following pieces of retrieved context to answer the question.If you don't know the answer, just say that you don't know.Use three sentences maximum and keep the answer concise.{context}`;const qaPrompt = ChatPromptTemplate.fromMessages([ ["system", qaSystemPrompt], new MessagesPlaceholder("chat_history"), ["human", "{question}"],]);const contextualizedQuestion = (input: Record<string, unknown>) => { if ("chat_history" in input) { return contextualizeQChain; } return input.question;};const ragChain = RunnableSequence.from([ RunnablePassthrough.assign({ context: async (input: Record<string, unknown>) => { if ("chat_history" in input) { const chain = contextualizedQuestion(input); return chain.pipe(retriever).pipe(formatDocumentsAsString); } return ""; }, }), qaPrompt, llm,]);const chat_history = [];const question = "What is task decomposition?";const aiMsg = await ragChain.invoke({ question, chat_history });console.log(aiMsg);chat_history.push(aiMsg);const secondQuestion = "What are common ways of doing it?";await ragChain.invoke({ question: secondQuestion, chat_history }); AIMessage { lc_serializable: true, lc_kwargs: { content: "Task decomposition involves breaking down a complex task into smaller and simpler steps to make it m"... 358 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Task decomposition involves breaking down a complex task into smaller and simpler steps to make it m"... 
358 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 83, promptTokens: 701, totalTokens: 784 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []} AIMessage { lc_serializable: true, lc_kwargs: { content: "Common ways of task decomposition include using simple prompting techniques like Chain of Thought (C"... 353 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Common ways of task decomposition include using simple prompting techniques like Chain of Thought (C"... 353 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 81, promptTokens: 779, totalTokens: 860 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []} See the first [LangSmith trace here](https://smith.langchain.com/public/527981c6-5018-4b68-a11a-ebcde77843e7/r) and the [second trace here](https://smith.langchain.com/public/7b97994a-ab9f-4bf3-a2e4-abb609e5610a/r) Here we’ve gone over how to add application logic for incorporating historical messages, but we’re still manually updating the chat history and inserting it into each input. In a real Q&A application we’ll want some way of persisting chat history and some way of automatically inserting and updating it. For this we can use: * [BaseChatMessageHistory](https://v02.api.js.langchain.com/classes/langchain_core_chat_history.BaseChatMessageHistory.html): Store chat history. * [RunnableWithMessageHistory](/v0.2/docs/how_to/message_history/): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation. For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/v0.2/docs/how_to/message_history/) LCEL page.
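As a preview, here is a minimal sketch of that pattern. It assumes the `ragChain` defined above (which takes `question` and `chat_history` keys), keeps histories in a plain in-memory map, and uses an arbitrary `"test-session"` id; a real application would back this with a database.

```typescript
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// One in-memory history per session id. This is only a sketch; swap in a
// persistent BaseChatMessageHistory implementation for production use.
const histories: Record<string, ChatMessageHistory> = {};

const conversationalRagChain = new RunnableWithMessageHistory({
  runnable: ragChain,
  getMessageHistory: (sessionId: string) => {
    if (!histories[sessionId]) {
      histories[sessionId] = new ChatMessageHistory();
    }
    return histories[sessionId];
  },
  inputMessagesKey: "question",
  historyMessagesKey: "chat_history",
});

// The wrapper loads the history, injects it under "chat_history", and
// appends the new question and answer after each invocation.
await conversationalRagChain.invoke(
  { question: "What is task decomposition?" },
  { configurable: { sessionId: "test-session" } }
);
```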
https://js.langchain.com/v0.2/docs/how_to/caching_embeddings
How to cache embedding results ============================== Prerequisites This guide assumes familiarity with the following concepts: * [Embeddings](/v0.2/docs/concepts/#embedding-models) Embeddings can be stored or temporarily cached to avoid needing to recompute them. Caching embeddings can be done using a `CacheBackedEmbeddings` instance. The cache-backed embedder is a wrapper around an embedder that caches embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache. The main supported way to initialize a `CacheBackedEmbeddings` is the `fromBytesStore` static method. This takes in the following parameters: * `underlyingEmbeddings`: The embeddings model to use. * `documentEmbeddingCache`: The cache to use for storing document embeddings. * `namespace`: (optional, defaults to "") The namespace to use for the document cache. This namespace is used to avoid collisions with other caches. For example, you could set it to the name of the embedding model used. **Attention:** Be sure to set the namespace parameter to avoid collisions of the same text embedded using different embeddings models. In-memory --------------------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Here's a basic test example with an in-memory cache. This type of cache is primarily useful for unit tests or prototyping. 
Do not use this cache if you need to actually store the embeddings for an extended period of time: import { OpenAIEmbeddings } from "@langchain/openai";import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";import { InMemoryStore } from "@langchain/core/stores";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { FaissStore } from "@langchain/community/vectorstores/faiss";import { TextLoader } from "langchain/document_loaders/fs/text";const underlyingEmbeddings = new OpenAIEmbeddings();const inMemoryStore = new InMemoryStore();const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore( underlyingEmbeddings, inMemoryStore, { namespace: underlyingEmbeddings.modelName, });const loader = new TextLoader("./state_of_the_union.txt");const rawDocuments = await loader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 0,});const documents = await splitter.splitDocuments(rawDocuments);// No keys logged yet since the cache is emptyfor await (const key of inMemoryStore.yieldKeys()) { console.log(key);}let time = Date.now();const vectorstore = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Initial creation time: ${Date.now() - time}ms`);/* Initial creation time: 1905ms*/// The second time is much faster since the embeddings for the input docs have already been added to the cachetime = Date.now();const vectorstore2 = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Cached creation time: ${Date.now() - time}ms`);/* Cached creation time: 8ms*/// Many keys logged with hashed valuesconst keys = [];for await (const key of inMemoryStore.yieldKeys()) { keys.push(key);}console.log(keys.slice(0, 5));/* [ 'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64', 'text-embedding-ada-0023b424f5ed1271a6f5601add17c1b58b7c992772e', 'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111', 'text-embedding-ada-00262f72e0c2d711c6b861714ee624b28af639fdb13', 'text-embedding-ada-00262d58882330038a4e6e25ea69a938f4391541874' ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [CacheBackedEmbeddings](https://v02.api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed` * [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` Redis[​](#redis "Direct link to Redis") --------------------------------------- Here's an example with a Redis cache. 
You'll first need to install `ioredis` as a peer dependency and pass in an initialized client: * npm * Yarn * pnpm npm install ioredis yarn add ioredis pnpm add ioredis import { Redis } from "ioredis";import { OpenAIEmbeddings } from "@langchain/openai";import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";import { FaissStore } from "@langchain/community/vectorstores/faiss";import { RedisByteStore } from "@langchain/community/storage/ioredis";import { TextLoader } from "langchain/document_loaders/fs/text";const underlyingEmbeddings = new OpenAIEmbeddings();// Requires a Redis instance running at http://localhost:6379.// See https://github.com/redis/ioredis for full config options.const redisClient = new Redis();const redisStore = new RedisByteStore({ client: redisClient,});const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore( underlyingEmbeddings, redisStore, { namespace: underlyingEmbeddings.modelName, });const loader = new TextLoader("./state_of_the_union.txt");const rawDocuments = await loader.load();const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 0,});const documents = await splitter.splitDocuments(rawDocuments);let time = Date.now();const vectorstore = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Initial creation time: ${Date.now() - time}ms`);/* Initial creation time: 1808ms*/// The second time is much faster since the embeddings for the input docs have already been added to the cachetime = Date.now();const vectorstore2 = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Cached creation time: ${Date.now() - time}ms`);/* Cached creation time: 33ms*/// Many keys logged with hashed valuesconst keys = [];for await (const key of redisStore.yieldKeys()) { keys.push(key);}console.log(keys.slice(0, 5));/* [ 'text-embedding-ada-002fa9ac80e1bf226b7b4dfc03ea743289a65a727b2', 'text-embedding-ada-0027dbf9c4b36e12fe1768300f145f4640342daaf22', 'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64', 'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111', 'text-embedding-ada-002c00f818c345da13fed9f2697b4b689338143c8c7' ]*/ #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [CacheBackedEmbeddings](https://v02.api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed` * [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters` * [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss` * [RedisByteStore](https://v02.api.js.langchain.com/classes/langchain_community_storage_ioredis.RedisByteStore.html) from `@langchain/community/storage/ioredis` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` Next steps ------------------------------------------------------ You've now learned how to use caching to avoid recomputing embeddings. Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
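One more option before you move on: if you want the cache to survive process restarts without running a server like Redis, a file-system-backed store is a middle ground. Here is a minimal sketch using `LocalFileStore`; the `./embedding-cache` directory name is an arbitrary choice, and this assumes a Node.js environment with file-system access.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { LocalFileStore } from "langchain/storage/file_system";

const underlyingEmbeddings = new OpenAIEmbeddings();

// Persists cached embeddings as files under the given directory.
const fileStore = await LocalFileStore.fromPath("./embedding-cache");

const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
  underlyingEmbeddings,
  fileStore,
  { namespace: underlyingEmbeddings.modelName }
);

// The first call computes the vectors and writes them to disk; later calls,
// even in a fresh process, read the cached vectors instead of re-embedding.
const vectors = await cacheBackedEmbeddings.embedDocuments(["Hello, world!"]);
console.log(vectors[0].length);
```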
https://js.langchain.com/v0.2/docs/how_to/reduce_retrieval_latency
How to reduce retrieval latency =============================== Prerequisites This guide assumes familiarity with the following concepts: * [Retrievers](/v0.2/docs/concepts/#retrievers) * [Embeddings](/v0.2/docs/concepts/#embedding-models) * [Vector stores](/v0.2/docs/concepts/#vectorstores) * [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag) One way to reduce retrieval latency is through a technique called "Adaptive Retrieval". The [`MatryoshkaRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_matryoshka_retriever.MatryoshkaRetriever.html) uses the Matryoshka Representation Learning (MRL) technique to retrieve documents for a given query in two steps: * **First-pass**: Uses a lower-dimensional sub-vector from the MRL embedding for an initial, fast, but less accurate search. * **Second-pass**: Re-ranks the top results from the first pass using the full, high-dimensional embedding for higher accuracy. ![Matryoshka Retriever](/v0.2/assets/images/adaptive_retrieval-2abb9f6f280c11a424ae6978d39eb011.png) It is based on this [Supabase](https://supabase.com/) blog post ["Matryoshka embeddings: faster OpenAI vector search using Adaptive Retrieval"](https://supabase.com/blog/matryoshka-embeddings). ### Setup tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community To follow the example below, you need an OpenAI API key: export OPENAI_API_KEY=your-api-key We'll also be using `chroma` for our vector store. Follow the instructions [here](/v0.2/docs/integrations/vectorstores/chroma) to set it up. import { MatryoshkaRetriever } from "langchain/retrievers/matryoshka_retriever";import { Chroma } from "@langchain/community/vectorstores/chroma";import { OpenAIEmbeddings } from "@langchain/openai";import { Document } from "@langchain/core/documents";import { faker } from "@faker-js/faker";const smallEmbeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small", dimensions: 512, // Min number for small});const largeEmbeddings = new OpenAIEmbeddings({ model: "text-embedding-3-large", dimensions: 3072, // Max number for large});const vectorStore = new Chroma(smallEmbeddings, { numDimensions: 512,});const retriever = new MatryoshkaRetriever({ vectorStore, largeEmbeddingModel: largeEmbeddings, largeK: 5,});const irrelevantDocs = Array.from({ length: 250 }).map( () => new Document({ pageContent: faker.lorem.word(7), // Similar length to the relevant docs }));const relevantDocs = [ new Document({ pageContent: "LangChain is an open source github repo", }), new Document({ pageContent: "There are JS and PY versions of the LangChain github repos", }), new Document({ pageContent: "LangGraph is a new open source library by the LangChain team", }), new Document({ pageContent: "LangChain announced GA of LangSmith last week!", }), new Document({ pageContent: "I heart LangChain", }),];const allDocs = [...irrelevantDocs, ...relevantDocs];/** * IMPORTANT: * The `addDocuments` method on `MatryoshkaRetriever` will * generate the small AND large embeddings for all documents. 
*/await retriever.addDocuments(allDocs);const query = "What is LangChain?";const results = await retriever.invoke(query);console.log(results.map(({ pageContent }) => pageContent).join("\n"));/** I heart LangChain LangGraph is a new open source library by the LangChain team LangChain is an open source github repo LangChain announced GA of LangSmith last week! There are JS and PY versions of the LangChain github repos*/ #### API Reference: * [MatryoshkaRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_matryoshka_retriever.MatryoshkaRetriever.html) from `langchain/retrievers/matryoshka_retriever` * [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` note Due to the constraints of some vector stores, the large embedding metadata field is stringified (`JSON.stringify`) before being stored. This means that the metadata field will need to be parsed (`JSON.parse`) when retrieved from the vector store. Next steps ------------------------------------------------------ You've now learned a technique that can help speed up your retrieval queries. Next, check out the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
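Returning to the note above about stringified metadata, here is a sketch of how you might recover the large embedding from the documents returned by `retriever.invoke`. The `"lc_large_embedding"` key is a hypothetical default used for illustration; check the `largeEmbeddingKey` option on your retriever instance for the actual metadata field name.

```typescript
import { Document } from "@langchain/core/documents";

// Hypothetical metadata key - verify against your retriever's configuration.
const LARGE_EMBEDDING_KEY = "lc_large_embedding";

const parseLargeEmbedding = (doc: Document): number[] | undefined => {
  const raw = doc.metadata[LARGE_EMBEDDING_KEY];
  // The vector was stored with JSON.stringify, so reverse that here.
  return typeof raw === "string" ? (JSON.parse(raw) as number[]) : undefined;
};

// `results` comes from the retriever.invoke(query) call in the example above.
const largeVectors = results
  .map(parseLargeEmbedding)
  .filter((v): v is number[] => v !== undefined);
```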
https://js.langchain.com/v0.2/docs/tutorials/classification
Classify Text into Labels ========================= Tagging means labeling a document with classes such as: * sentiment * language * style (formal, informal etc.) * covered topics * political tendency ![Image description](/v0.2/assets/images/tagging-93990e95451d92b715c2b47066384224.png) Overview ------------------------------------------------ Tagging has a few components: * `function`: Like [extraction](/v0.2/docs/tutorials/extraction), tagging uses [functions](https://openai.com/blog/function-calling-and-other-api-updates) to specify how the model should tag a document * `schema`: defines how we want to tag the document Quickstart ------------------------------------------------------ Let’s see a very straightforward example of how we can use OpenAI tool calling for tagging in LangChain. We’ll use the `.withStructuredOutput()` method supported by OpenAI models: * npm * yarn * pnpm npm i langchain @langchain/openai @langchain/core zod yarn add langchain @langchain/openai @langchain/core zod pnpm add langchain @langchain/openai @langchain/core zod Let’s specify a [Zod](https://zod.dev) schema with a few properties and their expected types. import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";import { z } from "zod";const taggingPrompt = ChatPromptTemplate.fromTemplate( `Extract the desired information from the following passage.Only extract the properties mentioned in the 'Classification' function.Passage:{input}`);const classificationSchema = z.object({ sentiment: z.string().describe("The sentiment of the text"), aggressiveness: z .number() .int() .min(1) .max(10) .describe("How aggressive the text is on a scale from 1 to 10"), language: z.string().describe("The language the text is written in"),});// LLMconst llm = new ChatOpenAI({ temperature: 0, model: "gpt-3.5-turbo-0125",});// Name is optional, but gives the model more clues as to what your schema representsconst llmWithStructuredOutput = llm.withStructuredOutput(classificationSchema, { name: "extractor",});const taggingChain = taggingPrompt.pipe(llmWithStructuredOutput); const input = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!";await taggingChain.invoke({ input }); { sentiment: "positive", aggressiveness: 1, language: "Spanish" } As we can see in the example, it correctly interprets what we want. The results vary so that we may get, for example, sentiments in different languages (‘positive’, ‘enojado’ etc.). We will see how to control these results in the next section. Finer control --------------------------------------------------------------- Careful schema definition gives us more control over the model’s output. 
Specifically, we can define: * possible values for each property * description to make sure that the model understands the property * required properties to be returned Let’s redeclare our Zod schema to control for each of the previously mentioned aspects using enums: import { z } from "zod";const classificationSchema = z.object({ sentiment: z .enum(["happy", "neutral", "sad"]) .describe("The sentiment of the text"), aggressiveness: z .number() .int() .min(1) .max(5) .describe( "describes how aggressive the statement is, the higher the number the more aggressive" ), language: z .enum(["spanish", "english", "french", "german", "italian"]) .describe("The language the text is written in"),}); const taggingPrompt = ChatPromptTemplate.fromTemplate( `Extract the desired information from the following passage.Only extract the properties mentioned in the 'Classification' function.Passage:{input}`);// LLMconst llm = new ChatOpenAI({ temperature: 0, model: "gpt-3.5-turbo-0125",});const llmWithStructuredOutput = llm.withStructuredOutput(classificationSchema, { name: "extractor",});const chain = taggingPrompt.pipe(llmWithStructuredOutput); Now the answers will be restricted in a way we expect! const input = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!";await chain.invoke({ input }); { sentiment: "happy", aggressiveness: 3, language: "spanish" } const input = "Estoy muy enojado con vos! Te voy a dar tu merecido!";await chain.invoke({ input }); { sentiment: "sad", aggressiveness: 5, language: "spanish" } const input = "Weather is ok here, I can go outside without much more than a coat";await chain.invoke({ input }); { sentiment: "neutral", aggressiveness: 3, language: "english" } The [LangSmith trace](https://smith.langchain.com/public/455f5404-8784-49ce-8851-0619b99e936f/r) lets us peek under the hood: ![](/v0.2/assets/images/classification_ls_trace-7b269b067c3751c6d06289c560505656.png)
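The same approach extends to the other label types listed at the top of this tutorial. As a sketch, here is how you might add an optional political tendency field to the enum-based schema above; the `politicalTendency` field name and its enum values are illustrative additions, not part of the original example, and reuse the `llm` and `taggingPrompt` defined earlier.

```typescript
import { z } from "zod";

const extendedClassificationSchema = z.object({
  sentiment: z
    .enum(["happy", "neutral", "sad"])
    .describe("The sentiment of the text"),
  aggressiveness: z
    .number()
    .int()
    .min(1)
    .max(5)
    .describe(
      "describes how aggressive the statement is, the higher the number the more aggressive"
    ),
  language: z
    .enum(["spanish", "english", "french", "german", "italian"])
    .describe("The language the text is written in"),
  // Optional fields let the model skip a label when there is no evidence
  // for it; how strictly that is honored depends on the provider.
  politicalTendency: z
    .enum(["left", "center", "right", "none"])
    .optional()
    .describe("The political leaning of the text, if any is expressed"),
});

const extendedChain = taggingPrompt.pipe(
  llm.withStructuredOutput(extendedClassificationSchema, { name: "extractor" })
);
```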
https://js.langchain.com/v0.2/docs/tutorials/summarization
Summarize Text ============== A common use case is wanting to summarize long documents. This naturally runs into the context window limitations. Unlike in question-answering, you can't just do some semantic search hacks to only select the chunks of text most relevant to the question (because, in this case, there is no particular question - you want to summarize everything). So what do you do then? To get started, we would recommend checking out the summarization chain, which attacks this problem in a recursive manner. * [Summarization Chain](https://js.langchain.com/v0.1/docs/modules/chains/popular/summarize) Example --------------------------------------------- Here's an example of how you can use the [RefineDocumentsChain](https://js.langchain.com/v0.1/docs/modules/chains/document/refine) to summarize documents loaded from a YouTube video: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic import { loadSummarizationChain } from "langchain/chains";import { SearchApiLoader } from "@langchain/community/document_loaders/web/searchapi";import { TokenTextSplitter } from "@langchain/textsplitters";import { PromptTemplate } from "@langchain/core/prompts";import { ChatAnthropic } from "@langchain/anthropic";const loader = new SearchApiLoader({ engine: "youtube_transcripts", video_id: "WTOm65IZneg",});const docs = await loader.load();const splitter = new TokenTextSplitter({ chunkSize: 10000, chunkOverlap: 250,});const docsSummary = await splitter.splitDocuments(docs);const llmSummary = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0.3,});const summaryTemplate = `You are an expert in summarizing YouTube videos.Your goal is to create a summary of a podcast.Below you find the transcript of a podcast:--------{text}--------The transcript of the podcast will also be used as the basis for a question and answer bot.Provide some example questions and answers that could be asked about the podcast. Make these questions very specific.Total output will be a summary of the video and a list of example questions the user could ask of the video.SUMMARY AND QUESTIONS:`;const SUMMARY_PROMPT = PromptTemplate.fromTemplate(summaryTemplate);const summaryRefineTemplate = `You are an expert in summarizing YouTube videos.Your goal is to create a summary of a podcast.We have provided an existing summary up to a certain point: {existing_answer}Below you find the transcript of a podcast:--------{text}--------Given the new context, refine the summary and example questions.The transcript of the podcast will also be used as the basis for a question and answer bot.Provide some example questions and answers that could be asked about the podcast. 
Make these questions very specific.If the context isn't useful, return the original summary and questions.Total output will be a summary of the video and a list of example questions the user could ask of the video.SUMMARY AND QUESTIONS:`;const SUMMARY_REFINE_PROMPT = PromptTemplate.fromTemplate( summaryRefineTemplate);const summarizeChain = loadSummarizationChain(llmSummary, { type: "refine", verbose: true, questionPrompt: SUMMARY_PROMPT, refinePrompt: SUMMARY_REFINE_PROMPT,});const summary = await summarizeChain.run(docsSummary);console.log(summary);/* Here is a summary of the key points from the podcast transcript: - Jimmy helps provide hearing aids and cochlear implants to deaf and hard-of-hearing people who can't afford them. He helps over 1,000 people hear again. - Jimmy surprises recipients with $10,000 cash gifts in addition to the hearing aids. He also gifts things like jet skis, basketball game tickets, and trips to concerts. - Jimmy travels internationally to provide hearing aids, visiting places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia. - Jimmy donates $100,000 to organizations around the world that teach sign language. - The recipients are very emotional and grateful to be able to hear their loved ones again. Here are some example questions and answers about the podcast: Q: How many people did Jimmy help regain their hearing? A: Jimmy helped over 1,000 people regain their hearing. Q: What types of hearing devices did Jimmy provide to the recipients? A: Jimmy provided cutting-edge hearing aids and cochlear implants. Q: In addition to the hearing devices, what surprise gifts did Jimmy give some recipients? A: In addition to hearing devices, Jimmy surprised some recipients with $10,000 cash gifts, jet skis, basketball game tickets, and concert tickets. Q: What countries did Jimmy travel to in order to help people? A: Jimmy traveled to places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia. Q: How much money did Jimmy donate to organizations that teach sign language? A: Jimmy donated $100,000 to sign language organizations around the world. Q: How did the recipients react when they were able to hear again? A: The recipients were very emotional and grateful, with many crying tears of joy at being able to hear their loved ones again.*/ #### API Reference: * [loadSummarizationChain](https://v02.api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains` * [SearchApiLoader](https://v02.api.js.langchain.com/classes/langchain_community_document_loaders_web_searchapi.SearchApiLoader.html) from `@langchain/community/document_loaders/web/searchapi` * [TokenTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) from `@langchain/textsplitters` * [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
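The `refine` chain processes chunks sequentially, which can be slow for long transcripts. As an alternative sketch, `loadSummarizationChain` also supports a `map_reduce` strategy that summarizes chunks independently before combining them; this reuses the `llmSummary` and `docsSummary` values from the example above.

```typescript
import { loadSummarizationChain } from "langchain/chains";

// "map_reduce" first summarizes each chunk on its own (map), then merges the
// partial summaries into a final one (reduce). Without custom prompts it
// falls back to the chain's generic summarization prompts, so the output is
// a plain summary rather than the summary-plus-questions format above.
const mapReduceChain = loadSummarizationChain(llmSummary, {
  type: "map_reduce",
});

const mapReduceSummary = await mapReduceChain.run(docsSummary);
console.log(mapReduceSummary);
```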
https://js.langchain.com/v0.2/docs/how_to/routing
How to route execution within a chain ===================================== Prerequisites This guide assumes familiarity with the following concepts: * [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language) * [Chaining runnables](/v0.2/docs/how_to/sequence/) * [Configuring chain parameters at runtime](/v0.2/docs/how_to/binding) * [Prompt templates](/v0.2/docs/concepts/#prompt-templates) * [Chat Messages](/v0.2/docs/concepts/#message-types) This guide covers how to do routing in the LangChain Expression Language. Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs. There are two ways to perform routing: 1. Conditionally return runnables from a [`RunnableLambda`](/v0.2/docs/how_to/functions) (recommended) 2. Using a `RunnableBranch` (legacy) We'll illustrate both methods using a two-step sequence where the first step classifies an input question as being about LangChain, Anthropic, or Other, then routes to a corresponding prompt chain. Using a custom function --------------------------------------------------------------------------------------------- You can use a custom function to route between different outputs. Here's an example: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/anthropic yarn add @langchain/anthropic pnpm add @langchain/anthropic import { ChatPromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnableSequence } from "@langchain/core/runnables";import { ChatAnthropic } from "@langchain/anthropic";const promptTemplate = ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`. Do not respond with more than one word.<question>{question}</question>Classification:`);const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229",});const classificationChain = RunnableSequence.from([ promptTemplate, model, new StringOutputParser(),]);const classificationChainResult = await classificationChain.invoke({ question: "how do I call Anthropic?",});console.log(classificationChainResult);/* Anthropic*/const langChainChain = ChatPromptTemplate.fromTemplate( `You are an expert in langchain.Always answer questions starting with "As Harrison Chase told me".Respond to the following question:Question: {question}Answer:`).pipe(model);const anthropicChain = ChatPromptTemplate.fromTemplate( `You are an expert in anthropic. \Always answer questions starting with "As Dario Amodei told me". 
\Respond to the following question:Question: {question}Answer:`).pipe(model);const generalChain = ChatPromptTemplate.fromTemplate( `Respond to the following question:Question: {question}Answer:`).pipe(model);const route = ({ topic }: { input: string; topic: string }) => { if (topic.toLowerCase().includes("anthropic")) { return anthropicChain; } else if (topic.toLowerCase().includes("langchain")) { return langChainChain; } else { return generalChain; }};const fullChain = RunnableSequence.from([ { topic: classificationChain, question: (input: { question: string }) => input.question, }, route,]);const result1 = await fullChain.invoke({ question: "how do I use Anthropic?",});console.log(result1);/* AIMessage { content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' + '\n' + "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" + '\n' + "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" + '\n' + "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" + '\n' + 'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' + '\n' + 'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.', additional_kwargs: {} }*/const result2 = await fullChain.invoke({ question: "how do I use LangChain?",});console.log(result2);/* AIMessage { content: ' As Harrison Chase told me, here is how you use LangChain:\n' + '\n' + 'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' + '\n' + 'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' + '\n' + 'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' + '\n' + "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" + '\n' + 'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' + '\n' + 'I hope these tips help you have a safe, fun and productive experience using LangChain! 
Let me know if you have any other questions.', additional_kwargs: {} }*/const result3 = await fullChain.invoke({ question: "what is 2 + 2?",});console.log(result3);/* AIMessage { content: ' 4', additional_kwargs: {} }*/ #### API Reference: * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Routing by semantic similarity[​](#routing-by-semantic-similarity "Direct link to Routing by semantic similarity") ------------------------------------------------------------------------------------------------------------------ One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example: import { ChatAnthropic } from "@langchain/anthropic";import { OpenAIEmbeddings } from "@langchain/openai";import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence } from "@langchain/core/runnables";import { cosineSimilarity } from "@langchain/core/utils/math";const physicsTemplate = `You are a very smart physics professor.You are great at answering questions about physics in a concise and easy to understand manner.When you don't know the answer to a question you admit that you don't know.Do not use more than 100 words.Here is a question:{query}`;const mathTemplate = `"You are a very good mathematician. You are great at answering math questions.You are so good because you are able to break down hard problems into their component parts,answer the component parts, and then put them together to answer the broader question.Do not use more than 100 words.Here is a question:{query}`;const embeddings = new OpenAIEmbeddings({});const templates = [physicsTemplate, mathTemplate];const templateEmbeddings = await embeddings.embedDocuments(templates);const promptRouter = async (query: string) => { const queryEmbedding = await embeddings.embedQuery(query); const similarity = cosineSimilarity([queryEmbedding], templateEmbeddings)[0]; const isPhysicsQuestion = similarity[0] > similarity[1]; let promptTemplate: ChatPromptTemplate; if (isPhysicsQuestion) { console.log(`Using physics prompt`); promptTemplate = ChatPromptTemplate.fromTemplate(templates[0]); } else { console.log(`Using math prompt`); promptTemplate = ChatPromptTemplate.fromTemplate(templates[1]); } return promptTemplate.invoke({ query });};const chain = RunnableSequence.from([ promptRouter, new ChatAnthropic({ model: "claude-3-haiku-20240307" }), new StringOutputParser(),]);console.log(await chain.invoke("what's a black hole?"));/* Using physics prompt*//* A black hole is a region in space where the gravitational pull is so strong that nothing, not even light, can escape from it. It is the result of the gravitational collapse of a massive star, creating a singularity surrounded by an event horizon, beyond which all information is lost. Black holes have fascinated scientists for decades, as they provide insights into the most extreme conditions in the universe and the nature of gravity itself. 
While we understand the basic properties of black holes, there are still many unanswered questions about their behavior and their role in the cosmos.*/console.log(await chain.invoke("what's a path integral?"));/* Using math prompt*//* A path integral is a mathematical formulation in quantum mechanics used to describe the behavior of a particle or system. It considers all possible paths the particle can take between two points, and assigns a probability amplitude to each path. By summing up the contributions from all paths, it provides a comprehensive understanding of the particle's quantum mechanical behavior. This approach allows for the calculation of complex quantum phenomena, such as quantum tunneling and interference effects, making it a powerful tool in theoretical physics.*/ #### API Reference: * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [cosineSimilarity](https://v02.api.js.langchain.com/functions/langchain_core_utils_math.cosineSimilarity.html) from `@langchain/core/utils/math` Using a RunnableBranch ------------------------------------------------------------------------------------------ A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It routes by passing the input it's invoked with to each condition in order, and runs the runnable paired with the first condition that evaluates to true. If no condition matches, it runs the default runnable. Here's an example of what it looks like in action: import { ChatPromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnableBranch, RunnableSequence } from "@langchain/core/runnables";import { ChatAnthropic } from "@langchain/anthropic";const promptTemplate = ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`. Do not respond with more than one word.<question>{question}</question>Classification:`);const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229",});const classificationChain = RunnableSequence.from([ promptTemplate, model, new StringOutputParser(),]);const classificationChainResult = await classificationChain.invoke({ question: "how do I call Anthropic?",});console.log(classificationChainResult);/* Anthropic*/const langChainChain = ChatPromptTemplate.fromTemplate( `You are an expert in langchain.Always answer questions starting with "As Harrison Chase told me".Respond to the following question:Question: {question}Answer:`).pipe(model);const anthropicChain = ChatPromptTemplate.fromTemplate( `You are an expert in anthropic. \Always answer questions starting with "As Dario Amodei told me". 
\Respond to the following question:Question: {question}Answer:`).pipe(model);const generalChain = ChatPromptTemplate.fromTemplate( `Respond to the following question:Question: {question}Answer:`).pipe(model);const branch = RunnableBranch.from([ [ (x: { topic: string; question: string }) => x.topic.toLowerCase().includes("anthropic"), anthropicChain, ], [ (x: { topic: string; question: string }) => x.topic.toLowerCase().includes("langchain"), langChainChain, ], generalChain,]);const fullChain = RunnableSequence.from([ { topic: classificationChain, question: (input: { question: string }) => input.question, }, branch,]);const result1 = await fullChain.invoke({ question: "how do I use Anthropic?",});console.log(result1);/* AIMessage { content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' + '\n' + "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" + '\n' + "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" + '\n' + "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" + '\n' + 'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' + '\n' + 'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.', additional_kwargs: {} }*/const result2 = await fullChain.invoke({ question: "how do I use LangChain?",});console.log(result2);/* AIMessage { content: ' As Harrison Chase told me, here is how you use LangChain:\n' + '\n' + 'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' + '\n' + 'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' + '\n' + 'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' + '\n' + "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" + '\n' + 'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' + '\n' + 'I hope these tips help you have a safe, fun and productive experience using LangChain! 
Let me know if you have any other questions.', additional_kwargs: {} }*/const result3 = await fullChain.invoke({ question: "what is 2 + 2?",});console.log(result3);/* AIMessage { content: ' 4', additional_kwargs: {} }*/ #### API Reference: * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [RunnableBranch](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableBranch.html) from `@langchain/core/runnables` * [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` Next steps ------------------------------------------------------ You've now learned how to add routing to your composed LCEL chains. Next, check out the other [how-to guides on runnables](/v0.2/docs/how_to/#langchain-expression-language-lcel) in this section.
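Before moving on, one small refinement to the recommended custom-function approach: you can wrap the routing function in an explicit `RunnableLambda`. This sketch reuses the `route` function, `classificationChain`, and question extractor from the first example; it is functionally equivalent to passing the function directly, but the explicit wrapper makes the routing step a first-class runnable you can attach config such as tags or listeners to.

```typescript
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";

// RunnableLambda invokes any chain returned by the wrapped function,
// which is exactly the behavior `route` relies on.
const routeStep = RunnableLambda.from(route);

const fullChainWithLambda = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  routeStep,
]);

await fullChainWithLambda.invoke({ question: "how do I use Anthropic?" });
```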
https://js.langchain.com/v0.2/docs/how_to/qa_citations
How to return citations ======================= Prerequisites This guide assumes familiarity with the following: * [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/) * [Returning structured data from a model](/v0.2/docs/how_to/structured_output/) How can we get a model to cite which parts of the source documents it referenced in its response? To explore some techniques for extracting citations, let’s first create a simple RAG chain. To start we’ll just retrieve from the web using the [`TavilySearchAPIRetriever`](https://api.js.langchain.com/classes/langchain_community_retrievers_tavily_search_api.TavilySearchAPIRetriever.html). Setup --------------------------------------- ### Dependencies We’ll use an OpenAI chat model and the Tavily search API retriever in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers). We’ll use the following packages: npm install --save langchain @langchain/community @langchain/openai We need to set environment variables for Tavily Search & OpenAI: export OPENAI_API_KEY=YOUR_KEYexport TAVILY_API_KEY=YOUR_KEY ### LangSmith Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/). Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=YOUR_KEY ### Initial setup import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const retriever = new TavilySearchAPIRetriever({ k: 6,});const prompt = ChatPromptTemplate.fromMessages([ [ "system", "You're a helpful AI assistant. Given a user question and some web article snippets, answer the user question. If none of the articles answer the question, just say you don't know.\n\nHere are the web articles:{context}", ], ["human", "{question}"],]); Now that we’ve got a model, retriever and prompt, let’s chain them all together. We’ll need to add some logic for formatting our retrieved `Document`s to a string that can be passed to our prompt. We’ll make it so our chain returns both the answer and the retrieved Documents. import { Document } from "@langchain/core/documents";import { StringOutputParser } from "@langchain/core/output_parsers";import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables";/** * Format the documents into a readable string. 
*/const formatDocs = (input: Record<string, any>): string => { const { docs } = input; return ( "\n\n" + docs .map( (doc: Document) => `Article title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}` ) .join("\n\n") );};// subchain for generating an answer once we've done retrievalconst answerChain = prompt.pipe(llm).pipe(new StringOutputParser());const map = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever,});// complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs.const chain = map .assign({ context: formatDocs }) .assign({ answer: answerChain }) .pick(["answer", "docs"]);await chain.invoke("How fast are cheetahs?"); { answer: "Cheetahs are the fastest land animals on Earth. They can reach speeds as high as 75 mph or 120 km/h."... 124 more characters, docs: [ Document { pageContent: "Contact Us − +\n" + "Address\n" + "Smithsonian's National Zoo & Conservation Biology Institute  3001 Connecticut"... 1343 more characters, metadata: { title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", source: "https://nationalzoo.si.edu/animals/cheetah", score: 0.96283, images: null } }, Document { pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters, metadata: { title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters, score: 0.96052, images: null } }, Document { pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters, metadata: { title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", source: "https://www.britannica.com/animal/cheetah-mammal", score: 0.93137, images: null } }, Document { pageContent: "The science of cheetah speed\n" + "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters, metadata: { title: "How Fast Can a Cheetah Run? - ThoughtCo", source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", score: 0.91385, images: null } }, Document { pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters, metadata: { title: "The Science of a Cheetah's Speed | National Geographic", source: "https://www.youtube.com/watch?v=icFMTB0Pi0g", score: 0.90358, images: null } }, Document { pageContent: "If a lion comes along, the cheetah will abandon its catch -- it can't fight off a lion, and chances "... 911 more characters, metadata: { title: "What makes a cheetah run so fast? | HowStuffWorks", source: "https://animals.howstuffworks.com/mammals/cheetah-speed.htm", score: 0.87824, images: null } } ]} See a LangSmith trace [here](https://smith.langchain.com/public/bb0ed37e-b2be-4ae9-8b0d-ce2aff0b4b5e/r) that shows off the internals. Tool calling[​](#tool-calling "Direct link to Tool calling") ------------------------------------------------------------ ### Cite documents[​](#cite-documents "Direct link to Cite documents") Let’s try using [tool calling](/v0.2/docs/how_to/tool_calling) to make the model specify which of the provided documents it’s actually referencing when answering. 
LangChain has some utilities for converting plain objects or [Zod](https://zod.dev) schemas to the JSONSchema format expected by providers like OpenAI. We’ll use the [`.withStructuredOutput()`](/v0.2/docs/how_to/structured_output/) method to get the model to output data matching our desired schema: import { z } from "zod";const llmWithTool1 = llm.withStructuredOutput( z .object({ answer: z .string() .describe( "The answer to the user question, which is based only on the given sources." ), citations: z .array(z.number()) .describe( "The integer IDs of the SPECIFIC sources which justify the answer." ), }) .describe( "Answer the user question based only on the given sources, and cite the sources used." ), { name: "cited_answers", });const exampleQ = `What is Brian's height?Source: 1Information: Suzy is 6'2"Source: 2Information: Jeremiah is blondeSource: 3Information: Brian is 3 inches shorter than Suzy`;await llmWithTool1.invoke(exampleQ); { answer: `Brian is 6'2" - 3 inches = 5'11" tall.`, citations: [ 1, 3 ]} See a LangSmith trace [here](https://smith.langchain.com/public/28736c75-122e-4deb-9916-55c73eea3167/r) that shows off the internals. Now we’re ready to put together our chain: import { Document } from "@langchain/core/documents";const formatDocsWithId = (docs: Array<Document>): string => { return ( "\n\n" + docs .map( (doc: Document, idx: number) => `Source ID: ${idx}\nArticle title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}` ) .join("\n\n") );};// subchain for generating an answer once we've done retrievalconst answerChain1 = prompt.pipe(llmWithTool1);const map1 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever,});// complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs.const chain1 = map1 .assign({ context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs), }) .assign({ cited_answer: answerChain1 }) .pick(["cited_answer", "docs"]);await chain1.invoke("How fast are cheetahs?"); { cited_answer: { answer: "Cheetahs can reach speeds as high as 75 mph or 120 km/h.", citations: [ 1, 2, 5 ] }, docs: [ Document { pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters, metadata: { title: "The Science of a Cheetah's Speed | National Geographic", source: "https://www.youtube.com/watch?v=icFMTB0Pi0g", score: 0.97858, images: null } }, Document { pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters, metadata: { title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", source: "https://www.britannica.com/animal/cheetah-mammal", score: 0.97213, images: null } }, Document { pageContent: "The science of cheetah speed\n" + "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters, metadata: { title: "How Fast Can a Cheetah Run? - ThoughtCo", source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", score: 0.95759, images: null } }, Document { pageContent: "Contact Us − +\n" + "Address\n" + "Smithsonian's National Zoo & Conservation Biology Institute  3001 Connecticut"...
1343 more characters, metadata: { title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", source: "https://nationalzoo.si.edu/animals/cheetah", score: 0.92422, images: null } }, Document { pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters, metadata: { title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters, score: 0.91867, images: null } }, Document { pageContent: "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn"... 2527 more characters, metadata: { title: "Cheetah - Wikipedia", source: "https://en.wikipedia.org/wiki/Cheetah", score: 0.81617, images: null } } ]} See a LangSmith trace [here](https://smith.langchain.com/public/86814255-b9b0-4c4f-9463-e795c9961451/r) that shows off the internals. ### Cite snippets[​](#cite-snippets "Direct link to Cite snippets") What if we want to cite actual text spans? We can try to get our model to return these, too. **Note**: If we break up our documents so that we have many documents with only a sentence or two instead of a few long documents, citing documents becomes roughly equivalent to citing snippets, and may be easier for the model, since it just needs to return an identifier for each snippet instead of the actual text. We recommend trying both approaches and evaluating. import { Document } from "@langchain/core/documents";const citationSchema = z.object({ sourceId: z .number() .describe( "The integer ID of a SPECIFIC source which justifies the answer." ), quote: z .string() .describe( "The VERBATIM quote from the specified source that justifies the answer." ),});const llmWithTool2 = llm.withStructuredOutput( z.object({ answer: z .string() .describe( "The answer to the user question, which is based only on the given sources." ), citations: z .array(citationSchema) .describe("Citations from the given sources that justify the answer."), }), { name: "quoted_answer", });const answerChain2 = prompt.pipe(llmWithTool2);const map2 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever,});// complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs.const chain2 = map2 .assign({ context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs), }) .assign({ quoted_answer: answerChain2 }) .pick(["quoted_answer", "docs"]);await chain2.invoke("How fast are cheetahs?"); { quoted_answer: { answer: "Cheetahs can reach speeds of up to 120kph or 75mph, making them the world’s fastest land animals.", citations: [ { sourceId: 5, quote: "Cheetahs can reach speeds of up to 120kph or 75mph, making them the world’s fastest land animals." }, { sourceId: 1, quote: "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as hi"... 25 more characters }, { sourceId: 3, quote: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 72 more characters } ] }, docs: [ Document { pageContent: "Contact Us − +\n" + "Address\n" + "Smithsonian's National Zoo & Conservation Biology Institute  3001 Connecticut"...
1343 more characters, metadata: { title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", source: "https://nationalzoo.si.edu/animals/cheetah", score: 0.95973, images: null } }, Document { pageContent: "The science of cheetah speed\n" + "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters, metadata: { title: "How Fast Can a Cheetah Run? - ThoughtCo", source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", score: 0.92749, images: null } }, Document { pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters, metadata: { title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters, score: 0.92417, images: null } }, Document { pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters, metadata: { title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", source: "https://www.britannica.com/animal/cheetah-mammal", score: 0.92341, images: null } }, Document { pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters, metadata: { title: "The Science of a Cheetah's Speed | National Geographic", source: "https://www.youtube.com/watch?v=icFMTB0Pi0g", score: 0.90025, images: null } }, Document { pageContent: "In fact, they are more closely related to kangaroos…\n" + "Read more\n" + "Animals on the Galapagos Islands: A G"... 987 more characters, metadata: { title: "How fast can cheetahs run, and what enables their incredible speed?", source: "https://wildlifefaq.com/cheetah-speed/", score: 0.87121, images: null } } ]} You can check out a LangSmith trace [here](https://smith.langchain.com/public/f0588adc-1914-45e8-a2ed-4fa028cea0e1/r) that shows off the internals. Direct prompting[​](#direct-prompting "Direct link to Direct prompting") ------------------------------------------------------------------------ Not all models support tool-calling. We can achieve similar results with direct prompting. Let’s see what this looks like using an older Anthropic chat model that is particularly proficient in working with XML: ### Setup[​](#setup-1 "Direct link to Setup") Install the LangChain Anthropic integration package: npm install @langchain/anthropic Add your Anthropic API key to your environment: export ANTHROPIC_API_KEY=YOUR_KEY import { ChatAnthropic } from "@langchain/anthropic";import { ChatPromptTemplate } from "@langchain/core/prompts";import { XMLOutputParser } from "@langchain/core/output_parsers";import { Document } from "@langchain/core/documents";import { RunnableLambda, RunnablePassthrough, RunnableMap,} from "@langchain/core/runnables";const anthropic = new ChatAnthropic({ model: "claude-instant-1.2", temperature: 0,});const system = `You're a helpful AI assistant. Given a user question and some web article snippets, answer the user question and provide citations. If none of the articles answer the question, just say you don't know. Remember, you must return both an answer and citations. A citation consists of a VERBATIM quote that justifies the answer and the ID of the quote article. Return a citation for every quote across all articles that justify the answer.
Use the following format for your final output:<cited_answer> <answer></answer> <citations> <citation><source_id></source_id><quote></quote></citation> <citation><source_id></source_id><quote></quote></citation> ... </citations></cited_answer>Here are the web articles:{context}`;const anthropicPrompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"],]);const formatDocsToXML = (docs: Array<Document>): string => { const formatted: Array<string> = []; docs.forEach((doc, idx) => { const docStr = `<source id="${idx}"> <title>${doc.metadata.title}</title> <article_snippet>${doc.pageContent}</article_snippet></source>`; formatted.push(docStr); }); return `\n\n<sources>${formatted.join("\n")}</sources>`;};const format3 = new RunnableLambda({ func: (input: { docs: Array<Document> }) => formatDocsToXML(input.docs),});const answerChain = anthropicPrompt .pipe(anthropic) .pipe(new XMLOutputParser()) .pipe( new RunnableLambda({ func: (input: { cited_answer: any }) => input.cited_answer, }) );const map3 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever,});const chain3 = map3 .assign({ context: format3 }) .assign({ cited_answer: answerChain }) .pick(["cited_answer", "docs"]);const res = await chain3.invoke("How fast are cheetahs?");console.log(JSON.stringify(res, null, 2)); { "cited_answer": [ { "answer": "Cheetahs can reach top speeds of around 75 mph, but can only maintain bursts of speed for short distances before tiring." }, { "citations": [ { "citation": [ { "source_id": "1" }, { "quote": "Scientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower." } ] }, { "citation": [ { "source_id": "3" }, { "quote": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey." } ] } ] } ], "docs": [ { "pageContent": "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the magazine's November 2012 iPad edition. See the other: http:...", "metadata": { "title": "The Science of a Cheetah's Speed | National Geographic", "source": "https://www.youtube.com/watch?v=icFMTB0Pi0g", "score": 0.96603, "images": null } }, { "pageContent": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack.\n Key Takeaways: How Fast Can a Cheetah Run?\nFastest Cheetah on Earth\nScientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:\nThe pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed. Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.\n", "metadata": { "title": "How Fast Can a Cheetah Run? 
- ThoughtCo", "source": "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", "score": 0.96212, "images": null } }, { "pageContent": "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the cheetahs, the leopards and all the other wildlife of the scattered savannas and other habitats of Africa and Asia.\n Their tough paw pads and grippy claws are made to grab at the ground, and their large nasal passages and lungs facilitate the flow of oxygen and allow their rapid intake of air as they reach their top speeds.\n And though the two cats share a similar coloration, a cheetah's spots are circular while a leopard's spots are rose-shaped \"rosettes,\" with the centers of their spots showing off the tan color of their coats.\n Also classified as \"vulnerable\" are two of the cheetah's foremost foes, the lion and the leopard, the latter of which is commonly confused for the cheetah thanks to its own flecked fur.\n The cats are also consumers of the smallest of the bigger, bulkier antelopes, such as sables and kudus, and are known to gnaw on the occasional rabbit or bird.\n", "metadata": { "title": "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", "source": "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-about-the-worlds-quickest", "score": 0.95688, "images": null } }, { "pageContent": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.\ncheetah,\n(Acinonyx jubatus),\none of the world’s most-recognizable cats, known especially for its speed. Their fur is dark and includes a thick yellowish gray mane along the back, a trait that presumably offers better camouflage and increased protection from high temperatures during the day and low temperatures at night during the first few months of life. Cheetahs eat a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan).\n A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.", "metadata": { "title": "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", "source": "https://www.britannica.com/animal/cheetah-mammal", "score": 0.95589, "images": null } }, { "pageContent": "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute  3001 Connecticut Ave., NW  Washington, DC 20008\nAbout the Zoo\n−\n+\nCareers\n−\n+\nNews & Media\n−\n+\nFooter Donate\n−\n+\nShop\n−\n+\nFollow us on social media\nSign Up for Emails\nFooter - SI logo, privacy, terms Conservation Efforts\nHistorically, cheetahs ranged widely throughout Africa and Asia, from the Cape of Good Hope to the Mediterranean, throughout the Arabian Peninsula and the Middle East, from Israel, India and Pakistan north to the northern shores of the Caspian and Aral Seas, and west through Uzbekistan, Turkmenistan, Afghanistan, and Pakistan into central India. Header Links\nToday's hours: 8 a.m. to 4 p.m. 
(last entry 3 p.m.)\nMega menu\nAnimals Global Nav Links\nElephant Cam\nSee the Smithsonian's National Zoo's Asian elephants — Spike, Bozie, Kamala, Swarna and Maharani — both inside the Elephant Community Center and outside in their yards.\n Conservation Global Nav Links\nAbout the Smithsonian Conservation Biology Institute\nCheetah\nAcinonyx jubatus\nBuilt for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun Facts\nConservation Status\nCheetah News\nTaxonomic Information\nAnimal News\nNZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.\n", "metadata": { "title": "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", "source": "https://nationalzoo.si.edu/animals/cheetah", "score": 0.94744, "images": null } }, { "pageContent": "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]\nOne stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chase. In December 2016 the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.[25]\nThe cheetah was reintroduced in Malawi in 2017.[160]\nIn Asia\nIn 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, UNDP and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey.[161][162] Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.[47][60][122]\nCheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day.[123] Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. 
This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon.[61][86] The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.[2][87]\nSpeed and acceleration\nThe cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull.[60][65] A study suggested that the limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone in cheetahs.[77]\nThe cheetah has a total of 30 teeth; the dental formula is 3.1.3.13.1.2.1.", "metadata": { "title": "Cheetah - Wikipedia", "source": "https://en.wikipedia.org/wiki/Cheetah", "score": 0.81312, "images": null } } ]} Check out this LangSmith trace [here](https://smith.langchain.com/public/e2e938e8-f847-4ea8-bc84-43d4eaf8e524/r) for more on the internals. Retrieval post-processing[​](#retrieval-post-processing "Direct link to Retrieval post-processing") --------------------------------------------------------------------------------------------------- Another approach is to post-process our retrieved documents to compress the content, so that the source content is already minimal enough that we don’t need the model to cite specific sources or spans. For example, we could break up each document into a sentence or two, embed those and keep only the most relevant ones. LangChain has some built-in components for this. Here we’ll use a [`RecursiveCharacterTextSplitter`](https://js.langchain.com/v0.2/docs/how_to/recursive_text_splitter), which creates chunks of a specified size by splitting on separator substrings, and an [`EmbeddingsFilter`](https://js.langchain.com/v0.2/docs/how_to/contextual_compression), which keeps only the texts with the most relevant embeddings. 
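If you haven't used these components before, here's a minimal standalone sketch of how the splitter chunks raw text; the small chunk size and the sample string are made up purely for illustration:

```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Illustrative only: a tiny chunk size and a made-up sample text.
const demoSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 60,
  chunkOverlap: 0,
  separators: ["\n\n", "\n", ".", " "],
  keepSeparator: false,
});

// splitText returns an array of string chunks, each at most ~60 characters,
// split preferentially at paragraph, line, sentence, then word boundaries.
const chunks = await demoSplitter.splitText(
  "The cheetah is the fastest land animal. It can reach 120 km/h in short bursts while hunting prey."
);
console.log(chunks);
```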
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";import { OpenAIEmbeddings } from "@langchain/openai";import { Document, DocumentInterface } from "@langchain/core/documents";import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables";const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 400, chunkOverlap: 0, separators: ["\n\n", "\n", ".", " "], keepSeparator: false,});const compressor = new EmbeddingsFilter({ embeddings: new OpenAIEmbeddings(), k: 10,});// Split the retrieved documents into small chunks, then keep only the 10 chunks most relevant to the question.const splitAndFilter = async (input: { docs: Array<Document>; question: string;}): Promise<Array<DocumentInterface>> => { const { docs, question } = input; const splitDocs = await splitter.splitDocuments(docs); const statefulDocs = await compressor.compressDocuments(splitDocs, question); return statefulDocs;};const retrieveMap = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever,});// Give the compressed retriever its own name rather than redeclaring `retriever`, which would shadow the base retriever referenced by `retrieveMap` above.const compressedRetriever = retrieveMap.pipe(splitAndFilter);const docs = await compressedRetriever.invoke("How fast are cheetahs?");for (const doc of docs) { console.log(doc.pageContent, "\n\n");} The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.cheetah,(Acinonyx jubatus),The science of cheetah speedThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack. Key Takeaways: How Fast Can a Cheetah Run?Fastest Cheetah on EarthBuilt for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun FactsConservation StatusCheetah NewsTaxonomic InformationAnimal NewsNZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]The cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skullScientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:One stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chaseThe pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed.
Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.Cheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day See the LangSmith trace [here](https://smith.langchain.com/public/ae6b1f52-c1fe-49ec-843c-92edf2104652/r) to see the internals. const chain4 = retrieveMap .assign({ context: formatDocs }) .assign({ answer: answerChain }) .pick(["answer", "docs"]);// Note the documents have an article "summary" in the metadata that is now much longer than the// actual document page content. This summary isn't actually passed to the model.const res = await chain4.invoke("How fast are cheetahs?");console.log(JSON.stringify(res, null, 2)); { "answer": [ { "answer": "\nCheetahs are the fastest land animals. They can reach top speeds between 75-81 mph (120-130 km/h). \n" }, { "citations": [ { "citation": [ { "source_id": "Article title: How Fast Can a Cheetah Run? - ThoughtCo" }, { "quote": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h." } ] }, { "citation": [ { "source_id": "Article title: Cheetah - Wikipedia" }, { "quote": "Scientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower." } ] } ] } ], "docs": [ { "pageContent": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack.\n Key Takeaways: How Fast Can a Cheetah Run?\nFastest Cheetah on Earth\nScientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:\nThe pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed. Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.\n", "metadata": { "title": "How Fast Can a Cheetah Run? 
- ThoughtCo", "source": "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", "score": 0.96949, "images": null } }, { "pageContent": "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]\nOne stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chase. In December 2016 the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.[25]\nThe cheetah was reintroduced in Malawi in 2017.[160]\nIn Asia\nIn 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, UNDP and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey.[161][162] Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.[47][60][122]\nCheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day.[123] Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon.[61][86] The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.[2][87]\nSpeed and acceleration\nThe cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull.[60][65] A study suggested that the limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone in cheetahs.[77]\nThe cheetah has a total of 30 teeth; the dental formula is 3.1.3.13.1.2.1.", "metadata": { "title": "Cheetah - Wikipedia", "source": "https://en.wikipedia.org/wiki/Cheetah", "score": 0.96423, "images": null } }, { "pageContent": "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the magazine's November 2012 iPad edition. 
See the other: http:...", "metadata": { "title": "The Science of a Cheetah's Speed | National Geographic", "source": "https://www.youtube.com/watch?v=icFMTB0Pi0g", "score": 0.96071, "images": null } }, { "pageContent": "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute  3001 Connecticut Ave., NW  Washington, DC 20008\nAbout the Zoo\n−\n+\nCareers\n−\n+\nNews & Media\n−\n+\nFooter Donate\n−\n+\nShop\n−\n+\nFollow us on social media\nSign Up for Emails\nFooter - SI logo, privacy, terms Conservation Efforts\nHistorically, cheetahs ranged widely throughout Africa and Asia, from the Cape of Good Hope to the Mediterranean, throughout the Arabian Peninsula and the Middle East, from Israel, India and Pakistan north to the northern shores of the Caspian and Aral Seas, and west through Uzbekistan, Turkmenistan, Afghanistan, and Pakistan into central India. Header Links\nToday's hours: 8 a.m. to 4 p.m. (last entry 3 p.m.)\nMega menu\nAnimals Global Nav Links\nElephant Cam\nSee the Smithsonian's National Zoo's Asian elephants — Spike, Bozie, Kamala, Swarna and Maharani — both inside the Elephant Community Center and outside in their yards.\n Conservation Global Nav Links\nAbout the Smithsonian Conservation Biology Institute\nCheetah\nAcinonyx jubatus\nBuilt for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun Facts\nConservation Status\nCheetah News\nTaxonomic Information\nAnimal News\nNZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.\n", "metadata": { "title": "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", "source": "https://nationalzoo.si.edu/animals/cheetah", "score": 0.91577, "images": null } }, { "pageContent": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.\ncheetah,\n(Acinonyx jubatus),\none of the world’s most-recognizable cats, known especially for its speed. Their fur is dark and includes a thick yellowish gray mane along the back, a trait that presumably offers better camouflage and increased protection from high temperatures during the day and low temperatures at night during the first few months of life. Cheetahs eat a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan).\n A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). 
Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.", "metadata": { "title": "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", "source": "https://www.britannica.com/animal/cheetah-mammal", "score": 0.91163, "images": null } }, { "pageContent": "If a lion comes along, the cheetah will abandon its catch -- it can't fight off a lion, and chances are, the cheetah will lose its life along with its prey if it doesn't get out of there fast enough.\n Advertisement\nLots More Information\nMore Great Links\nSources\nPlease copy/paste the following text to properly cite this HowStuffWorks.com article:\nAdvertisement\nAdvertisement\nAdvertisement\nAdvertisement\nAdvertisement If confronted, a roughly 125-pound cheetah will always run rather than fight -- it's too weak, light and thin to have any chance against something like a lion, which can be twice as long as a cheetah and weigh more than 400 pounds (181.4 kg) Cheetah moms spend a lot of time teaching their cubs to chase, sometimes dragging live animals back to the den so the cubs can practice the chase-and-catch process.\n It's more like a bound at that speed, completing up to three strides per second, with only one foot on the ground at any time and several stages when feet don't touch the ground at all.", "metadata": { "title": "What makes a cheetah run so fast? | HowStuffWorks", "source": "https://animals.howstuffworks.com/mammals/cheetah-speed.htm", "score": 0.89019, "images": null } } ]} Check out the LangSmith trace [here](https://smith.langchain.com/public/b767cca0-6061-4208-99f2-7f522b94a587/r) to see the internals. Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now learned a few ways to return citations from your QA chains. Next, check out some of the other guides in this section, such as [how to add chat history](/v0.2/docs/how_to/qa_chat_history_how_to).
https://js.langchain.com/v0.2/docs/tutorials/local_rag
* [](/v0.2/) * [Tutorials](/v0.2/docs/tutorials/) * Build a Local RAG Application On this page Build a Local RAG Application ============================= The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscores the importance of running LLMs locally. LangChain has integrations with many open-source LLMs that can be run locally. For example, here we show how to run a RAG pipeline entirely locally (e.g., on your laptop), using `OllamaEmbeddings` for the local embeddings and `LLaMA2` as the local LLM. Document Loading[​](#document-loading "Direct link to Document Loading") ------------------------------------------------------------------------ First, install packages needed for local embeddings and vector storage. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Dependencies[​](#dependencies "Direct link to Dependencies") We’ll use the following packages: npm install --save langchain @langchain/community cheerio ### LangSmith[​](#langsmith "Direct link to LangSmith") Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/). Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces: export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY ### Initial setup[​](#initial-setup "Direct link to Initial setup") Load and split an example document. We’ll use a blog post on agents as an example. import "cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"; const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 0,});const allSplits = await textSplitter.splitDocuments(docs);console.log(allSplits.length); 146 Next, we’ll use `OllamaEmbeddings` for our local embeddings. Follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance. import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";import { MemoryVectorStore } from "langchain/vectorstores/memory";const embeddings = new OllamaEmbeddings();const vectorStore = await MemoryVectorStore.fromDocuments( allSplits, embeddings); Test that similarity search is working with our local embeddings. const question = "What are the approaches to Task Decomposition?";const docs = await vectorStore.similaritySearch(question);console.log(docs.length); 4 Model[​](#model "Direct link to Model") --------------------------------------- ### LLaMA2[​](#llama2 "Direct link to LLaMA2") For the local LLM, we’ll also use Ollama.
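If you’ve just set up Ollama, make sure the weights for the `llama2` model used below have been pulled before continuing (assuming a default Ollama install):

ollama pull llama2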
import { ChatOllama } from "@langchain/community/chat_models/ollama";const ollamaLlm = new ChatOllama({ baseUrl: "http://localhost:11434", // Default value model: "llama2", // Default value}); const response = await ollamaLlm.invoke( "Simulate a rap battle between Stephen Colbert and John Oliver");console.log(response.content); [The stage is set for a fierce rap battle between two of the funniest men on television. Stephen Colbert and John Oliver are standing face to face, each with their own microphone and confident smirk on their face.]Stephen Colbert:Yo, John Oliver, I heard you've been talking smackAbout my show and my satire, saying it's all fakeBut let me tell you something, brother, I'm the real dealI've been making fun of politicians for years, with no concealJohn Oliver:Oh, Stephen, you think you're so clever and smartBut your jokes are stale and your delivery's a work of artYou're just a pale imitation of the real deal, Jon StewartI'm the one who's really making waves, while you're just a little birdStephen Colbert:Well, John, I may not be as loud as you, but I'm smarterMy satire is more subtle, and it goes right over their headsI'm the one who's been exposing the truth for yearsWhile you're just a British interloper, trying to steal the cheersJohn Oliver:Oh, Stephen, you may have your fans, but I've got the brainsMy show is more than just slapstick and silly jokes, it's got depth and gainsI'm the one who's really making a difference, while you're just a clownMy satire is more than just a joke, it's a call to action, and I've got the crown[The crowd cheers and chants as the two comedians continue their rap battle.]Stephen Colbert:You may have your fans, John, but I'm the king of satireI've been making fun of politicians for years, and I'm still standing tallMy jokes are clever and smart, while yours are just plain dumbI'm the one who's really in control, and you're just a pretender to the throne.John Oliver:Oh, Stephen, you may have your moment in the sunBut I'm the one who's really shining bright, and my star is just beginning to riseMy satire is more than just a joke, it's a call to action, and I've got the powerI'm the one who's really making a difference, and you're just a fleeting flower.[The crowd continues to cheer and chant as the two comedians continue their rap battle.] See the LangSmith trace [here](https://smith.langchain.com/public/31c178b5-4bea-4105-88c3-7ec95325c817/r). Using in a chain[​](#using-in-a-chain "Direct link to Using in a chain") ------------------------------------------------------------------------ We can create a summarization chain with our model by passing in the retrieved docs and a simple prompt. The chain formats the prompt template using the input key values provided and passes the formatted string to `LLaMA2`, or another specified LLM.
import { StringOutputParser } from "@langchain/core/output_parsers";import { PromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";const prompt = PromptTemplate.fromTemplate( "Summarize the main themes in these retrieved docs: {context}");const chain = await createStuffDocumentsChain({ llm: ollamaLlm, outputParser: new StringOutputParser(), prompt,}); const question = "What are the approaches to Task Decomposition?";const docs = await vectorStore.similaritySearch(question);await chain.invoke({ context: docs,}); "The main themes retrieved from the provided documents are:\n" + "\n" + "1. Sensory Memory: The ability to retain"... 1117 more characters See the LangSmith trace [here](https://smith.langchain.com/public/47cf6c2a-3d86-4f2b-9a51-ee4663b19152/r) Q&A[​](#qa "Direct link to Q&A") -------------------------------- We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific. Let’s try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt). import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";const ragPrompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const chain = await createStuffDocumentsChain({ llm: ollamaLlm, outputParser: new StringOutputParser(), prompt: ragPrompt,}); Let’s see what this prompt actually looks like: console.log( ragPrompt.promptMessages.map((msg) => msg.prompt.template).join("\n")); You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.Question: {question}Context: {context}Answer: await chain.invoke({ context: docs, question }); "Task decomposition is a crucial step in breaking down complex problems into manageable parts for eff"... 1095 more characters See the LangSmith trace [here](https://smith.langchain.com/public/dd3a189b-53a1-4f31-9766-244cd04ad1f7/r) Q&A with retrieval[​](#qa-with-retrieval "Direct link to Q&A with retrieval") ----------------------------------------------------------------------------- Instead of manually passing in docs, we can automatically retrieve them from our vector store based on the user question. This will use the default RAG prompt from above and will retrieve from the vector store. import { RunnablePassthrough, RunnableSequence,} from "@langchain/core/runnables";import { formatDocumentsAsString } from "langchain/util/document";const retriever = vectorStore.asRetriever();const qaChain = RunnableSequence.from([ { context: (input: { question: string }, callbacks) => { const retrieverAndFormatter = retriever.pipe(formatDocumentsAsString); return retrieverAndFormatter.invoke(input.question, callbacks); }, question: new RunnablePassthrough(), }, ragPrompt, ollamaLlm, new StringOutputParser(),]);await qaChain.invoke({ question }); "Based on the context provided, I understand that you are asking me to answer a question related to m"... 948 more characters See the LangSmith trace [here](https://smith.langchain.com/public/440e65ee-0301-42cf-afc9-f09cfb52cf64/r).
https://js.langchain.com/v0.2/docs/how_to/self_query
* [](/v0.2/) * [How-to guides](/v0.2/docs/how_to/) * How to do "self-querying" retrieval On this page How to do "self-querying" retrieval =================================== Prerequisites This guide assumes familiarity with the following concepts: * [Retrievers](/v0.2/docs/concepts#retrievers) * [Vector stores](/v0.2/docs/concepts#vectorstores) A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses an LLM to write a structured query and then applies that structured query to its underlying vector store. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents, but also to extract filters on the metadata of stored documents from the user query and to execute those filters. ![](/v0.2/assets/images/self_querying-9250153d059cdb0585bc60dd8dd07909.jpeg) Head to [Integrations](/v0.2/docs/integrations/retrievers/self_query) for documentation on vector stores with built-in support for self-querying. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- For demonstration purposes, we’ll use an in-memory, unoptimized vector store. You should swap it out for a supported, production-ready vector store when building a real application. The self-query retriever requires you to have the [`peggy`](https://www.npmjs.com/package/peggy) package installed as a peer dependency, and we’ll also use OpenAI for this example. Install with whichever package manager you prefer: npm i peggy @langchain/openai
yarn add peggy @langchain/openai
pnpm add peggy @langchain/openai We’ve created a small demo set of documents that contain summaries of movies: import "peggy";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction", length: 122, }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2, length: 148, }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3, length: 135, }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated", length: 77 }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),]; ### Creating our self-querying retriever[​](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever") Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
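For intuition before wiring everything up: the end goal is for a question like "Which movies are rated higher than 8.5?" to be translated into a structured query whose metadata filter — after passing through the translator used below — behaves roughly like the following predicate. This is a simplified sketch of the effect, not the retriever's actual internal representation:

```typescript
import type { Document } from "@langchain/core/documents";

// Roughly the effect of the generated filter for
// "Which movies are rated higher than 8.5?" (simplified sketch):
const ratingFilter = (doc: Document) => doc.metadata.rating > 8.5;
```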
import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { FunctionalTranslator } from "@langchain/core/structured_query";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import type { AttributeInfo } from "langchain/chains/query_constructor";/** * We define the attributes we want to be able to query on. * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. We provide a basic * translator here, but you can create your own translator by extending the * BaseTranslator abstract class. Note that the vector store needs to support * filtering on the metadata attributes you want to query on. */ structuredQueryTranslator: new FunctionalTranslator(),}); ### Testing it out[​](#testing-it-out "Direct link to Testing it out") And now we can actually try using our retriever! We can ask questions like “Which movies are less than 90 minutes?” or “Which movies are rated higher than 8.5?”. We can also ask questions like “Which movies are either comedy or drama and are less than 90 minutes?”. The translator within the retriever will automatically convert these questions into vector store filters that can be used to retrieve documents. await selfQueryRetriever.invoke("Which movies are less than 90 minutes?"); [ Document { pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated", length: 77 } }] await selfQueryRetriever.invoke("Which movies are rated higher than 8.5?"); [ Document { pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception"...
16 more characters, metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 } }, Document { pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9 } }] await selfQueryRetriever.invoke("Which movies are directed by Greta Gerwig?"); [ Document { pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3, length: 135 } }] await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?"); [ Document { pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated", length: 77 } }] Next steps[​](#next-steps "Direct link to Next steps") ------------------------------------------------------ You’ve now seen how to use the `SelfQueryRetriever` to generate vector store filters based on an original question. Next, you can check out the list of [vector stores that currently support self-querying](/v0.2/docs/integrations/retrievers/self_query/).