# Connecting to an MCP Server from JavaScript using LangChain.js

## Introduction
Model Context Protocol (MCP) is a standardized way to expose data and functionality to Large Language Model (LLM) applications in a secure, consistent manner. Initially developed by Anthropic, MCP has gained significant traction in the artificial intelligence (AI) ecosystem, with hundreds of tool servers already published. The protocol allows LLMs to interact with external systems through well-defined interfaces, enabling them to access real-time data, perform calculations, and take actions in response to user queries.
In March 2025, the LangChain community released MCP adapters for LangChain.js, a JavaScript library for building applications powered by LLMs. The adapters convert MCP tools into LangChain.js and LangGraph.js compatible tools, allowing developers to seamlessly incorporate them into their existing agent workflows.
In this post, I'll show you how to connect your JavaScript applications to an MCP server using LangChain.js and LangGraph.js. I'll cover both the STDIO and SSE transports, and walk through a simple example that creates an agent to interact with a Kubernetes cluster.
## Configuring the MCP clients
Before we can use MCP tools with LangChain.js, we need to set up MCP clients that can communicate with MCP servers. The TypeScript SDK for Model Context Protocol provides client implementations for different transport methods. Let's learn how to configure clients for both STDIO and SSE transports.
### Setting up an STDIO transport client
The STDIO transport is ideal for command-line tools and direct integrations. It allows communication with an MCP server that runs as a subprocess of your application. This approach is particularly useful for local execution, which is convenient for agents that perform tasks on the same machine where the server is running.
The following code snippet demonstrates how to set up an STDIO transport client:

```javascript
import {Client} from '@modelcontextprotocol/sdk/client/index.js';
import {StdioClientTransport} from '@modelcontextprotocol/sdk/client/stdio.js';

const initStdioClient = async () => {
  const stdioClient = new Client({
    name: 'blog.marcnuri.com',
    version: '1.0.0'
  });
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', 'kubernetes-mcp-server@latest']
  });
  await stdioClient.connect(transport);
  return stdioClient;
};
```
In this example, we're creating a new MCP client with a name identifier and connecting it to a transport that runs the `kubernetes-mcp-server` package using `npx`. The STDIO transport spawns a child process and communicates with it through its standard input and output streams.
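Under the hood, the messages exchanged over this transport are JSON-RPC 2.0 payloads written one per line. A minimal sketch of what a `tools/list` exchange might look like (the method name is part of the protocol; the tool in the response is an invented example, actual names depend on the server):

```javascript
// Illustrative JSON-RPC 2.0 messages as exchanged over the STDIO transport.
// 'tools/list' is a real MCP method; the response content below is invented.
const request = {jsonrpc: '2.0', id: 1, method: 'tools/list'};

const response = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    tools: [
      {name: 'pods_list', description: 'List all pods in the cluster'}
    ]
  }
};

// The transport writes one JSON object per line to the server's stdin
// and reads JSON responses from its stdout.
const wireFrame = JSON.stringify(request) + '\n';
console.log(wireFrame.trim());
```

The client and transport classes handle this framing for you; the sketch only shows the shape of what travels over the pipes.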
These are the key components of the STDIO transport client setup:

- We create a new `Client` instance with a name that identifies the client to the server.
- We create a new `StdioClientTransport` instance with the command to execute the MCP server:
  - `command`: The command to execute the MCP server.
  - `args`: The array of arguments to pass to the command.
- We call the `connect` method on the client instance, passing the transport instance to establish the connection.
### Setting up an SSE transport client
The Server-Sent Events (SSE) transport is more suitable for MCP servers running in a remote environment. It establishes a persistent connection with the MCP server over HTTP using SSE for streaming communication.
The following code snippet demonstrates how to set up an SSE transport client:

```javascript
import {Client} from '@modelcontextprotocol/sdk/client/index.js';
import {SSEClientTransport} from '@modelcontextprotocol/sdk/client/sse.js';

const initSseClient = async () => {
  const sseClient = new Client({
    name: 'blog.marcnuri.com',
    version: '1.0.0'
  });
  const transport = new SSEClientTransport(new URL('http://localhost:8080/sse'));
  await sseClient.connect(transport);
  return sseClient;
};
```
In this example, we're creating a new MCP client and connecting it to an SSE transport that points to a local server on port 8080. This assumes that you have an MCP server running at that URL with an SSE endpoint. When the client connects, it establishes a long-lived connection to the server using the Server-Sent Events protocol, allowing the server to push messages to the client in real-time.
These are the key components of the SSE transport client setup:

- We create a new `Client` instance with a name that identifies the client to the server.
- We create a new `SSEClientTransport` instance with the URL of the MCP server's SSE endpoint.
- We call the `connect` method on the client instance, passing the transport instance to establish the connection.
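Server-Sent Events deliver each message as a plain-text frame in which the payload sits on `data:` lines. A rough sketch of how such a frame carrying a JSON-RPC message is parsed (the frame content is illustrative, not an actual kubernetes-mcp-server response; the transport does this parsing for you):

```javascript
// Illustrative SSE frame; the payload is an example JSON-RPC message.
const frame = 'event: message\ndata: {"jsonrpc":"2.0","id":1,"result":{"tools":[]}}\n\n';

// Minimal parse: collect the data lines, strip the prefix, and join them,
// as the SSE format prescribes.
const data = frame
  .split('\n')
  .filter(line => line.startsWith('data:'))
  .map(line => line.slice('data:'.length).trim())
  .join('\n');

const message = JSON.parse(data);
console.log(message.jsonrpc); // '2.0'
```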
## Setting up LangChain.js MCP adapters
Now that we have our MCP clients set up, we can use the LangChain.js MCP adapters to integrate MCP tools with LangChain.js. The `@langchain/mcp-adapters` package provides a simple way to load MCP tools and use them with LangChain agents: its `loadMcpTools` function wraps the MCP tools and makes them compatible with LangChain.js.
The following code snippet demonstrates how to load the MCP tools:

```javascript
import {loadMcpTools} from '@langchain/mcp-adapters';

// Assuming you've already set up an MCP client as shown above
const stdioClient = await initStdioClient();
const tools = await loadMcpTools('kubernetes-mcp-server', stdioClient);
```
The `loadMcpTools` function takes two arguments:

- A name for the tools (this can be any string you choose).
- The MCP client instance.
It returns an array of LangChain-compatible tools you can use with LangGraph agents. Behind the scenes, this function discovers the available tools from the MCP server and creates corresponding LangChain tool objects for each.
The adapter handles all the protocol-specific details, allowing you to focus on building your application logic.
It also provides features like:
- 🔌 Multiple transport options with reconnection strategies.
- 🔄 Connecting to multiple MCP servers simultaneously
- 🧩 Seamless integration with LangChain agents
- 🛠️ Graceful error handling when servers aren't available
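The graceful-error-handling point can be sketched with plain promises: when loading tools from several servers, one unavailable server shouldn't take down the rest. The loaders below are stubs standing in for `loadMcpTools(name, client)` calls; the tool names are invented for illustration:

```javascript
// Sketch of the graceful-degradation pattern: load tools from several
// sources and keep going when one of them is unavailable.
const loadAllTools = async (loaders) => {
  const results = await Promise.allSettled(loaders.map(load => load()));
  return results
    .filter(result => result.status === 'fulfilled')
    .flatMap(result => result.value);
};

// Stub loaders standing in for loadMcpTools(name, client) calls;
// the second one fails to illustrate the pattern.
const loaders = [
  async () => [{name: 'pods_list'}],
  async () => { throw new Error('server unavailable'); },
  async () => [{name: 'resources_get'}]
];

loadAllTools(loaders).then(tools => {
  console.log(tools.map(tool => tool.name)); // ['pods_list', 'resources_get']
});
```

`Promise.allSettled` never rejects, so a failed server simply contributes no tools instead of aborting the whole startup.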
## Using LangGraph.js to create an agent
We now have all the pieces together to create an agent using LangGraph.js that can leverage the MCP tools. LangGraph.js provides a simple way to create complex agent workflows, and the prebuilt agents make it even easier to get started.
Here's a complete example that shows how to create an agent that can interact with a Kubernetes cluster using the kubernetes-mcp-server:

```javascript
import {createReactAgent} from '@langchain/langgraph/prebuilt';
import {ChatOpenAI} from '@langchain/openai';
import {loadMcpTools} from '@langchain/mcp-adapters';

const assistant = async () => {
  const model = new ChatOpenAI({
    configuration: {
      apiKey: process.env['GITHUB_TOKEN'],
      baseURL: 'https://models.inference.ai.azure.com'
    },
    model: 'gpt-4o-mini'
  });
  // initStdioClient is the helper defined in the STDIO transport section
  const stdioClient = await initStdioClient();
  const tools = await loadMcpTools('kubernetes-mcp-server', stdioClient);
  const agent = createReactAgent({
    llm: model,
    tools
  });
  const listPods = await agent.invoke({
    messages: [{
      role: 'user',
      content: 'List all pods in my cluster and output as markdown table'
    }]
  });
  console.log(listPods.messages.slice(-1)[0].content);
  await stdioClient.close();
};

assistant()
  .then(() => {
    console.log('done');
  })
  .catch(err => {
    console.error('Error:', err);
  });
```
In this example, we create a very simple assistant that connects the `kubernetes-mcp-server` tool with the `gpt-4o-mini` language model provided by GitHub. As you can see, the LangChain ecosystem makes it straightforward to create agents that can interact with MCP servers.
These are the key components of the agent example:
ChatOpenAI
:
Creating a new ChatOpenAI instance configured to use thegpt-4o-mini
model from Azure AI/GitHub Marketplace.configuration
: The configuration object for the OpenAI API, including the API key and base URL.model
: The name of the language model to use.
loadMcpTools
:
Loads the MCP tools from the STDIO client as described in the previous section.createReactAgent
:
Creates a LangGraph.js ReAct (Reasoning and Acting) agent using the models and tools we loaded.invoke
:
Invokes the agent with a user request to list all pods in the Kubernetes cluster. The agent will use the tools to perform the action and return the result.listPods.messages
:
The agent's response is stored in thelistPods
variable. We print the last message from the response, which contains the action result.
The agent follows a ReAct (Reasoning and Acting) pattern, alternating between reasoning about what to do next and taking action using the available tools. In this case, the agent can use the tools provided by the Kubernetes MCP server to interact with your Kubernetes cluster.
The output of the example will be similar to this:
Here is the list of pods in your cluster presented as a markdown table:
```markdown
| Name | Namespace | Status | Container Image | Pod IP |
|--------------------------|-----------|---------|-----------------------|-------------|
| kubernetes-mcp-run-nxvvq | default | Running | marcnuri/chuck-norris | 10.133.7.0 |
| kubernetes-mcp-run-rhqzb | default | Running | marcnuri/chuck-norris | 10.133.7.0 |
```
You can copy and paste this markdown into any markdown viewer to see it formatted correctly.
However, the `listPods.messages` variable contains the complete list of messages exchanged between the agent and the LLM. In this particular case, the invocation involved two messages.
The beauty of this approach is that the agent can decide which tools to use based on the user's request. For example, if the user asks to list pods, the agent will identify that it needs to use the appropriate Kubernetes tool to fetch pod information, and then format the result as a Markdown table.
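This reason-then-act cycle can be made concrete with a toy sketch. The "model" and the tool below are stubs with invented names and replies; a real agent delegates this loop to `createReactAgent` and an actual LLM:

```javascript
// Toy ReAct loop with a stubbed model and a single stubbed tool.
const toolRegistry = {
  pods_list: async () => 'pod-a, pod-b'
};

// Stub model: first it decides to call a tool, then it answers using the
// tool result it finds at the end of the conversation.
const model = async (messages) => {
  const last = messages[messages.length - 1];
  if (last.role === 'user') {
    return {role: 'ai', toolCall: {name: 'pods_list', args: {}}};
  }
  return {role: 'ai', content: `Pods: ${last.content}`};
};

const runAgent = async (userMessage) => {
  const messages = [{role: 'user', content: userMessage}];
  for (;;) {
    const reply = await model(messages); // reasoning step
    messages.push(reply);
    if (!reply.toolCall) {
      return reply.content; // final answer, stop the loop
    }
    // Acting step: execute the requested tool and feed the result back.
    const result = await toolRegistry[reply.toolCall.name](reply.toolCall.args);
    messages.push({role: 'tool', content: result});
  }
};

runAgent('List all pods').then(answer => console.log(answer)); // Pods: pod-a, pod-b
```

The real agent follows the same skeleton, except the model is an LLM choosing among the tools discovered from the MCP server.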
## Conclusion
In this post, I've shown you how to connect your JavaScript applications to an MCP server using LangChain.js and LangGraph.js. I've covered how to set up both STDIO and SSE transport clients, how to use the MCP adapters to load tools from the server, and how to bind everything together using LangGraph.js to create an agent. The combination of MCP, LangChain.js, and LangGraph.js provides a powerful platform for building LLM-powered applications that interact with various tools and resources. With the MCP adapters, you can seamlessly integrate the growing ecosystem of MCP tool servers into your LangChain and LangGraph agents.
You can find the source code for this post on GitHub.