Connecting to an MCP Server from JavaScript using AI SDK
Introduction
Model Context Protocol (MCP) provides a standardized way for artificial intelligence (AI) models to interact with external tools and systems. If you're new to this protocol, check my introduction to MCP to learn more about it.
By connecting to an MCP server, your JavaScript applications can leverage Large Language Models (LLMs) with the added ability to execute commands, retrieve data, and interact with various systems. Vercel AI SDK has a growing list of supported model providers that can connect to an MCP server, making it an excellent alternative to the LangChain.js library.
In this post, I'll show you how to connect your JavaScript applications to an MCP server using Vercel AI SDK.
I'll describe how to set up the MCP client, configure the AI SDK provider, and prompt the model using the `generateText` function.
By the end of this post, you'll be able to integrate your JavaScript applications with an MCP server and leverage the power of LLMs to interact with the real world.
Creating the MCP clients
To interact with an MCP server, we first need to create an MCP client.
AI SDK provides a convenient function called `experimental_createMCPClient` that allows us to establish this connection.
Let's learn how to create two different types of MCP clients: one using the STDIO transport and another using the SSE transport.
Setting up an STDIO transport client
The STDIO transport method allows us to spawn a local process and communicate with it through standard input and output streams. This is useful for running local MCP servers or processes that expose the local system's capabilities.
The following code snippet demonstrates how to set up an STDIO transport client for the Kubernetes MCP server:
```javascript
import {
  experimental_createMCPClient as createMcpClient
} from 'ai';
import {
  Experimental_StdioMCPTransport as StdioClientTransport
} from 'ai/mcp-stdio';

const initStdioClient = async () => {
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', 'kubernetes-mcp-server@latest']
  });
  return createMcpClient({name: 'blog.marcnuri.com', transport});
};
```
In this code, we're creating an MCP client instance that runs the Kubernetes MCP server using the `npx` command.
These are the key steps to set up the STDIO transport client:
- Import the necessary modules from the `ai` and `ai/mcp-stdio` packages.
- Create a new instance of `StdioClientTransport`, passing the command and arguments to run the MCP server.
- Call `createMcpClient` with the transport instance to create the MCP client and establish the connection.
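Once the client is created, you can retrieve the tool definitions it exposes and close it when you're done. Here's a minimal usage sketch building on the `initStdioClient` function above:

```javascript
// Minimal usage sketch: fetch the tool definitions exposed by the MCP server
// and release the underlying process when we're done.
const stdioClient = await initStdioClient();
try {
  const tools = await stdioClient.tools();
  // Each entry is an AI SDK tool definition the model can call.
  console.log('Available tools:', Object.keys(tools));
} finally {
  await stdioClient.close();
}
```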
Setting up an SSE transport client
The Server-Sent Events (SSE) transport method allows us to connect to a remote MCP server over HTTP. This is useful when you have an MCP server already running as a service that you want to connect to.
The following code snippet demonstrates how to set up an SSE transport client:
```javascript
import {
  experimental_createMCPClient as createMcpClient
} from 'ai';

const initSseClient = async () => {
  return createMcpClient({
    name: 'blog.marcnuri.com',
    transport: {
      type: 'sse',
      url: `http://localhost:8080/sse`
    }
  });
};
```
In this fragment, we're creating an MCP client that connects to a local server running on port 8080 using the SSE protocol. This approach is more suitable for production environments or when you have a dedicated MCP server running.
Note
In production environments, your URL should point to the actual address of your MCP server (which should also be HTTPS-enabled).
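For illustration, a production-oriented configuration might look like the following sketch. The `https://mcp.example.com/sse` endpoint and the `MCP_TOKEN` environment variable are placeholders, and the optional `headers` field (used here to pass an authentication token) is accepted by the SSE transport configuration in recent AI SDK versions:

```javascript
// Sketch: connect to a remote, HTTPS-enabled MCP server.
// Reuses the createMcpClient import from the previous snippet.
const initRemoteSseClient = async () => {
  return createMcpClient({
    name: 'blog.marcnuri.com',
    transport: {
      type: 'sse',
      // Placeholder URL: replace with your MCP server's address.
      url: 'https://mcp.example.com/sse',
      // Optional headers, for example to authenticate against the server.
      headers: {
        Authorization: `Bearer ${process.env['MCP_TOKEN']}`
      }
    }
  });
};
```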
These are the key steps to set up the SSE transport client:
- Import the necessary module from the `ai` package.
- Call `createMcpClient` with the transport configuration, specifying the type as `sse` and providing the URL of the MCP server.
- The function returns an MCP client instance that can be used to interact with the remote server.
Notice how the SSE-based transport is much simpler to set up than the STDIO-based transport.
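Because both clients expose the same `tools()` method, you can also combine the tools from several MCP servers into a single set. A small sketch, assuming the `initStdioClient` and `initSseClient` functions defined above:

```javascript
// Sketch: merge the tool sets exposed by the STDIO and SSE clients
// into a single record that can be passed to the model (shown in the next sections).
const stdioClient = await initStdioClient();
const sseClient = await initSseClient();

const tools = {
  ...(await stdioClient.tools()),
  ...(await sseClient.tools())
};
```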
Setting up the AI SDK provider and model
Once we have the MCP client set up, we need to configure the AI SDK provider and model that will leverage the MCP server's tools. The AI SDK has a growing list of supported providers; in this case, we'll use the Google Generative AI provider.
The following code snippet demonstrates how to set up the Google Generative AI provider with the Gemini model:
```javascript
import {createGoogleGenerativeAI} from '@ai-sdk/google';

const google = createGoogleGenerativeAI({
  apiKey: process.env['GOOGLE_API_KEY']
});

const model = google('gemini-2.0-flash');
```
In this code, we're creating an instance of the Google Generative AI provider and specifying the model we want to use:
- `createGoogleGenerativeAI` is a function that initializes the provider with the necessary API key.
- The `google` function is then called with the model name to create a model instance that can be used for generating text.
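Swapping to a different provider follows the same pattern. For example, here's a rough sketch using the OpenAI provider instead (it assumes the `@ai-sdk/openai` package is installed and an `OPENAI_API_KEY` environment variable is set):

```javascript
// Sketch: only the provider package and model id change.
import {createOpenAI} from '@ai-sdk/openai';

const openai = createOpenAI({
  apiKey: process.env['OPENAI_API_KEY']
});

const model = openai('gpt-4o-mini');
```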
Note
The API key should be set in your environment variables for security reasons, rather than hardcoded in your application.
In this example, we're using the `GOOGLE_API_KEY` environment variable to store the API key.
Prompting the model with generateText
Now that we've set up all the necessary components, we can put the pieces together to prompt the AI model and let it interact with the MCP server's tools.
In this example, we are going to generate text by asking the model to list all Kubernetes pods and format the output as a Markdown table.
The following code snippet shows the relevant parts of the pipeline:
```javascript
import {generateText} from 'ai';
import {createGoogleGenerativeAI} from '@ai-sdk/google';

const assistant = async () => {
  console.log('Starting kubernetes-mcp-server in STDIO mode');
  const stdioClient = await initStdioClient();
  // Retrieve the tool definitions exposed by the MCP server
  const tools = await stdioClient.tools();
  const google = createGoogleGenerativeAI({
    apiKey: process.env['GOOGLE_API_KEY']
  });
  const model = google('gemini-2.0-flash');
  const listPods = await generateText({
    model,
    tools,
    maxSteps: 10,
    messages: [{
      role: 'user',
      content: 'List all pods in my cluster and output as Markdown table'
    }]
  });
  console.log(listPods.text);
  await stdioClient.close();
};

assistant()
  .then(() => {
    console.log('done');
  })
  .catch(err => {
    console.error('Error:', err);
  });
```
In this code, we create an assistant that connects the `gemini-2.0-flash` model to the Kubernetes MCP server tools and performs a simple prompt.
These are the key components of this example:
- We first initialize the STDIO client, retrieve the tools it exposes, and set up the Google Generative AI provider.
- We then call the `generateText` function with the following parameters:
  - `model`: The model instance we created earlier.
  - `tools`: The tools available from the MCP server.
  - `maxSteps`: The maximum number of steps the model can take to generate the response. Setting the value to more than 1 ensures the model can perform the initial prompt and a subsequent prompt with the tool execution result.
  - `messages`: An array of messages that represent the conversation history. In this case, we provide a single user message asking the model to list all pods in the cluster and format the output as a Markdown table.
- Finally, we log the generated text to the console and close the MCP client.
The output of this invocation will be similar to:
| Name | Namespace |
|---|---|
| vibe-code-game-65c6fdd6d7-lp47m | default |
| yakd-dashboard-66cf44d6db-qv4gz | yakd-dashboard |
Note that the `listPods` variable contains more than the final text: it holds the entire conversation history, including the model's intermediate reasoning and the tool execution results.
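For instance, building on the previous snippet, you can inspect the intermediate steps to see which tools the model called and what they returned. A small sketch of what that might look like (the exact shape of the result object may vary between AI SDK versions):

```javascript
// Sketch: inspect the intermediate steps of the generateText result.
for (const step of listPods.steps) {
  for (const toolCall of step.toolCalls) {
    console.log('Tool called:', toolCall.toolName, toolCall.args);
  }
  for (const toolResult of step.toolResults) {
    console.log('Tool result:', toolResult.result);
  }
}
```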
Conclusion
In this post, I showed you how to connect to an MCP server from JavaScript using the Vercel AI SDK.
We learned how to set up both STDIO and SSE transport clients, configure the AI SDK provider and model from Google, and prompt the model using the `generateText` function to generate useful output.
This modular architecture not only illustrates the flexibility of the SDK but also shows how simple code fragments can be combined to build complex, real-world applications. By following these steps, you can easily integrate your JavaScript applications with an MCP server and leverage the power of LLMs to interact with external tools and systems.
As the MCP ecosystem matures, JavaScript developers are well-positioned to leverage the growing catalog of standardized AI tools.
You can find the source code for this post on GitHub.