Introducing Goose, the on-machine AI agent
Introduction
In January 2025, Block introduced Goose, an open-source extensible AI agent distributed as a command-line interface (CLI) and a Desktop application. Goose runs locally, can be connected to different large language model (LLM) providers, and is extensible through Model Context Protocol (MCP) servers.

In this post, I'll guide you through setting up Goose CLI in Linux, connecting it to Google Gemini, and extending its capabilities using an MCP server.
Setting up Goose CLI
The Goose Command Line Interface (CLI) is a lightweight tool that allows you to interact with Goose directly from your terminal. I prefer using the CLI because it provides a simple and efficient way to interact with the AI model without needing a graphical user interface.
First, download and install the Goose CLI. The getting started guide provides installation instructions for various operating systems.
For Linux, install the latest version of Goose using:
curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash
This script fetches and installs the latest Goose version for your distribution.
Important
Consider verifying the script's source and checksum before execution for security best practices.
Alternatively, visit the GitHub releases page to download the appropriate binary for your operating system.
Configuring the LLM Provider (Google Gemini)
Next, configure Goose to use an LLM provider. In this case, I'll use the Gemini API, Google's LLM offering. I chose this option because it has a free tier with generous rate limits, allowing for experimentation without incurring costs.
This is the fastest way to get started with Goose. However, you may need to switch providers or upgrade to a paid plan if you hit the rate limits.
To connect Goose to Gemini, you need to create a Google account and obtain an API key. You can generate this key in the Google AI Studio:

Click the "Create API Key" button to generate a new API key.

We will use this API key to set Google Gemini as the Goose provider. You can do this by running the following command:
goose configure
The goose configure command will guide you through setting up the provider, including entering your API key and selecting the default model.

If the configuration is successful, you'll see a confirmation message.
If you encounter issues, check your API key, your internet connection, or the logs in your ~/.config/goose directory for more information.
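After the configure flow completes, the choices are persisted to the configuration file. On the version I tested, the relevant entries look roughly like this (key names and the model id may differ across releases, and the API key itself is stored separately, e.g. in the system keyring, rather than in this file):

```yaml
# Excerpt from ~/.config/goose/config.yaml after running goose configure
GOOSE_PROVIDER: google
GOOSE_MODEL: gemini-2.0-flash-exp
```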
Now that Google Gemini is configured as our LLM provider, let's explore Goose’s capabilities by starting a new chat session.
Running Goose for the first time
Start a new session by running the following command:
goose session
This command launches a terminal-based chat session where you can interact with the LLM.

In the image above, you can see the Goose session running in the terminal. In this case, I asked the model to list its capabilities, and it responded with a list of supported commands.
Setting up Puppeteer MCP server
To demonstrate Goose's extensibility, let's set up the reference Puppeteer MCP server.
You can do this through the command line or by editing your goose/config.yaml file.
For this example, we will use the command line:
goose configure
The CLI will prompt:
This will update your existing config file
if you prefer, you can edit it directly at /home/user/.config/goose/config.yaml
┌ goose-configure
│
◆ What would you like to configure?
│ ○ Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ● Add Extension
└
Select Add Extension, choose Command-line Extension, and enter the following details for the Puppeteer MCP server:
┌ goose-configure
│
◇ What would you like to configure?
│ Add Extension
│
◇ What type of extension would you like to add?
│ Command-line Extension
│
◇ What would you like to call this extension?
│ puppeteer
│
◇ What command should be run?
│ npx -y @modelcontextprotocol/server-puppeteer
│
◇ Would you like to add environment variables?
│ No
│
└ Added puppeteer extension
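The same result can be achieved by editing the configuration file directly. On the version I tested, the resulting entry looks roughly like this (field names may vary between releases, so treat this as a sketch rather than a reference):

```yaml
# Equivalent entry in ~/.config/goose/config.yaml
extensions:
  puppeteer:
    enabled: true
    type: stdio
    name: puppeteer
    cmd: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-puppeteer"
    envs: {}
```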
Once you have added the Puppeteer MCP server extension, you can start a new session and interact with it through Goose.
Note
To run this extension, you need to have Node.js installed on your system.
Alternatively, you can use Docker instead: docker run -i --rm --init -e DOCKER_CONTAINER=true mcp/puppeteer.
Using Goose to open a browser and navigate to a website
To demonstrate Goose's capabilities with the Puppeteer MCP server, we will use it to open a browser and navigate to a website. Start a new session by running:
goose session
Once the session is running, you can interact with the Puppeteer MCP server by typing the following prompt:
Open a new browser window and navigate to https://blog.marcnuri.com, evaluate the links to find the author's LinkedIn, Bluesky, GitHub, and Twitter URLs
Goose will use the Puppeteer MCP server to open a new browser window and navigate to the specified website. It will then perform a script evaluation to find the author's LinkedIn, Bluesky, GitHub, and Twitter URLs.

In the previous image, you can see the Goose session with the Puppeteer MCP server running in the terminal.
After I submitted the prompt, Goose followed this sequence of steps until it returned the list of links:
- Send the list of available commands and the user's prompt to the LLM server.
- Evaluate the LLM's response and perform the following actions through Puppeteer functions:
  - Open a new browser window.
  - Navigate to the specified website.
  - Evaluate the website's content using the LLM-provided JavaScript to find the author's social media links.
- Send the MCP server function results back to the LLM server.
- Show the result to the user.
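Under the hood, the Puppeteer actions in these steps translate into MCP tools/call requests from Goose to the server. The navigation step, for example, corresponds to a JSON-RPC message shaped roughly like this (the tool name is taken from the reference Puppeteer server; treat the exact shape as illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "puppeteer_navigate",
    "arguments": {
      "url": "https://blog.marcnuri.com"
    }
  }
}
```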
In this case, it interacted with a single MCP server, but you can add multiple servers to further extend Goose's capabilities.
Using an alternative LLM provider (Groq)
One of my favorite features of Goose is its ability to connect to different LLM providers. So far we've used Google Gemini because it can be accessed for free, but you may want to switch to a different provider for various reasons.
Let's try connecting to Groq, a powerful LLM provider that offers a wide range of models and capabilities and is known for its speed and efficiency. Groq's unique Language Processing Unit (LPU) architecture delivers high throughput while consuming minimal power, making it ideal for real-time AI tasks.
Groq offers a free tier too, but its token-per-minute rate limit might be too restrictive for some use cases. This is especially true for MCP servers that require continuous interaction with the LLM provider.
To connect Goose to Groq, you need to create an account on the Groq website and obtain an API key. You can generate this key in the Groq cloud console:

We will use this API key to set Groq as the Goose provider. You can do this by running the following command:
goose configure
The goose configure command will guide you through setting up the provider, including entering your API key and selecting the default model.
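In terms of configuration, switching providers amounts to rewriting the same two keys in the config file. After the switch, they look something like this (the model id is just an example; pick any model your Groq account can access):

```yaml
# ~/.config/goose/config.yaml excerpt after switching to Groq
GOOSE_PROVIDER: groq
GOOSE_MODEL: llama-3.3-70b-versatile
```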

With a single command, you can switch between LLM providers and explore their capabilities with Goose. This makes it easy to leverage the specific strengths of a variety of AI models and services without juggling different tools or interfaces.
Conclusion
In this post, I introduced Goose, an open-source AI agent that runs locally, connects to LLM providers like Google Gemini or Groq, and extends its capabilities via MCP servers. I demonstrated how to set up Goose, configure Google Gemini, and use the Puppeteer MCP server for browser automation.
Despite its potential, Goose and MCP are still in their early days. Many MCP servers remain experimental, with limited capabilities. For instance, the Puppeteer server supports only basic automation tasks like navigation, screenshots, and basic element interaction.
As the community continues to expand, I expect Goose and MCP to evolve, redefining how developers automate AI-powered workflows. Stay tuned for future integrations and community-driven extensions!