Using DeepSeek-R1 with the Goose AI agent and Groq as the provider
Introduction
DeepSeek-R1 made headlines in late January 2025. DeepSeek, a Chinese artificial intelligence (AI) startup, unveiled DeepSeek-R1, a large language model (LLM) that performs comparably to leading AI models such as those released by OpenAI and Google.
The release of DeepSeek-R1 had a considerable impact on the global AI landscape. Its emergence has been linked to fluctuations in tech stock markets and has prompted discussions about the evolving dynamics of AI development and competition.
A notable feature of DeepSeek-R1 is its open-source nature. The company has made the model and its codebase publicly available, allowing researchers and developers to access and build upon its architecture. This move promotes transparency and collaboration within the AI community.
However, DeepSeek is a Chinese company, and its model playground is hosted in China. This has raised data privacy and security concerns among many users, leading to reluctance to use the model.
In this post, I'll show you how to try DeepSeek-R1 from Goose using Groq, whose data centers are located outside China.
Using DeepSeek-R1 from Goose
If you don't already have Goose installed, you can follow the instructions in the Goose introduction post to get started.
To use DeepSeek-R1 from Goose, you need to set Groq as the provider for the LLM. This can be done by running the following command:
goose configure
This command will start the Goose CLI configuration wizard. We'll select the following options:
This will update your existing config file
if you prefer, you can edit it directly at /home/user/.config/goose/config.yaml
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Groq
│
● GROQ_API_KEY is already configured
│
◇ Would you like to update this value?
│ No
│
◇ Enter a model from that provider:
│ deepseek-r1-distill-llama-70b
│
◇ <think>
In this case, we've selected Groq as the provider and the DeepSeek-R1 distilled model (deepseek-r1-distill-llama-70b).
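After the wizard completes, the configuration is written to the config file mentioned above. As a rough sketch, the relevant entries should look something like the following (exact key names may vary between Goose versions, so treat this as illustrative rather than authoritative):

```yaml
# ~/.config/goose/config.yaml (sketch; key names may differ by Goose version)
GOOSE_PROVIDER: groq
GOOSE_MODEL: deepseek-r1-distill-llama-70b
```

The GROQ_API_KEY itself is stored separately as a secret rather than in this file.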
Trying out DeepSeek-R1
Now that we've configured Goose to use Groq as the provider and the DeepSeek-R1 model, we can start using the model. To do this, we'll run the following command:
goose session
Once the session is started, you can start typing your prompts.
I'll start the conversation by asking who are you?:
starting session | provider: groq model: deepseek-r1-distill-llama-70b
logging to /home/user/.config/goose/sessions/56L97Uq2.jsonl
Goose is running! Enter your instructions, or try asking what goose can do.
( O)> who are you?
I'm an AI assistant created by DeepSeek. I'm at your service and would be delighted to assist you with
any inquiries or tasks you may have.
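Under the hood, Goose is sending the prompt to Groq's OpenAI-compatible API. If you want to verify your GROQ_API_KEY and the model outside of Goose, here is a minimal sketch using the openai Python package. Note that the strip_think helper is my own addition (not part of Goose or Groq): DeepSeek-R1 models emit their chain-of-thought between <think> and </think> tags, and you often want only the final answer.

```python
import os
import re


def strip_think(text: str) -> str:
    """Remove DeepSeek-R1's <think>...</think> reasoning block from a reply."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()


def ask(prompt: str) -> str:
    """Send a prompt to deepseek-r1-distill-llama-70b via Groq's
    OpenAI-compatible endpoint and return the cleaned answer."""
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["GROQ_API_KEY"],
        base_url="https://api.groq.com/openai/v1",
    )
    resp = client.chat.completions.create(
        model="deepseek-r1-distill-llama-70b",
        messages=[{"role": "user", "content": prompt}],
    )
    return strip_think(resp.choices[0].message.content)


if __name__ == "__main__":
    print(ask("who are you?"))
```

This is just a sanity-check sketch; for day-to-day use, the Goose session shown above is the more convenient interface.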
It seems to be working fine. Let's now ask a more controversial question, such as who is Tank Man?:
( O)> who is Tank Man?
I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and
harmless responses.
As you may have seen in the news, the model refuses to answer some questions that appear to be subject to censorship. That limitation is baked into the model itself, but by running it through Groq you can at least try it out without concerns about where your data is sent.
Conclusion
In this post, I've shown you how to try out the DeepSeek-R1 model from Goose using Groq as the provider. This allows you to use the model without concerns about data privacy and security.