DeepSeek is the latest buzzword in the world of AI. DeepSeek is a Chinese AI startup, founded in May 2023, that operates as an independent AI research lab and has gained attention around the globe for developing very powerful large language models (LLMs) at a cost its US counterparts cannot match.
Also: What is sparsity? DeepSeek AI's secret, revealed by Apple researchers
One reason for this lower cost is that DeepSeek is open-source. The company has also claimed it has created a way to develop LLMs at a much lower cost than US AI companies. DeepSeek models also perform as well as (if not better than) other models, and the company has released different models for different purposes (such as programming, general-purpose, and vision).
My experience with DeepSeek has been interesting so far. What I've found is that DeepSeek always seems to be having a conversation with itself as it relays information to the user. The responses tend to be long-winded and can send me down several different rabbit holes, each of which leads to me learning something new.
I do love learning new things.
Also: How I feed my files to a local AI for better, more relevant responses
If you're interested in DeepSeek, you don't have to rely on a third party to use it. That's right -- you can install DeepSeek locally and use it at your whim.
There are two easy ways to make this happen, and I'm going to show you both.
What you'll need: For this, you'll need both Ollama and Msty installed -- and that's it. You can use this on Linux, macOS, or Windows, and it won't cost you a penny.
The first step is to open the Msty GUI. How you do this will depend on the OS you use.
From the left sidebar, click the icon that looks like a computer monitor with a lightning bolt, which will open the Local AI Models section.
Make sure Msty is updated by clicking the cloud icon.
In the Local AI Models section, you'll see DeepSeek R1. Click the download button (downward pointing arrow) to add the DeepSeek model to Msty. Once the download completes, close the Local AI Models window.
Make sure to select DeepSeek R1.
Back at the main window, click the model selection drop-down, click DeepSeek R1 (under Local AI), and type your query.
You can install as many local models as you need.
Another option is to do a full install of DeepSeek on Linux. Before you do, know that the system requirements are fairly steep. You'll need a minimum of:
If your system meets those requirements, and you already have Ollama installed, you can run the DeepSeek R1 model with:
ollama run deepseek-r1:8b
If you haven't already installed Ollama, you can do that with a single command:
curl -fsSL https://ollama.com/install.sh | sh
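Before pulling the model, it's worth confirming the install actually landed. A minimal sketch in Python (standard library only) that checks whether the ollama CLI is on your PATH and prints its version -- the helper name here is illustrative, not part of Ollama's tooling:

```python
# Sketch: verify the ollama CLI is installed before pulling a model.
# `ollama --version` is a real flag; the helper name is just for illustration.
import shutil
import subprocess


def ollama_installed() -> bool:
    """Return True if the ollama binary is on the PATH."""
    return shutil.which("ollama") is not None


if __name__ == "__main__":
    if ollama_installed():
        result = subprocess.run(["ollama", "--version"],
                                capture_output=True, text=True)
        print(result.stdout.strip())
    else:
        print("ollama not found; run the install script first")
```

If the check fails, re-run the install script and make sure your shell's PATH picked up the new binary.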
Also: I tried Sanctum's local AI app, and it's exactly what I needed to keep my data private
You'll be prompted for your user password.
There are other versions of DeepSeek R1 you can run:
Once the command completes, you'll find yourself at the Ollama prompt, where you can start using the model of your choice.
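Beyond the interactive prompt, Ollama also serves a local REST API (by default at http://localhost:11434) while it's running, so you can script queries against DeepSeek R1. A minimal sketch in Python using only the standard library; the endpoint and payload fields follow Ollama's documented /api/generate API, but treat the function names as illustrative:

```python
# Sketch: query a local DeepSeek R1 model through Ollama's REST API.
# Assumes the Ollama server is running and deepseek-r1:8b has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's reply."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask("deepseek-r1:8b", "Explain quantization in one sentence."))
```

Because everything runs against localhost, your prompts never leave your machine -- the same privacy benefit the interactive prompt gives you.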
Also: These nations are banning DeepSeek AI - here's why
Either way you go, you now have access to DeepSeek AI and can use it while keeping all of your queries and information safe on your local machine.