
Running Mistral LLM Locally with Ollama

krishankant singhal
3 min read · Feb 17, 2024


In this guide, we’ll walk through downloading Ollama, running Mistral, and using the model through LangChain. Ollama is a tool for running large language models locally, and Mistral is an open-weight model you can serve with it. By following these steps, you’ll be able to run Mistral entirely on your own machine.

Downloading and Installing Ollama:

  • To run Mistral locally, download Ollama from the official website (ollama.com).
  • Once downloaded, move Ollama to your /Applications directory (for macOS).
  • You can now run Ollama from the terminal.
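To verify the installation, run ollama --version in the terminal. Later, ollama list will show the models you have downloaded so far.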

Running Mistral with Ollama:

  • Open your terminal and execute one of the following commands. The first run will download the model weights:
  • For the default Instruct model: ollama run mistral
  • For the text completion model: ollama run mistral:text
  • Note that you’ll need at least 8GB of RAM to run Mistral with Ollama.
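Beyond the interactive prompt, Ollama also exposes a local REST API (port 11434 by default) that you can call from Python. A minimal sketch, assuming the requests package is installed and the mistral model has already been pulled:

import requests

# Call the generate endpoint of the local Ollama server (default port 11434)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
)

# With streaming disabled, the full completion arrives as a single JSON object
print(resp.json()["response"])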

Using LangChain, Ollama, and Mistral:

pip install langchain langchain-community langchain-mistralai
from langchain_community.document_loaders import TextLoader
from langchain_community.llms import Ollama
from langchain_mistralai.chat_models import ChatMistralAI
from langchain_mistralai.embeddings import MistralAIEmbeddings
from…
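The excerpt above is cut off, so here is a minimal sketch of the shortest path from these imports to a working call. Note that only the Ollama class runs locally; ChatMistralAI and MistralAIEmbeddings talk to Mistral’s hosted API and require an API key. The prompt is just an illustration:

from langchain_community.llms import Ollama

# Point LangChain at the locally running Ollama server (http://localhost:11434 by default)
llm = Ollama(model="mistral")

# Invoke the local Mistral model through LangChain's standard Runnable interface
print(llm.invoke("Summarize what Ollama does in one sentence."))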



Written by krishankant singhal

Angular, Vue.js, Android, Java, and Git developer. I’m a nerd who wants to learn new technologies and go in depth.
