Ollama
The OllamaEmbeddings class uses the /api/embeddings route of a locally hosted Ollama server to generate embeddings for given texts.
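Under the hood this wraps a simple HTTP call. As a minimal sketch (assuming Node 18+ with global fetch, a local Ollama server on the default port, and the llama2 model already pulled), the raw request looks roughly like this:

// Minimal sketch of the raw call that OllamaEmbeddings wraps.
const response = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "llama2", prompt: "Hello World!" }),
});
const { embedding } = await response.json(); // embedding: number[]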
Setup
Follow these instructions to set up and run a local Ollama instance.
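Once the server is running, pull the model you plan to embed with; the examples below assume the default llama2 model:

ollama pull llama2

Then install the @langchain/community integration package with your preferred package manager: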
- npm: npm install @langchain/community
- Yarn: yarn add @langchain/community
- pnpm: pnpm add @langchain/community
Usage
Basic usage:
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
});
Ollama model parameters can also be passed through the requestOptions field:
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
  requestOptions: {
    useMMap: true, // use_mmap 1
    numThread: 6, // num_thread 6
    numGpu: 1, // num_gpu 1
  },
});
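The camelCase keys in requestOptions map to Ollama's snake_case model parameters (for example, useMMap becomes use_mmap) when the request is sent.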
A complete example embedding multiple documents:
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
  requestOptions: {
    useMMap: true,
    numThread: 6,
    numGpu: 1,
  },
});
const documents = ["Hello World!", "Bye Bye"];
const documentEmbeddings = await embeddings.embedDocuments(documents);
console.log(documentEmbeddings);
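To embed a single string, such as a search query, use embedQuery, which returns one vector instead of an array of vectors:

import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
});

// embedQuery returns a single vector (number[]) for one input string.
const queryEmbedding = await embeddings.embedQuery("Hello World!");
console.log(queryEmbedding.length); // dimensionality depends on the model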
Related
- Embedding model conceptual guide
- Embedding model how-to guides