Local Deployment of Large Language Models with Ollama

Summary
This post provides a comprehensive guide on deploying large language models locally using Ollama on Windows. It covers the entire process from environment setup, including installation and configuration of model storage paths, to model deployment and execution. Readers will learn how to pull models, customize configurations, and start inference with Ollama commands. The guide emphasizes the importance of selecting models that fit within hardware limitations for optimal performance. By following this step-by-step approach, users can effectively run and test large language models on their local machines.

1. Environment Setup

1.1 Install Ollama

  • Visit the Ollama official website (https://ollama.com) to download the installer for Windows
  • Proceed with the default installation settings; the installer configures the required environment variables automatically

1.2 Verify Installation

ollama --version
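
If the installation succeeded, this prints the installed version. The output looks roughly like the line below; the exact version number will differ on your machine:

ollama version is 0.5.7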

1.3 Configure Model Storage Path

Default Path

C:\Users\<Username>\.ollama\models

Custom Storage Location

  1. Open System Properties → Advanced → Environment Variables
  2. Create a new system variable (or set it from the command line, as shown after these steps):
    • Name: OLLAMA_MODELS
    • Value: D:\OllamaModels\ (custom path)
  3. Restart the Ollama service so the new storage path takes effect
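
For a scriptable alternative to the GUI steps above, the same system variable can be set from an elevated PowerShell session; a minimal sketch, reusing the example path D:\OllamaModels\ from above:

# Run as Administrator: the "Machine" scope writes a system-wide environment variable
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", "D:\OllamaModels\", "Machine")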

1.4 Command Cheat Sheet

Command                Description
ollama pull <model>    Download a model
ollama run <model>     Start a model instance and begin inference
ollama list            List installed models
ollama rm <model>      Remove a model
ollama stop <model>    Terminate a running model instance
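
Putting the cheat sheet together, a typical end-to-end session looks like the following sketch (using the deepseek-r1:1.5b model deployed later in this guide):

ollama pull deepseek-r1:1.5b    # download the model weights
ollama list                     # confirm the model is installed
ollama run deepseek-r1:1.5b     # start an interactive inference session
ollama stop deepseek-r1:1.5b    # terminate the running instance
ollama rm deepseek-r1:1.5b      # remove the model from disk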

2. Model Deployment

2.1 Pull a Model

When selecting a model to download, consider your hardware limits. As a rule of thumb, the model's download size should fit within your available GPU VRAM for fully accelerated inference; larger models are offloaded partly or wholly to system RAM and run markedly slower. Choose the smallest variant that meets your quality needs, then scale up if your hardware allows.

ollama pull deepseek-r1:1.5b
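
The tag after the colon selects the parameter count. If your GPU has enough VRAM, larger variants of the same model are pulled the same way; deepseek-r1:7b is used here as an example tag:

ollama pull deepseek-r1:7b    # larger variant; needs correspondingly more VRAM/RAM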

2.2 Create Model Configuration (Optional)

Create a new Modelfile that customizes the model's parameters:

FROM deepseek-r1
PARAMETER temperature 0.7

Then build a named model from the file:

ollama create my-deepseek -f Modelfile
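
A Modelfile can carry more than a sampling temperature. The sketch below adds a context-window setting and a system prompt (the prompt text is illustrative); SYSTEM and PARAMETER are standard Modelfile instructions:

FROM deepseek-r1:1.5b
PARAMETER temperature 0.7    # higher values give more varied output
PARAMETER num_ctx 4096       # context window size, in tokens
SYSTEM You are a concise assistant that answers in plain English.

Once built with ollama create, the customized model runs like any other: ollama run my-deepseek.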

2.3 Available Models

Browse the full catalog of models and their available tags (parameter sizes and quantizations) at:

https://ollama.com/search

3. Execution

3.1 Start the Model

ollama run deepseek-r1:1.5b
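
This opens an interactive chat session (type /bye to exit). A prompt can also be passed directly for a one-shot answer; the question here is just an example:

ollama run deepseek-r1:1.5b "Why is the sky blue?"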

3.2 Test the Model
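
The interactive prompt is the quickest test, but the Ollama background service also exposes an HTTP API on localhost:11434, which is useful for scripted checks. A minimal sketch using curl.exe from PowerShell ("stream": false requests a single complete JSON response rather than a token stream; the prompt is illustrative):

curl.exe http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Explain what a large language model is in one sentence.",
  "stream": false
}'

If the model is running, the reply is a JSON object whose response field contains the generated text.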