Local Deployment of Large Language Models with Ollama

Summary
This post provides a comprehensive guide on deploying large language models locally using Ollama on Windows. It covers the entire process from environment setup, including installation and configuration of model storage paths, to model deployment and execution. Readers will learn how to pull models, customize configurations, and start inference with Ollama commands. The guide emphasizes the importance of selecting models that fit within hardware limitations for optimal performance. By following this step-by-step approach, users can effectively run and test large language models on their local machines.
Workflow: Install Ollama → Configure Storage Path → Pull Model → Create Modelfile → Run Inference

Environment Setup

Install Ollama

  • Visit the Ollama official website (https://ollama.com) to download the installer for Windows
  • Proceed with the default installation settings, which will automatically configure the required environment variables

Verify Installation

ollama --version
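
You can also confirm that the Ollama background service is running; it listens on port 11434 by default and answers a plain GET request with a short status message ("Ollama is running"):

curl http://localhost:11434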

Configure Model Storage Path

Default Path

C:\Users\<Username>\.ollama\models

Custom Storage Location

  1. Open System Properties → Advanced → Environment Variables
  2. Create a new system variable:
    • Name: OLLAMA_MODELS
    • Value: D:\OllamaModels\ (custom path)
  3. Restart the Ollama service so the new location takes effect (a command-line alternative is shown below)
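
Alternatively, the same variable can be set from an elevated Command Prompt; a minimal sketch, assuming the D:\OllamaModels path from step 2 (the /M flag writes a system-wide variable and requires administrator rights):

setx OLLAMA_MODELS "D:\OllamaModels" /M

Like the GUI route, this only affects processes started afterwards, so restart Ollama before pulling models.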

Command Cheat Sheet

Command                Description
ollama pull <model>    Download a model
ollama run <model>     Start a model and begin inference
ollama list            List installed models
ollama rm <model>      Remove a model
ollama stop <model>    Stop a running model instance

Model Deployment

Pull a Model

When selecting a model to download, consider your hardware limitations: the model’s weights must fit within your available GPU VRAM (or system RAM when running on the CPU). As a rough rule of thumb, a 4-bit quantized model occupies on the order of 0.5–1 GB per billion parameters, so the 1.5B model used below runs comfortably on most machines, while larger variants need correspondingly more memory.

ollama pull deepseek-r1:1.5b
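
Once the pull completes, verify that the model is available locally:

ollama list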

Create Model Configuration (Optional)

Create a new Modelfile to customize model parameters. For example, the following Modelfile derives a model from the base model pulled above and sets the sampling temperature (note the syntax is PARAMETER, with the name and value separated by a space):

FROM deepseek-r1:1.5b
PARAMETER temperature 0.7

Then apply the configuration using:

ollama create my-deepseek -f Modelfile
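
The customized model can then be started under its new name:

ollama run my-deepseek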

Available Models

Browse the full catalog of models and tags at:

https://ollama.com/search

Execution

Start the Model

ollama run deepseek-r1:1.5b

Test the Model
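
Running the command above drops you into an interactive prompt where you can type a question and watch the model stream back its answer. Ollama also exposes a local REST API; a minimal sketch of a scripted test, assuming the default port 11434 and the deepseek-r1:1.5b model pulled above (run it from a Command Prompt, since Windows PowerShell aliases curl to a different command):

curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"

The JSON response contains the generated text in its "response" field, which makes this endpoint convenient for quick smoke tests.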