Local Deployment of Large Language Models with Ollama

Summary
Want to run LLMs locally on Windows? This guide walks through the complete Ollama workflow, from installation and storage-path configuration to model deployment and inference, so you can have a private AI up and running in minutes.
Workflow: Install Ollama → Configure Storage Path → Pull Model → Create Modelfile → Run Inference

Environment Setup

Install Ollama

  • Visit the official Ollama website (https://ollama.com) and download the Windows installer
  • Proceed with the default installation settings; the installer configures the required environment variables automatically

Verify Installation

ollama --version

Configure Model Storage Path

Default Path

C:\Users\<Username>\.ollama\models

Custom Storage Location

  1. Open System Properties → Advanced → Environment Variables
  2. Create a new system variable:
    • Name : OLLAMA_MODELS
    • Value : D:\OllamaModels\ (custom path)
  3. Restart the Ollama service (a command-line alternative is sketched below)
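If you prefer the command line, the same system-level variable can be set from an elevated Command Prompt (a minimal sketch using the illustrative path from above; the /M switch requires administrator rights):

setx OLLAMA_MODELS "D:\OllamaModels" /M

As with the GUI method, restart the Ollama service afterwards so the new path takes effect.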

Command Cheat Sheet

Command                Description
ollama pull <model>    Download a model
ollama run <model>     Start a model and open an interactive inference session
ollama list            List installed models
ollama rm <model>      Remove a model
ollama stop <model>    Stop a running model instance
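Recent Ollama releases also include a command for listing the models currently loaded in memory (assuming your installed version provides it):

ollama ps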

Model Deployment

Pull a Model

When selecting a model to download, consider your hardware: the model’s weights must fit within your available GPU VRAM (or system RAM when running on the CPU), so choose a size that matches your machine’s capabilities.
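If you have an NVIDIA GPU, a quick way to check how much VRAM is free before downloading is the driver’s built-in utility (this assumes the NVIDIA driver is installed; integrated or AMD GPUs need a different tool):

nvidia-smi

Once you are confident the model will fit, pull it: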

ollama pull deepseek-r1:1.5b

Create Model Configuration (Optional)

Create a file named Modelfile to customize model parameters. For example, to base a custom model on the tag pulled above and set the sampling temperature:

FROM deepseek-r1:1.5b
PARAMETER temperature 0.7

Then build the custom model from this configuration:

ollama create my-deepseek -f Modelfile
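A Modelfile can also carry a system prompt and further sampling parameters (a minimal sketch; the prompt text and values are illustrative):

FROM deepseek-r1:1.5b
SYSTEM """You are a concise technical assistant."""
PARAMETER temperature 0.7
PARAMETER top_p 0.9

After ollama create completes, the customized model starts like any other:

ollama run my-deepseek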

Available Models

https://ollama.com/search
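Each model page lists its available size tags; a specific variant is pulled with the <model>:<tag> form, for example (assuming the tag is still published for the deepseek-r1 family):

ollama pull deepseek-r1:7b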

Execution

Start the Model

ollama run deepseek-r1:1.5b

Test the Model
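Running the command above drops you into an interactive prompt where you can type a question directly; enter /bye to exit the session. While a model is loaded, Ollama also serves a local REST API on port 11434, which you can query from another terminal (a minimal sketch; the prompt text is illustrative, and curl ships with recent Windows builds):

curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"Explain what a context window is in one sentence.\", \"stream\": false}"

With "stream": false the reply arrives as a single JSON object, with the generated text in its response field.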