LLM Providers

Local vs Cloud LLMs

Choosing between local and cloud LLMs involves tradeoffs. Cloud models are easier to start with and can provide strong quality. Local models give you more control and can keep prompts and notebook context on your own machine.

Comparison table

| Factor | Local LLMs with Ollama | Cloud LLMs with OpenAI | Ollama Cloud |
|---|---|---|---|
| Privacy | Best when model runs on your machine | Depends on cloud provider policy and your configuration | Depends on endpoint owner and network setup |
| Setup effort | Install Ollama and download models | Add API key and model name | Add Ollama API key and model name |
| Hardware | Uses your CPU, RAM, and GPU if available | No local model hardware required | Uses remote infrastructure |
| Speed | Depends on local hardware and model size | Usually consistent, depends on API and network | Depends on cloud endpoint and network latency |
| Cost | No per-token API cost, but uses local compute | Provider API usage cost | Ollama Cloud usage cost |
| Best use case | Private local notebooks and sensitive data | High-quality cloud AI assistance | Large remote models without local hardware limits |

Recommendations

  • Use MLJAR AI if you want no setup and are in trial or have the MLJAR AI subscription add-on.
  • Use OpenAI if your team already uses OpenAI and can provide an API key and model name.
  • Use Ollama local if data privacy and local execution are the priority and your model is running locally.
  • Use Ollama Cloud if local hardware is limiting and you have an Ollama API key and model name.
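In practice, each choice above boils down to a base URL, a model name, and (for cloud options) an API key. The sketch below illustrates that mapping; the function name, the Ollama Cloud URL, and the environment-variable names are illustrative assumptions rather than an official API, so check each provider's documentation before relying on them.

```python
def make_provider_config(provider: str, model: str) -> dict:
    """Map a provider choice to a minimal client configuration.

    Illustrative sketch only: URLs and env-var names are assumptions,
    except OpenAI's public API endpoint.
    """
    if provider == "ollama-local":
        # A local Ollama install typically serves an OpenAI-compatible
        # API on localhost; no API key is needed.
        return {"base_url": "http://localhost:11434/v1",
                "api_key_env": None, "model": model}
    if provider == "openai":
        return {"base_url": "https://api.openai.com/v1",
                "api_key_env": "OPENAI_API_KEY", "model": model}
    if provider == "ollama-cloud":
        # Assumed endpoint for Ollama's hosted service.
        return {"base_url": "https://ollama.com",
                "api_key_env": "OLLAMA_API_KEY", "model": model}
    raise ValueError(f"unknown provider: {provider}")

# Example: a privacy-focused local setup needs no API key at all.
config = make_provider_config("ollama-local", "llama3.1")
print(config["base_url"])
```

Keeping the provider choice behind a single configuration like this makes it easy to switch between local and cloud backends without changing the rest of your notebook workflow.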

Related pages

For setup instructions, read OpenAI Integration or Ollama Local Setup.