# LLM Providers
## Local vs Cloud LLMs
Choosing between local and cloud LLMs is a trade-off. Cloud models are easier to start with and generally provide strong quality. Local models give you more control and can keep prompts and notebook context entirely on your own machine.
## Comparison table
| Factor | Local LLMs with Ollama | Cloud LLMs with OpenAI | Ollama Cloud |
|---|---|---|---|
| Privacy | Best when model runs on your machine | Depends on cloud provider policy and your configuration | Depends on endpoint owner and network setup |
| Setup effort | Install Ollama and download models | Add API key and model name | Add Ollama API key and model name |
| Hardware | Uses your CPU, RAM, and GPU if available | No local model hardware required | Uses remote infrastructure |
| Speed | Depends on local hardware and model size | Usually consistent, depends on API and network | Depends on cloud endpoint and network latency |
| Cost | No per-token API cost, but uses local compute | Provider API usage cost | Ollama Cloud usage cost |
| Best use case | Private local notebooks and sensitive data | High-quality cloud AI assistance | Large remote models without local hardware limits |
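The cost row above can be made concrete with a rough per-token estimate. The sketch below uses hypothetical prices and usage numbers; real provider rates vary and change, so check the provider's pricing page before budgeting:

```python
def monthly_api_cost(prompts_per_day: int,
                     tokens_per_prompt: int,
                     price_per_million_tokens: float) -> float:
    """Rough monthly cost of cloud API usage.

    All inputs are hypothetical examples, not real provider rates.
    Assumes a 30-day month and a single flat per-token price.
    """
    tokens_per_month = prompts_per_day * tokens_per_prompt * 30
    return tokens_per_month * price_per_million_tokens / 1_000_000

# Example: 50 prompts/day, 2,000 tokens each, at a hypothetical $2 per 1M tokens.
# 50 * 2,000 * 30 = 3,000,000 tokens per month.
cost = monthly_api_cost(50, 2000, 2.0)
print(f"${cost:.2f} per month")  # prints "$6.00 per month"
```

For local Ollama the per-token cost is zero, but the equivalent trade is hardware, electricity, and generation speed on your machine.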
## Recommendations
- Use MLJAR AI if you want zero setup and are on a trial or have the MLJAR AI subscription add-on.
- Use OpenAI if your team already uses OpenAI and can provide an API key and model name.
- Use local Ollama if data privacy and local execution are the priority and you can run a model on your own machine.
- Use Ollama Cloud if local hardware is the limiting factor and you have an Ollama API key and model name.
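The recommendations above amount to a first-match decision procedure, which can be sketched as a small helper. This function is purely illustrative, not part of MLJAR Studio; its inputs and provider names are assumptions for the sketch:

```python
def pick_provider(wants_zero_setup: bool,
                  privacy_critical: bool,
                  has_capable_hardware: bool,
                  has_openai_key: bool,
                  has_ollama_key: bool) -> str:
    """Mirror the recommendation list: return the first matching provider.

    Hypothetical helper for illustration only.
    """
    if wants_zero_setup:
        return "MLJAR AI"
    # Privacy takes priority over cloud options when local hardware suffices.
    if privacy_critical and has_capable_hardware:
        return "Ollama local"
    if has_openai_key:
        return "OpenAI"
    if has_ollama_key:
        return "Ollama Cloud"
    # Fall back to the zero-setup option.
    return "MLJAR AI"

print(pick_provider(False, True, True, True, False))  # prints "Ollama local"
```

The ordering encodes the same priorities as the list: no-setup first, then privacy with capable hardware, then whichever cloud credentials are available.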
## Related pages
For setup instructions, read OpenAI Integration or Ollama Local Setup.