Creating a fully local and private AI assistant in Obsidian
Core Idea: A fully private Obsidian AI assistant can be created by combining the Obsidian Copilot plugin with locally-run AI models and embedding services, eliminating the need to share personal data with cloud services.
Key Elements
Components Required
- Obsidian Copilot Plugin as the interface
- Ollama for downloading and running models locally (a local model server; the name is not an acronym)
- Open-source LLM (like DeepSeek R1) for text generation
- Open-source embedding model (like BGE M3) for note similarity
Setup Process
- Install the Obsidian Copilot plugin through the Community plugins browser
- Download and install Ollama (available for Windows, macOS, and Linux; a Linux install command is shown below)
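On Linux, Ollama ships an official install script; Windows and macOS users download an installer from ollama.com instead:

```shell
# Official Linux install script from the Ollama docs;
# on Windows/macOS, use the installer from https://ollama.com/download.
curl -fsSL https://ollama.com/install.sh | sh
```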
- Download the required AI models through Ollama (pull commands below)
  - DeepSeek R1 for text generation (size based on available RAM)
  - BGE M3 for embeddings (approximately 1 GB)
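Both models are available in the Ollama library; a sketch of the pull commands, assuming the 14B DeepSeek tag fits the machine's RAM:

```shell
# Text-generation model; pick the tag (7b/8b/14b) that matches available RAM.
ollama pull deepseek-r1:14b

# Embedding model used by Copilot for note similarity (~1 GB download).
ollama pull bge-m3
```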
- Configure the context window for the local model (Ollama exposes this as the num_ctx parameter; 128k is 131072 tokens). One way is from an interactive session, as shown here, or persistently via a Modelfile (sketched below)
ollama run <model_name>
/set parameter num_ctx 131072
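To make the larger context persistent, a variant model can be created from a Modelfile. A minimal sketch, assuming the deepseek-r1:14b tag; the name deepseek-r1-128k is just an illustrative choice:

```shell
# Write a Modelfile that bases a new model on deepseek-r1:14b
# and raises its context window to 131072 tokens (128k).
cat > Modelfile <<'EOF'
FROM deepseek-r1:14b
PARAMETER num_ctx 131072
EOF

# Register the variant under a new name, then select it in Copilot.
ollama create deepseek-r1-128k -f Modelfile
```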
- Start the Ollama server with
ollama serve
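By default the server listens on http://localhost:11434, which is the address Copilot connects to. A quick check that it is up:

```shell
# Lists locally installed models as JSON; any response confirms the server is running.
curl http://localhost:11434/api/tags
```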
- Configure Obsidian Copilot to use the local models
  - Add custom models under the plugin's settings
  - Set the provider to "Ollama"
  - Verify the connections (a manual check is sketched below)
- Reindex the vault with the new embedding model
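If Copilot reports a connection failure, the two endpoints it relies on can be exercised by hand. A minimal sketch, assuming the model tags pulled earlier:

```shell
# Chat/completion check against the text-generation model (non-streaming).
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:14b", "prompt": "Say hello.", "stream": false}'

# Embedding check against the model Copilot uses for vault indexing.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "bge-m3", "prompt": "test sentence"}'
```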
Performance Considerations
- Choose the model size based on available RAM: the 7B/8B/14B variants suit a 16 GB machine, since a 4-bit-quantized 14B model occupies roughly 9 GB
- An expanded context window (128k) lets the model read more notes at once, but it increases memory use and can slow responses
- Local models generally respond more slowly than cloud services; a quick throughput check is sketched below
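Actual speed varies widely with hardware, and ollama run can report it directly. A quick benchmark, assuming the deepseek-r1:14b tag:

```shell
# --verbose prints load time, eval counts, and tokens/second after the response,
# showing how the chosen model size performs on this machine.
ollama run deepseek-r1:14b --verbose "Summarize this setup in one sentence."
```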
Privacy Benefits
- Notes and queries never leave your personal device
- Complete control over data storage and processing
- No API keys or subscriptions required
Additional Connections
- Broader Context: Privacy-Focused Knowledge Management (philosophical approach)
- Applications: Local LLM Implementation (technical implementation)
- See Also: AI Embedding Models (complementary technology), Ollama (model management tool)
References
- Tony's demonstration video transcript
- Ollama and DeepSeek documentation
#privacy #local-ai #obsidian #knowledge-management #self-hosted