#atom

Creating a fully local and private AI assistant in Obsidian

Core Idea: A fully private Obsidian AI assistant can be created by combining the Obsidian Copilot plugin with locally-run AI models and embedding services, eliminating the need to share personal data with cloud services.

Key Elements

Components Required

  1. Obsidian Copilot Plugin as the interface
  2. Ollama for downloading and serving models locally
  3. Open-source LLM (like DeepSeek R1) for text generation
  4. Open-source embedding model (like BGE M3) for note similarity
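How the embedding model supports note similarity can be sketched in a short script. This is a minimal sketch, assuming Ollama's default local endpoint at http://localhost:11434, its /api/embeddings route, and bge-m3 as the pulled model tag; Copilot does the equivalent internally when it indexes the vault:

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local endpoint (assumption)

def embed(text: str, model: str = "bge-m3") -> list[float]:
    """Ask the local Ollama server for an embedding vector of `text`."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score in [-1, 1]; notes with higher scores are more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Example (requires `ollama serve` running and `ollama pull bge-m3` done):
# score = cosine_similarity(embed("atomic notes"), embed("zettelkasten method"))
```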

Setup Process

  1. Install Obsidian Copilot plugin through community plugins
  2. Download and install Ollama (available for Windows, macOS, and Linux)
  3. Download appropriate AI models through Ollama
    • DeepSeek R1 for text generation (pick a parameter size that fits available RAM)
    • BGE M3 for embeddings (approximately 1GB)
  4. Raise the local model's context window by setting num_ctx in a Modelfile (128k ≈ 131,072 tokens), then building a tagged variant with ollama create:

# Modelfile
FROM <model_name>
PARAMETER num_ctx 131072

ollama create <model_name>-128k -f Modelfile

  5. Start the Ollama server with ollama serve
  6. Configure Obsidian Copilot to use the local models
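Once the server is running, Copilot talks to it over Ollama's local HTTP API. The same single-turn chat call can be sketched as below, assuming the default port 11434, the /api/chat endpoint, and deepseek-r1 as the pulled model tag:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Payload shape for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response instead of a stream
    }

def chat(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a single-turn chat request to the local Ollama server."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (requires `ollama serve` and `ollama pull deepseek-r1` done):
# print(chat("deepseek-r1", "Summarize my note on spaced repetition."))
```

Because the host defaults to localhost, the request never leaves the machine; pointing Copilot at the same address is what keeps the whole pipeline private.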

Performance Considerations

Inference speed and the largest usable DeepSeek R1 variant are bounded by available RAM (and GPU memory, if any); smaller variants trade some output quality for responsiveness on modest hardware.

Privacy Benefits

All prompts, notes, and embeddings are processed on the local machine; nothing is sent to a cloud service, so vault contents stay private.

Additional Connections

References

  1. Tony's demonstration video transcript
  2. Ollama and DeepSeek documentation

#privacy #local-ai #obsidian #knowledge-management #self-hosted
