Local AI use cases

Examples of what the full Local AI stack—Mac Mini, Ollama, Llama models, Open WebUI, Obsidian, and local RAG—can do for teams once it is installed.

These are not generic AI ideas—they assume you have the full Local AI stack running:

  • Mac Mini with Llama models hosted via Ollama.
  • Open WebUI as the shared browser interface.
  • Obsidian as the local knowledge base with Smart Connections, Text Generator, Copilot, and Canvas.
  • Local RAG indexing PDFs, docs, and project notes.
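Every scenario below ultimately rests on the same local call: Open WebUI (or any script on the network) sends a chat request to Ollama running on the Mac Mini, and nothing leaves the machine. A minimal sketch of that request, assuming Ollama's default port 11434 and a hypothetical `llama3.1` model tag:

```python
import json

# Ollama listens on localhost:11434 by default; the request never leaves your network.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_chat_request(question: str, model: str = "llama3.1") -> str:
    """Build the JSON body for a single-turn chat request to Ollama."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,  # return one complete answer instead of a token stream
    }
    return json.dumps(payload)

body = build_chat_request("What is our refund policy for EU customers?")
# Send it with any HTTP client, e.g.:
#   curl http://localhost:11434/api/chat -d "$body"
```

Open WebUI makes the same kind of call behind its browser interface; the sketch just makes the wire format visible.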

1. Internal knowledge helpdesk

Staff ask questions in Open WebUI like “What is our refund policy for EU customers?” The system searches local PDFs, policy docs in Obsidian, and past meeting notes, then answers with citations—without any data leaving your network.

Stack: Llama model via Ollama, local RAG service, Open WebUI, Obsidian vault of policies and procedures.
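Under the hood this is plain retrieval-augmented generation: embed the question, rank the indexed chunks by similarity, and hand the top hits to the model with source tags so it can cite them. A toy sketch of the ranking and prompt-assembly step — the `embed` function here is a bag-of-words stand-in; the real stack would get dense vectors from Ollama's `/api/embeddings` endpoint:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words counts.
    The real stack would call Ollama's /api/embeddings for dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_cited_prompt(question: str, chunks: dict[str, str], k: int = 2) -> str:
    """Rank indexed chunks against the question, keep the top k,
    and assemble a prompt that forces source citations."""
    q = embed(question)
    ranked = sorted(chunks.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    context = "\n".join(f"[{src}] {text}" for src, text in ranked[:k])
    return f"Answer using only these sources, citing them by name:\n{context}\n\nQ: {question}"

chunks = {
    "refund-policy.pdf": "EU customers may request a refund within 14 days of purchase.",
    "meeting-2024-03.md": "Agreed to review the refund window next quarter.",
    "travel-policy.pdf": "Travel bookings require manager approval.",
}
prompt = build_cited_prompt("What is our refund policy for EU customers?", chunks)
```

The irrelevant travel policy never reaches the model, which is what keeps answers grounded in the right documents.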

2. Board & leadership briefings

Before a board meeting, you drop the latest reports and meeting notes into Obsidian. Local AI summarises risks, highlights trends, and produces a one-page brief tailored to your leadership team’s language.

Stack: Obsidian + Smart Connections + Text Generator with prompts tuned for executive summaries.

3. Legal & compliance review

A legal team needs AI assistance but cannot send contracts to cloud providers. Local AI runs on a Mac Mini in their office, indexes contracts and policies, and helps check clauses, generate comparison tables, and draft redlines, while all documents stay on-prem.

Stack: Llama reasoning model, Obsidian vault with contract templates, RAG over DOCX/PDF, Open WebUI for interactive review.

4. Sales playbook & proposal generator

Sales teams use Obsidian to store playbooks, case studies, and pricing notes. In Open WebUI they run prompts like “Draft a 2-page proposal for a 50-seat Local AI deployment in legal” and the system pulls relevant examples from the vault to produce a first draft.

Stack: Obsidian + Text Generator templates for proposals, Llama model tuned for business writing, RAG over case studies.

5. Project & research synthesis

For complex projects, teams use Obsidian Canvas to map documents, links, and tasks visually. Local AI suggests missing connections, summarises areas of the canvas, and generates “what we know so far” documents.

Stack: Obsidian Canvas, Smart Connections, Copilot sidebar; Llama reasoning model for synthesis.

6. Meeting capture & task extraction

Meeting notes land in Obsidian. Local AI extracts action items, owners, and deadlines into a structured task list, then posts that list back into your note or sends it to your project tracker.

Stack: Obsidian + Text Generator task templates, Copilot quick commands, optional integration to external PM tools.
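One way to make the extraction step reliable is to ask the model for JSON and validate it before anything is written back into the vault. A sketch, assuming the prompt instructs the model to reply with a JSON array of `{task, owner, due}` objects (the field names are illustrative, not a fixed schema):

```python
import json

EXTRACTION_PROMPT = (
    "Extract every action item from the meeting notes below. "
    'Reply with only a JSON array of objects with keys "task", "owner", "due". '
    "Use null for a missing owner or deadline.\n\nNotes:\n"
)

def parse_tasks(model_reply: str) -> list[dict]:
    """Validate the model's reply before it touches the note or project tracker."""
    tasks = json.loads(model_reply)
    if not isinstance(tasks, list):
        raise ValueError("expected a JSON array of tasks")
    for t in tasks:
        missing = {"task", "owner", "due"} - t.keys()
        if missing:
            raise ValueError(f"task missing fields: {missing}")
    return tasks

def to_markdown(tasks: list[dict]) -> str:
    """Render validated tasks as Obsidian-style checkboxes for the meeting note."""
    return "\n".join(f"- [ ] {t['task']} ({t['owner']}, due {t['due']})" for t in tasks)

# Example of a well-formed model reply:
reply = '[{"task": "Send revised quote", "owner": "Sam", "due": "2025-06-01"}]'
tasks = parse_tasks(reply)
```

Validating before writing means a malformed reply raises an error instead of silently corrupting the task list.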

7. Dev & ops runbooks

Engineering teams keep runbooks and incident reports in Obsidian. When a new incident happens, Local AI searches past incidents and suggests checklists, commands, and log queries. The AI Dev Suite's Debugger and Observer tools show live logs from the Local AI stack itself.

Stack: Obsidian vault of runbooks, Open WebUI, AI Dev Suite Debugger / Observer, RAG over logs and incident notes.

8. Staff training & onboarding

HR and team leads maintain onboarding guides and FAQs in Obsidian. New hires use Open WebUI or Copilot to ask questions like “How do I request time off?” and receive answers backed by company documentation.

Stack: Obsidian docs, Smart Connections for related policies, Llama chat model, Open WebUI for a friendly interface.

9. Content & marketing production

Marketing stores campaigns, posts, and messaging guidelines in Obsidian. Text Generator templates produce drafts for newsletters, posts, and landing pages aligned with previous campaigns—without sending drafts to external tools.

Stack: Obsidian + Text Generator, Llama writing model, prompt templates tuned for your tone of voice.

10. Research lab & R&D sandbox

A technical team uses the Mac Mini as an R&D box for trying new models and prompts. They keep experiment logs in Obsidian, run different Llama variants through Ollama, and use Open WebUI to compare outputs side by side—all inside a controlled, air-gapped environment.

Stack: Multiple Llama models via Ollama, Open WebUI workspaces, Obsidian vault for experiment notes, RAG for aggregating results.
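Side-by-side comparison reduces to sending the same prompt to each installed model and laying the replies out together. A sketch that builds one `/api/generate` request body per model — the model tags are examples; in practice you would use whatever `ollama list` shows on your machine:

```python
import json

# Example tags; substitute the models actually pulled onto the Mac Mini.
MODELS = ["llama3.1:8b", "llama3.2:3b"]

def comparison_requests(prompt: str, models: list[str]) -> list[str]:
    """One Ollama /api/generate body per model, so replies can be
    collected and compared side by side."""
    return [
        json.dumps({"model": m, "prompt": prompt, "stream": False})
        for m in models
    ]

bodies = comparison_requests("Summarise our refund policy in one sentence.", MODELS)
```

Each body can be posted to `http://localhost:11434/api/generate` and the answers logged into the Obsidian experiment note alongside the model tag.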

Design your own setup

If one or more of these scenarios fits your organisation, we can design a Local AI plan that matches your tools, data, and constraints—then turn it into a concrete proposal after we’ve mapped the scope together.

Talk about Local AI use cases