TL;DR

- Install via `pipx install llm` (recommended) or `pip install llm`
- Configure API keys in `~/.llm/config` or via `llm keys set`
- Run prompts with `llm "Explain quantum computing"`
- Extend with plugins like `llm-ollama` or `llm-claude`
- Store conversations in SQLite for audit trails ([AI Security](https://hyperion-consulting.io/services/cybersecurity-for-ai) Posture Framework™ COMPLY phase)
- Pipe data from files or commands for workflow automation
1. Installation
LLM works on macOS, Linux, and Windows (via WSL). Use pipx to avoid dependency conflicts:
```bash
# Install pipx if you don't have it
python3 -m pip install --user pipx
python3 -m pipx ensurepath

# Install LLM
pipx install llm

# Verify
llm --version
# Output: llm, version 0.15.0
```
Gotcha: If you get `command not found`, make sure `~/.local/bin` is on your `PATH`.
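If you'd rather script that check, here is a small POSIX-sh sketch; the `ensure_dir_on_path` helper is illustrative, not part of pipx or llm. It appends `~/.local/bin` only when it is not already present:

```bash
# ensure_dir_on_path PATHVALUE DIR -> prints PATHVALUE with DIR appended,
# but only if DIR was not already one of its entries.
ensure_dir_on_path() {
  case ":$1:" in
    *":$2:"*) printf '%s\n' "$1" ;;   # already present: unchanged
    *)        printf '%s\n' "$1:$2" ;; # missing: append
  esac
}

# Apply it to the current session's PATH
PATH=$(ensure_dir_on_path "$PATH" "$HOME/.local/bin")
```

Running it twice is harmless, which makes it safe to drop into `~/.profile`.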
2. Configure API Keys
LLM supports 50+ models via plugins. First, set up API keys for your preferred providers:
```bash
# OpenAI (GPT-4, etc.)
llm keys set openai
# Paste your key when prompted

# Anthropic (Claude)
llm keys set anthropic
```
Config file location: `~/.llm/config`
Example config for multiple providers:
```yaml
# ~/.llm/config
models:
  - name: gpt4
    model_id: gpt-4-turbo
    api_key: ${OPENAI_API_KEY}
  - name: claude3
    model_id: claude-3-opus-20240229
    api_key: ${ANTHROPIC_API_KEY}
```
Pro Tip: Use environment variables for keys (AI Security Posture Framework™ PROTECT phase):
```bash
export OPENAI_API_KEY="sk-..."
llm "Summarize this document"  # llm picks the key up from the environment
```
3. Run Your First Prompt
Basic usage:
llm "Explain Kubernetes in 3 bullet points"
Expected output:
- Container orchestration platform for automating deployment, scaling, and management
- Uses declarative configuration (YAML) to define desired state
- Components include control plane (API server, scheduler) and worker nodes (kubelet, kube-proxy)
Model selection:
```bash
# List available models
llm models

# Use a specific model
llm "Write a Python function to parse JSON" -m claude3
```
4. Plugin Ecosystem
Install plugins to add model support:
```bash
# Ollama (local models) -- pull the model with the Ollama CLI first
llm install llm-ollama
ollama pull llama3
llm -m llama3 "Explain LLMs to a 5-year-old"

# Google Gemini
llm install llm-gemini
llm keys set gemini
llm -m gemini-1.5-pro "Analyze this dataset" < data.csv
```
Popular plugins:
| Plugin | Command | Use Case |
|---|---|---|
| `llm-ollama` | `llm ollama` | Local models (Llama, Mistral) |
| `llm-claude` | `llm -m claude3` | Anthropic models |
| `llm-embed` | `llm embed` | Generate embeddings |
| `llm-dump` | `llm dump` | Export conversations |
5. Conversation History & Templates
LLM stores all interactions in SQLite (AI Security Posture Framework™ COMPLY phase):
```bash
# List conversations
llm logs

# Continue the most recent conversation, or a specific one by ID
llm -c "Tell me more"
llm --cid CONVERSATION_ID "Tell me more"

# Save a prompt template (the prompt is saved, not run)
llm 'Explain $topic in simple terms' --save explain
# Use it, supplying the parameter with -p
llm -t explain -p topic blockchain
```
Database location: `~/.llm/logs.db`
Query it directly:
```bash
sqlite3 ~/.llm/logs.db "SELECT prompt, response FROM logs LIMIT 5"
```
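Note that the exact schema varies between `llm` versions (newer releases store rows in a `responses` table; run `.schema` inside `sqlite3` to check yours). You can dry-run an audit query against a throwaway database with the simplified `logs(prompt, response)` shape used above:

```bash
# Recreate the simplified logs(prompt, response) shape in a throwaway
# database, then pull the most recent rows for an audit report.
db="$(mktemp -d)/logs.db"
sqlite3 "$db" "CREATE TABLE logs (id INTEGER PRIMARY KEY, prompt TEXT, response TEXT);"
sqlite3 "$db" "INSERT INTO logs (prompt, response) VALUES ('explain DNS', 'DNS maps names to IPs');"
audit=$(sqlite3 "$db" "SELECT prompt || ' -> ' || response FROM logs ORDER BY id DESC LIMIT 5;")
echo "$audit"
rm -rf "$(dirname "$db")"
```

The same `SELECT` works unchanged against the real database once you confirm the table name.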
6. Piping Data & Shell Integration
Pipe data from files or commands:
```bash
# Analyze a file
llm "Summarize this code" < app.py

# Chain with other tools
curl -s https://api.github.com/repos/simonw/llm | llm "Extract top 3 features"

# Generate commit messages
git diff | llm "Write a concise commit message"
```
Gotcha: For large inputs, use `--no-stream` so the full response is returned in a single request:
```bash
llm --no-stream "Analyze this 10MB log file" < server.log
```
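`--no-stream` changes how the response is delivered, not how much input a model will accept, so very large files are better split into chunks first. A sketch of that pattern; `chunk_and_run` and the `LLM_BIN` switch are illustrative assumptions, with `cat` standing in for `llm` so the pipeline can be dry-run without spending tokens:

```bash
# Run a command over a large file in fixed-size line chunks.
# LLM_BIN=cat (the default here) dry-runs the plumbing; set LLM_BIN=llm
# to actually summarize each chunk.
chunk_and_run() {
  file=$1; prompt=$2; lines_per_chunk=${3:-2000}
  cmd=${LLM_BIN:-cat}
  dir=$(mktemp -d)
  split -l "$lines_per_chunk" "$file" "$dir/chunk_"
  for chunk in "$dir"/chunk_*; do
    if [ "$cmd" = cat ]; then
      cat < "$chunk"                        # dry run: pass the chunk through
    else
      "$cmd" --no-stream "$prompt" < "$chunk"
    fi
  done
  rm -rf "$dir"
}

# Example: LLM_BIN=llm chunk_and_run server.log "Summarize this log chunk" 2000
```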
7. Build Custom Workflows
Combine LLM with other tools for automation:
Example 1: Code Review Bot
```bash
#!/bin/bash
git diff | llm -m claude3 "Review this diff for security issues. Output as markdown."
```
Example 2: Automated Documentation
```bash
# Generate docs from docstrings
llm "Write Sphinx documentation for this Python file" < module.py > docs.rst
```
Example 3: AI Security Posture Framework™ DETECT Phase
```bash
# Scan recent requests for anomalies. llm reads stdin to EOF, so feed it
# a bounded slice of the log rather than a never-ending tail -f.
tail -n 500 /var/log/nginx/access.log | \
  llm "Flag suspicious HTTP requests. Output as CSV with columns: timestamp, ip, reason" \
  > security_alerts.csv
```
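Wrapper scripts like Example 1 are easier to develop if the model call can be skipped. A sketch with an assumed `DRY_RUN` switch (a convention of this script, not an `llm` flag) that shows the prompt instead of spending tokens:

```bash
#!/bin/sh
# review_diff: security-review a diff piped in on stdin.
# Set DRY_RUN=1 to print what would be sent instead of calling llm.
review_diff() {
  prompt="Review this diff for security issues. Output as markdown."
  if [ -n "${DRY_RUN:-}" ]; then
    cat > /dev/null                      # consume the diff
    printf 'would run: llm -m claude3 "%s"\n' "$prompt"
  else
    llm -m claude3 "$prompt"
  fi
}

# Usage: git diff | review_diff > review.md
```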
What's Next?

- Explore plugins: Install `llm-embed` to generate embeddings for your documents.
- Automate workflows: Create a script to analyze daily logs with `llm` + `cron`.
- Audit your usage: Query `~/.llm/logs.db` to review past interactions for compliance (AI Security Posture Framework™ COMPLY phase).
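The `llm` + `cron` idea can be sketched as a crontab entry; the user, schedule, and paths here are placeholders to adapt, not recommendations:

```
# /etc/cron.d/llm-log-digest -- daily 06:00 digest of yesterday's nginx errors
0 6 * * * deploy llm --no-stream "Summarize these errors" < /var/log/nginx/error.log.1 > /var/tmp/log-digest.txt 2>> /var/tmp/llm-cron.err
```

Point it at the rotated log (`error.log.1`) so each run sees a complete, bounded file.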
For teams looking to operationalize LLM with enterprise-grade security and scalability, Hyperion Consulting offers specialized AI tools consulting to implement solutions like this across your organization.
