## TL;DR

- Install Continue in VS Code or JetBrains in 2 minutes via the marketplace
- Connect to OpenAI, Anthropic, or Ollama (local) in `config.json`
- Use tab autocomplete and chat for code generation/editing
- Customize with slash commands (e.g., `/explain`, `/test`) and context providers (`@file`, `@docs`)
- Enterprise? Enable SSO, audit logs, and self-hosting for compliance
## 1. Installation

### VS Code

- Open VS Code.
- Press `Ctrl+Shift+X` (Windows/Linux) or `Cmd+Shift+X` (Mac) to open Extensions.
- Search for `Continue` and click Install.
- Expected: Sidebar icon appears (rocket emoji).
- Verify:

```shell
code --list-extensions | grep continue
# Output: continue.continue
```
### JetBrains (IntelliJ, PyCharm, etc.)

- Open Settings (`Ctrl+Alt+S` / `Cmd+,`).
- Navigate to Plugins > Marketplace.
- Search for `Continue` and click Install.
- Restart the IDE.
- Expected: Continue panel appears in the right sidebar.

Gotcha: On Windows, ensure WSL2 is installed if using local models:

```shell
wsl --install
```
## 2. Configure Models

Edit `~/.continue/config.json` (auto-created on first launch). Here's a production-ready template:
```json
{
  "models": [
    {
      "title": "OpenAI GPT-4 Turbo",
      "provider": "openai",
      "model": "gpt-4-turbo",
      "apiKey": "${OPENAI_API_KEY}"
    },
    {
      "title": "Anthropic Claude 3 Opus",
      "provider": "anthropic",
      "model": "claude-3-opus-20240229",
      "apiKey": "${ANTHROPIC_API_KEY}"
    },
    {
      "title": "Local Ollama (Llama3)",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local Ollama (CodeLlama)",
    "provider": "ollama",
    "model": "codellama"
  }
}
```
Key Fields:

- `apiKey`: Use `${ENV_VAR}` references for security (set the variables in `~/.bashrc` or `~/.zshrc`).
- `tabAutocompleteModel`: A separate model used only for completions, so they stay fast.
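Since the template above references `${OPENAI_API_KEY}` and `${ANTHROPIC_API_KEY}`, export those variables before launching the IDE. A minimal sketch (the key values below are placeholders, not real keys):

```shell
# Append to ~/.bashrc or ~/.zshrc so Continue can expand ${ENV_VAR} references.
# The values here are placeholders -- substitute your actual keys.
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"

# Confirm both variables are visible before launching the IDE
[ -n "$OPENAI_API_KEY" ] && [ -n "$ANTHROPIC_API_KEY" ] && echo "API keys exported"
```

Launch the editor from the same shell (e.g., `code .`) so the extension host inherits the exported variables.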
Verify Models:
- Open Continue sidebar.
- Click the model dropdown (top-right).
- Expected: All configured models appear.
Common Errors:

- 401 Unauthorized: Check your API keys (`echo $OPENAI_API_KEY`).
- Ollama not found: Install Ollama first:

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3
```
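If the Ollama models silently fail in chat or autocomplete, the usual cause is that the daemon isn't running. A quick reachability sketch against Ollama's default API port (assumes `curl` is installed; `/api/tags` lists locally pulled models when the daemon is up):

```shell
# Probe the Ollama HTTP API on its default port (11434).
if curl -s --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  STATUS="Ollama is running"
else
  STATUS="Ollama is not reachable; start it with: ollama serve"
fi
echo "$STATUS"
```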
## 3. Tab Autocomplete Setup

Enable it in `config.json`:

```json
{
  "tabAutocompleteEnabled": true,
  "tabAutocompleteModel": {
    "title": "Ollama CodeLlama",
    "provider": "ollama",
    "model": "codellama"
  }
}
```
Test It:

- Open a Python file.
- Type `def hello_world(` and press `Tab`.
- Expected: Continue suggests a function body.

Pro Tip: For faster completions, use smaller models (e.g., `starcoder:1b`).
## 4. Chat and Inline Editing

### Chat Interface

- Open the Continue sidebar (`Ctrl+Shift+L` / `Cmd+Shift+L`).
- Type a prompt (e.g., "Explain this React component").
- Expected: A streaming response with code snippets.

### Inline Editing

- Highlight code.
- Press `Cmd+I` (Mac) / `Ctrl+I` (Windows/Linux).
- Type an instruction (e.g., "Add error handling").
- Expected: Code updates in place.
[AI Security](https://hyperion-consulting.io/services/cybersecurity-for-ai) Posture Framework™ (DETECT Phase):

- Enable audit logs in `config.json` to track LLM interactions:

```json
{
  "auditLogEnabled": true,
  "auditLogPath": "/var/log/continue/audit.log"
}
```

- Logs include timestamps, user IDs, and model inputs/outputs.
## 5. Context Providers

Use `@` symbols to reference context in chat:

| Provider | Example Usage | Description |
|---|---|---|
| `@file` | `@file src/utils.js` | Include file contents. |
| `@docs` | `@docs https://react.dev` | Fetch documentation. |
| `@codebase` | `@codebase` | Index the entire project (Pro/Enterprise). |
| `@terminal` | `@terminal ls -la` | Run shell commands. |

Example:

```
@file src/api.ts
Refactor this to use async/await instead of callbacks.
```

Gotcha: `@codebase` requires Pro/Enterprise for large projects.
## 6. Custom Slash Commands

Define reusable commands in `config.json`:

```json
{
  "customCommands": [
    {
      "name": "explain",
      "prompt": "Explain the following code in simple terms:\n{{code}}",
      "description": "Explain selected code"
    },
    {
      "name": "test",
      "prompt": "Write unit tests for:\n{{code}}\n\nUse Jest for JavaScript or pytest for Python.",
      "description": "Generate tests"
    }
  ]
}
```
Usage:

- Highlight code.
- Type `/explain` in the chat.
- Expected: A plain-English explanation.
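Under the hood, the `{{code}}` placeholder in a command's `prompt` is filled with your highlighted selection before the prompt reaches the model. A rough illustration of that substitution in plain shell (the variable names are demo-only, not part of Continue):

```shell
# Demo: expand a {{code}} placeholder into a final prompt, the way a
# custom slash command's template is combined with the selected code.
TEMPLATE='Explain the following code in simple terms:
{{code}}'
SELECTION='def add(a, b): return a + b'

# Replace the placeholder line with the selection
PROMPT=$(printf '%s\n' "$TEMPLATE" | sed "s/{{code}}/$SELECTION/")
echo "$PROMPT"
```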
AI Security Posture Framework™ (PROTECT Phase):

- Restrict commands via role-based access control (Enterprise):

```json
{
  "enterprise": {
    "allowedCommands": ["explain", "test"],
    "blockedCommands": ["delete", "exec"]
  }
}
```
## 7. `config.json` Deep Dive

### Full Enterprise Example
```json
{
  "models": [
    {
      "title": "Azure OpenAI GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiBase": "https://your-azure-endpoint.openai.azure.com",
      "apiKey": "${AZURE_OPENAI_KEY}",
      "apiVersion": "2024-02-15-preview"
    }
  ],
  "tabAutocompleteEnabled": true,
  "tabAutocompleteModel": {
    "title": "Ollama DeepSeek Coder",
    "provider": "ollama",
    "model": "deepseek-coder:6.7b"
  },
  "customCommands": [
    {
      "name": "audit",
      "prompt": "Audit this code for security vulnerabilities:\n{{code}}\n\nFollow OWASP Top 10 guidelines.",
      "description": "Security audit"
    }
  ],
  "enterprise": {
    "ssoProvider": "okta",
    "auditLogEnabled": true,
    "allowedModels": ["gpt-4", "claude-3-opus"]
  },
  "contextProviders": [
    {
      "name": "docs",
      "params": {
        "maxTokens": 4096
      }
    }
  ]
}
```
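With a config this large, a stray comma or brace is easy to introduce. Before restarting the IDE, you can sanity-check the JSON syntax with the stock `python3 -m json.tool`. A sketch (the stub-creation line only exists so the command has a file to parse if Continue hasn't created one yet):

```shell
# Validate ~/.continue/config.json syntax; json.tool exits non-zero on malformed JSON.
CONFIG="$HOME/.continue/config.json"
mkdir -p "$(dirname "$CONFIG")"
[ -f "$CONFIG" ] || echo '{"models": []}' > "$CONFIG"   # stub if missing

python3 -m json.tool "$CONFIG" > /dev/null && echo "config.json is valid JSON"
```

Note that a strict JSON parser rejects comments, so keep the file comment-free if you rely on this check.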
Key Enterprise Features:
- SSO Integration: Okta/SAML for authentication.
- Model Whitelisting: Restrict to approved models.
- Audit Logs: Compliance-ready logging (AI Security Posture Framework™ COMPLY Phase).
## Alternatives Comparison
| Tool | Best For | Weaknesses | Cost (2026) |
|---|---|---|---|
| Continue | Local models, customization | Smaller community | Free–$20/user/mo |
| GitHub Copilot | GitHub integration | No local models, expensive | $10–$39/user/mo |
| Cursor | VS Code power users | No self-hosting | Free–$20/user/mo |
## What's Next?

- Benchmark Models: Compare `gpt-4` vs. `claude-3-opus` for your use case:

```shell
time continue --model gpt-4 --prompt "Write a Python Flask API"
time continue --model claude-3-opus --prompt "Write a Python Flask API"
```

- Set Up CI/CD: Add `config.json` to your repo and validate it in CI:

```shell
npm install -g @continue/cli
continue validate-config
```

- Explore Self-Hosting: Deploy Continue on-prem for [air-gapped](/services/on-premise-ai) environments:

```shell
docker run -p 3000:3000 -v ~/.continue:/root/.continue continuedev/continue
```
Need help deploying Continue at scale? Hyperion Consulting offers AI tools consulting to optimize your setup for security, cost, and performance. Visit hyperion-consulting.io to learn more.
