TL;DR
- Install Continue in VS Code (`code --install-extension continue.continue`) or JetBrains (via marketplace). Source: VS Code Marketplace; Source: JetBrains Marketplace
- Configure models in `~/.continue/config.json` (OpenAI, Anthropic, Ollama, or local LLMs). Source: Model Setup
- Use tab autocomplete, slash commands (`/edit`, `/test`), and context providers (`@file`, `@codebase`). Source: Commands List; Source: Context Docs
- Customize workflows with agents (v0.9.0+) and enterprise SSO (v0.9.5+). Source: Agents Docs; Source: Enterprise Guide
- Free tier includes local LLMs; Pro ($20/user/month) unlocks unlimited cloud chat. Source: Pricing Page
## 1. Installation
VS Code / Cursor

```bash
# Install via CLI
code --install-extension continue.continue
```

Expected Output: A new sidebar icon (🔄) appears. Click it to open the Continue panel.
Gotcha:
- If the extension fails to load, ensure you're on VS Code ≥1.80. Source: Official Guide
- Restart VS Code after installation.
JetBrains (IntelliJ, PyCharm, etc.)
- Open Settings > Plugins.
- Search for "Continue" and install Source: JetBrains Marketplace.
- Restart the IDE.
Gotcha:
- The JetBrains plugin may freeze on large projects. Disable "Power Save Mode" via File > Power Save Mode. Source: JetBrains Setup
Neovim
Add to your lazy.nvim config:

```lua
{
  "continuedev/continue.nvim",
  config = function()
    require("continue").setup({
      -- Optional: configure the default model here
      default_model = "ollama/codellama",
    })
  end,
}
```

Expected Output: Run :Continue to open the chat interface. Source: Neovim Docs
## 2. Configuring Models
Continue uses a config.json file (located at ~/.continue/config.json). Here’s how to set up common models:
OpenAI (GPT-4)
```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "openai",
      "model": "gpt-4-turbo",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```
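After hand-editing config.json, a malformed file is the most common reason a model silently fails to appear. A quick standalone sanity check with Node (`checkConfig` is a hypothetical helper for illustration, not part of Continue, which performs its own validation on load):

```javascript
// Minimal shape check for a hand-edited config.json.
// Throws on JSON syntax errors or missing required model fields.
function checkConfig(jsonText) {
  const config = JSON.parse(jsonText); // throws on invalid JSON
  if (!Array.isArray(config.models) || config.models.length === 0) {
    throw new Error("config.models must be a non-empty array");
  }
  for (const m of config.models) {
    for (const field of ["title", "provider", "model"]) {
      if (typeof m[field] !== "string") {
        throw new Error(`model entry is missing "${field}"`);
      }
    }
  }
  return config.models.map((m) => m.title);
}

// Example: validate the OpenAI entry shown above.
const sample = JSON.stringify({
  models: [
    { title: "GPT-4", provider: "openai", model: "gpt-4-turbo", apiKey: "YOUR_API_KEY" },
  ],
});
console.log(checkConfig(sample)); // [ 'GPT-4' ]
```

Run it with `node check-config.js` against the JSON you pasted; a thrown error points at the broken entry.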
Where to get the API key: generate one in the OpenAI dashboard at platform.openai.com/api-keys.
Anthropic (Claude 3)
```json
{
  "models": [
    {
      "title": "Claude 3",
      "provider": "anthropic",
      "model": "claude-3-opus-20240229",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```
Gotcha:
- Anthropic’s free tier has strict rate limits (50 messages/month) Source: Continue Docs.
Ollama (Local LLMs)
- Install Ollama and pull a model:

```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull codellama
```

- Configure in config.json:

```json
{
  "models": [
    {
      "title": "CodeLlama",
      "provider": "ollama",
      "model": "codellama"
    }
  ]
}
```

Expected Output:

```
$ ollama run codellama
>>> Hello
Hello! How can I help you today?
```
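Under the hood, the ollama provider talks to Ollama's local HTTP server (port 11434 by default). As a rough sketch of the traffic involved, the snippet below builds (but does not send) a request for Ollama's documented /api/generate endpoint; `buildOllamaRequest` is an illustrative helper, not Continue's actual client code:

```javascript
// Construct a request body for Ollama's /api/generate REST endpoint.
// Endpoint URL and field names follow Ollama's public API documentation.
function buildOllamaRequest(model, prompt) {
  return {
    url: "http://localhost:11434/api/generate",
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

const req = buildOllamaRequest("codellama", "Write a hello world in Python");
console.log(req.url);  // http://localhost:11434/api/generate
console.log(req.body);
```

With Ollama running, you could send it via `fetch(req.url, { method: "POST", body: req.body })`; if that fails, Continue's Ollama integration will fail for the same reason.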
LM Studio (Local Models)
- Download LM Studio from lmstudio.ai.
- Load a model (e.g.,
mistral-7b-instruct). - Configure in
config.json:{ "models": [ { "title": "Mistral", "provider": "lmstudio", "model": "mistral-7b-instruct", "apiBase": "http://localhost:1234/v1" } ] }
## 3. Tab Autocomplete Setup
Enable autocomplete in config.json:

```json
{
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  "tabAutocompleteOptions": {
    "maxPromptTokens": 2048,
    "debounceDelay": 500
  }
}
```
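The `debounceDelay` value is the quiet period (in ms) after your last keystroke before a completion request is sent; a larger value means fewer, later requests. A hypothetical simulation of that behavior (this models the general debouncing idea, not Continue's internal implementation):

```javascript
// Simulate debouncing: a completion request fires only after `delayMs`
// of keyboard silence. Given keystroke timestamps (ms), return the times
// at which a request would actually be sent.
function debounceFireTimes(keystrokes, delayMs) {
  const fires = [];
  for (let i = 0; i < keystrokes.length; i++) {
    const next = keystrokes[i + 1];
    // Fire only if no further keystroke arrives within the delay window.
    if (next === undefined || next - keystrokes[i] >= delayMs) {
      fires.push(keystrokes[i] + delayMs);
    }
  }
  return fires;
}

// Three keystrokes with debounceDelay = 500:
// the first request is cancelled by the keystroke 100 ms later.
console.log(debounceFireTimes([0, 100, 700], 500)); // [ 600, 1200 ]
```

This is why raising `debounceDelay` reduces load on a slow local model: rapid typing collapses into a single request.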
How to use:
- Start typing code (e.g., def hello_world():).
- Press Tab to accept suggestions. Source: Demo
Gotcha:
- Autocomplete may lag on large files (>1K lines). Reduce maxPromptTokens to 1024. Source: Performance Tips
## 4. Chat and Inline Editing
Basic Chat
- Open the Continue sidebar (🔄).
- Ask a question, e.g., "Explain how this React component works."
- Highlight code and click "Ask Continue" in the context menu. Source: Chat Docs
Inline Editing with /edit
- Highlight code and type /edit in the chat.
- Describe the change, e.g., "Refactor this function to use async/await."
- Press Enter to apply the edit.
Example:

```javascript
// Before
function fetchData() {
  return fetch("/api/data").then(res => res.json());
}

// After (inline edit)
async function fetchData() {
  const res = await fetch("/api/data");
  return res.json();
}
```
## 5. Context Providers
Context providers let you reference files, docs, or entire codebases in chat.
@file
Reference a specific file:
@file src/utils/helpers.js Explain this file.
@docs
Pull in documentation (e.g., internal wiki):
```json
{
  "contextProviders": [
    {
      "name": "docs",
      "params": {
        "url": "https://your-company.github.io/docs"
      }
    }
  ]
}
```
Usage:
@docs How do we handle authentication?
@codebase
Index the entire codebase (requires continue-index):

```bash
npm install -g @continuedev/continue-index
continue-index --path ./my-project
```
Usage:
@codebase Where is the user model defined?
Gotcha:
- @codebase indexing can take minutes for large projects (>50K files). Use a .continueignore file to exclude directories. Source: Context Docs
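A .continueignore file sits at the project root and, like a .gitignore, lists patterns to skip. A typical starting point (the directory names below are common examples, not requirements):

```
# .continueignore — exclude bulky or generated directories from indexing
node_modules/
dist/
build/
coverage/
*.min.js
```

Excluding dependency and build output directories is usually enough to bring indexing time down dramatically.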
## 6. Custom Slash Commands
Define custom commands in config.json:
```json
{
  "customCommands": [
    {
      "name": "generate-docs",
      "description": "Generate JSDoc for a function",
      "prompt": "Write JSDoc for this function:\n\n{{selectedCode}}",
      "slashCommand": "/docs"
    }
  ]
}
```
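The `{{selectedCode}}` placeholder is replaced with your current editor selection before the prompt is sent to the model. A minimal sketch of that kind of substitution (`renderPrompt` is an illustrative helper; Continue's own templating engine may differ):

```javascript
// Render a {{placeholder}}-style prompt template from a context object.
// Unknown placeholders are left untouched rather than silently dropped.
function renderPrompt(template, context) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in context ? context[key] : match
  );
}

const prompt = renderPrompt(
  "Write JSDoc for this function:\n\n{{selectedCode}}",
  { selectedCode: "async function fetchData() { ... }" }
);
console.log(prompt);
```

So the model never sees the literal `{{selectedCode}}` token, only the substituted code.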
Usage:
- Highlight a function.
- Type /docs in the chat. Source: Commands List
Example Output:
```javascript
/**
 * Fetches data from the API.
 * @async
 * @returns {Promise<Object>} The JSON response from the API.
 */
async function fetchData() { ... }
```
## 7. config.json Deep Dive
Here’s a full example with enterprise features (SSO, agents):
```json
{
  "models": [
    {
      "title": "GPT-4 (Enterprise)",
      "provider": "openai",
      "model": "gpt-4-turbo",
      "apiKey": "${OPENAI_API_KEY}",
      "apiBase": "https://api.openai.com/v1"
    },
    {
      "title": "Ollama (Local)",
      "provider": "ollama",
      "model": "codellama"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Starcoder2",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  "contextProviders": [
    {
      "name": "docs",
      "params": {
        "url": "https://your-company.github.io/docs"
      }
    }
  ],
  "customCommands": [
    {
      "name": "generate-tests",
      "description": "Write unit tests for this function",
      "prompt": "Write Jest tests for:\n\n{{selectedCode}}",
      "slashCommand": "/test"
    }
  ],
  "enterprise": {
    "ssoProvider": "okta",
    "ssoDomain": "your-company.okta.com",
    "auditLogs": true
  },
  "agents": [
    {
      "name": "refactor-agent",
      "description": "Refactor code and update all references",
      "steps": [
        {
          "name": "identify-changes",
          "prompt": "List all files that need updates for this refactor:\n\n{{selectedCode}}"
        },
        {
          "name": "apply-changes",
          "prompt": "Update the following files:\n\n{{step1Output}}"
        }
      ]
    }
  ]
}
```
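Note the `"apiKey": "${OPENAI_API_KEY}"` entry: keeping secrets in environment variables avoids committing real keys to config.json. A sketch of this `${VAR}` expansion style (`expandEnvVars` is a hypothetical helper shown with an inline environment map rather than `process.env`; Continue resolves these placeholders itself):

```javascript
// Expand ${VAR} placeholders from an environment map.
// Unset variables are left as-is so the failure is visible.
function expandEnvVars(value, env) {
  return value.replace(/\$\{(\w+)\}/g, (match, name) => env[name] ?? match);
}

console.log(expandEnvVars("${OPENAI_API_KEY}", { OPENAI_API_KEY: "sk-example" })); // sk-example
console.log(expandEnvVars("${UNSET_VAR}", {})); // ${UNSET_VAR}
```

In practice, export the variable in your shell profile (e.g., `export OPENAI_API_KEY=...`) before launching the editor.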
Key Fields:
| Field | Purpose |
|---|---|
| models | List of LLMs (local/cloud). Source: Model Setup |
| tabAutocompleteModel | Configure tab suggestions. Source: Demo |
| contextProviders | Add @file, @docs, @codebase. Source: Context Docs |
| customCommands | Define /slash commands. Source: Commands List |
| enterprise | SSO, audit logs (Pro/Enterprise). Source: Enterprise Guide |
| agents | Multi-step workflows (v0.9.0+). Source: Agents Docs |
## Comparison to Alternatives
| Feature | Continue | GitHub Copilot | Cursor |
|---|---|---|---|
| Open Source | ✅ Yes Source: GitHub Releases | ❌ No Source: CodeWhisperer Docs | ❌ No Source: Cursor Comparison |
| Local LLMs | ✅ (Ollama, LM Studio) Source: Model Setup | ❌ (Cloud-only) Source: Continue vs. Copilot | ✅ (via Continue) Source: Cursor Comparison |
| Agentic Workflows | ✅ (v0.9.0+) Source: Agents Docs | ❌ (Limited) Source: Continue vs. Copilot | ❌ (Basic) Source: Cursor Comparison |
| Enterprise SSO | ✅ (v0.9.5+) Source: Enterprise Guide | ✅ Source: Continue vs. Copilot | ❌ Source: Cursor Comparison |
| Pricing | Free + $20/user/month Source: Pricing Page | $10–$20/user/month Source: Continue vs. Copilot | $20/user/month Source: Cursor Comparison |
