Getting Started
Configuration
All configuration lives in merlin.toml in your repository root. Every setting has a sensible default — only override what you need.
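For example, a minimal merlin.toml that overrides only the provider and model (illustrative values; every omitted setting keeps its default) could be:

```toml
[ai]
provider = "openai"
model = "gpt-4o"
```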
Full reference
merlin.toml
```toml
# ── AI Provider ─────────────────────────────────────────────────────
[ai]
# "anthropic" | "openai" | "claude-code" | "gemini" | "bedrock"
# "azure-openai" | "ollama" | "groq" | "together-ai" | "deep-seek"
# "mistral" | "open-router"
provider = "anthropic"
model = "claude-sonnet-4-6"
max_tokens = 4096
temperature = 0.2

# Provider-specific overrides (uncomment as needed)
# ollama_base_url = "http://localhost:11434"
# bedrock_region = "us-east-1"
# azure_openai_endpoint = "https://my-resource.openai.azure.com"
# azure_openai_deployment = "my-gpt4o-deployment"

# ── Review behaviour ─────────────────────────────────────────────────
[review]
focus = ["bugs", "security", "style", "performance"]
max_comments = 30   # cap inline comments per review
chunk_lines = 200   # lines per diff chunk sent to AI
reflect = false     # second-pass comment refinement

# ── RAG pipeline ──────────────────────────────────────────────────────
[rag]
enabled = false
embedder = "openai"  # "openai" | "ollama"
embed_model = "text-embedding-3-small"
# "local" | "memory" | "qdrant" | "chroma" | "pinecone"
store = "local"
collection = "merlin"
top_k = 5
min_score = 0.70
chunk_lines = 80
index_extensions = [".rs", ".ts", ".py", ".go", ".java", ".md"]
local_path = "merlin-rag.jsonl"

# Ollama embedder
# embedder = "ollama"
# embed_model = "nomic-embed-text"
# ollama_base_url = "http://localhost:11434"

# Qdrant
# qdrant_url = "http://localhost:6333"
# qdrant_api_key = ""  # for Qdrant Cloud

# ChromaDB
# chroma_url = "http://localhost:8000"

# Pinecone
# pinecone_host = "https://my-index-xyz.svc.us-east1.pinecone.io"
# pinecone_api_key = ""  # or set PINECONE_API_KEY env var

# ── Autonomous agent ──────────────────────────────────────────────────
[agent]
max_iterations = 10
max_memory_messages = 50
# memory_file = ".merlin-memory.jsonl"
default_channel = "cli"
port = 8090
```
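As a worked example, the commented RAG options in the reference can be combined to index a repository with a local Ollama embedder. This is a sketch assembled from the values shown there, not a prescribed setup:

```toml
[rag]
enabled = true
embedder = "ollama"
embed_model = "nomic-embed-text"
ollama_base_url = "http://localhost:11434"
store = "local"
local_path = "merlin-rag.jsonl"
```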
AI providers
| Provider | provider value | Key env var |
|---|---|---|
| Anthropic Claude | anthropic | ANTHROPIC_API_KEY |
| OpenAI GPT-4o | openai | OPENAI_API_KEY |
| Google Gemini | gemini | GEMINI_API_KEY |
| AWS Bedrock | bedrock | AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY |
| Azure OpenAI | azure-openai | AZURE_OPENAI_API_KEY |
| Claude Code CLI | claude-code | CLAUDE_CODE_TOKEN (headless CI) |
| Groq | groq | GROQ_API_KEY |
| Together AI | together-ai | TOGETHER_API_KEY |
| DeepSeek | deep-seek | DEEPSEEK_API_KEY |
| Mistral AI | mistral | MISTRAL_API_KEY |
| OpenRouter | open-router | OPENROUTER_API_KEY |
| Ollama (local) | ollama | none — local server |
Environment variables
Secrets are always read from environment variables — never put them in merlin.toml.
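A typical pattern is to export the key in your shell profile or CI secret store, and guard against it being unset before invoking merlin. The key value below is a placeholder:

```shell
# Placeholder value; use your real key from the provider console
export ANTHROPIC_API_KEY="sk-ant-xxxx"

# Fail fast with a clear message if the key is unset or empty
: "${ANTHROPIC_API_KEY:?ANTHROPIC_API_KEY must be set}"
echo "ANTHROPIC_API_KEY is set"
```

The `:?` parameter expansion aborts the script with the given message when the variable is missing, which surfaces configuration mistakes before any API call is made.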
AI providers
| Variable | Provider |
|---|---|
| ANTHROPIC_API_KEY | Anthropic Claude |
| OPENAI_API_KEY | OpenAI (review and/or RAG embeddings) |
| GEMINI_API_KEY | Google Gemini |
| AZURE_OPENAI_API_KEY | Azure OpenAI |
| AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY | AWS Bedrock |
| AWS_SESSION_TOKEN | AWS Bedrock (temporary credentials) |
| CLAUDE_CODE_TOKEN | Claude Code CLI headless auth |
| GROQ_API_KEY | Groq |
| TOGETHER_API_KEY | Together AI |
| DEEPSEEK_API_KEY | DeepSeek |
| MISTRAL_API_KEY | Mistral AI |
| OPENROUTER_API_KEY | OpenRouter |
VCS platforms
| Variable | Platform | Notes |
|---|---|---|
| GITHUB_TOKEN | GitHub | Auto-provided by Actions |
| GITLAB_TOKEN | GitLab | Use $CI_JOB_TOKEN in CI |
| BITBUCKET_TOKEN | Bitbucket | Use $BITBUCKET_STEP_TOKEN in CI |
| AZURE_DEVOPS_TOKEN | Azure DevOps | Use $(System.AccessToken) in CI |
| GITEA_TOKEN | Gitea | Auto-provided by Gitea Actions 1.21+ |
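In CI these usually map straight onto the platform's built-in credentials. For example, in a GitLab CI job (a sketch, assuming the runner exposes CI_JOB_TOKEN as usual):

```shell
# GitLab runners inject CI_JOB_TOKEN automatically;
# re-export it under the name merlin expects.
# The :- fallback keeps the script valid outside CI.
export GITLAB_TOKEN="${CI_JOB_TOKEN:-}"
echo "GITLAB_TOKEN configured"
```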
Integrations
| Variable | Purpose |
|---|---|
| PINECONE_API_KEY | Pinecone vector store |
| SNYK_TOKEN | Snyk dependency scanning (/snyk command) |
| JIRA_TOKEN | Jira issue linking (/link_jira command) |
| LINEAR_API_KEY | Linear issue linking (/link_linear command) |
| SLACK_BOT_TOKEN | Slack agent channel |
| DISCORD_BOT_TOKEN | Discord agent channel |
| DISCORD_CHANNEL_ID | Discord channel to post in |
| MERLIN_GITHUB_SECRET | HMAC secret for GitHub webhook verification |
| MERLIN_GITLAB_SECRET | Token for GitLab webhook verification |
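MERLIN_GITHUB_SECRET must match the secret configured on the webhook itself. One common way to generate a strong random value (assuming openssl is installed) is:

```shell
# Emit 32 random bytes as 64 lowercase hex characters,
# suitable for use as a webhook HMAC secret
openssl rand -hex 32
```

Set the same value both in the webhook settings on GitHub and in merlin's environment.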
CLI flags
```shell
# Use a different config file
$ merlin --config /path/to/custom.toml review

# Override output format
$ merlin review --output json
$ merlin run /describe --output json

# Local review (no platform API)
$ merlin review --diff changes.diff
```