One AI can miss things. A council of AIs catches more.
When all models agree, you can be confident it's a real issue. Fix it.
Most models flagged this. Probably worth investigating.
Models conflict. You know exactly where to focus your judgment.
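The three agreement tiers above amount to a simple classification over per-model findings. A minimal sketch, assuming hypothetical shapes (the thresholds here are illustrative, not code-council's actual internals):

```typescript
type Verdict = "unanimous" | "majority" | "split";

// Classify how strongly the council agrees on one finding, given how
// many of the models flagged it. Thresholds are an assumption for
// illustration, not the tool's real logic.
function classify(flaggedBy: number, totalModels: number): Verdict {
  if (flaggedBy === totalModels) return "unanimous"; // every model agrees: fix it
  if (flaggedBy * 2 > totalModels) return "majority"; // most models: investigate
  return "split"; // models conflict: apply your own judgment
}
```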
Your code goes through multiple models simultaneously
Paste code or review git changes
Running in parallel
Group by location
Unanimous/majority/split
Each tool runs your input through multiple AI models in parallel
Run reviews from command line or automate in CI/CD
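The fan-out itself comes down to sending the same input to every model at once and keeping whatever comes back. A minimal sketch, assuming a hypothetical `askModel` that wraps a single OpenRouter call (not code-council's actual implementation):

```typescript
// Hypothetical single-model call; in practice this would hit OpenRouter.
async function askModel(model: string, code: string): Promise<string> {
  return `${model}: reviewed ${code.length} chars`;
}

// Fan the same input out to all models in parallel. allSettled means a
// single failing model drops out instead of sinking the whole review.
async function reviewInParallel(models: string[], code: string): Promise<string[]> {
  const results = await Promise.allSettled(models.map((m) => askModel(m, code)));
  return results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
}
```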
# Generate GitHub Actions workflow (with inline PR comments)
npx @klitchevo/code-council setup workflow
# Generate simple markdown workflow
npx @klitchevo/code-council setup workflow --simple
# Overwrite existing workflow
npx @klitchevo/code-council setup workflow --force
Creates .github/workflows/code-council-review.yml automatically.
# Review git changes
npx @klitchevo/code-council review git --review-type diff
# Review with inline PR comments format
npx @klitchevo/code-council review git --format pr-comments
# Review code from stdin
echo "function foo() {}" | npx @klitchevo/code-council review code
# Review with custom models
npx @klitchevo/code-council review git \
  --models "anthropic/claude-sonnet-4,openai/gpt-4o"
Findings appear as inline comments on the exact lines of code in your PR. Code fixes use GitHub's suggestion syntax for one-click apply.
name: Code Council Review

on:
  pull_request:
    types: [opened, synchronize, ready_for_review]

jobs:
  review:
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Run Code Council Review
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
        run: |
          npx @klitchevo/code-council review git \
            --review-type diff \
            --format pr-comments \
            > review.json
      - name: Post Inline Review
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh api repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews \
            --method POST --input review.json
Code suggestions use GitHub's native syntax with an "Apply suggestion" button. Re-runs automatically clean up old comments.
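For reference, the payload that `gh api` posts follows GitHub's pull-request review API (`body`, `event`, and a `comments` array with `path`, `line`, and `side`). A minimal `review.json` might look like this — the file path, line, and comment text are invented for illustration:

```json
{
  "body": "Code Council review (2 of 4 models agree)",
  "event": "COMMENT",
  "comments": [
    {
      "path": "src/utils.ts",
      "line": 42,
      "side": "RIGHT",
      "body": "Possible null dereference.\n\n```suggestion\nconst name = user?.name ?? \"anonymous\";\n```"
    }
  ]
}
```

The fenced `suggestion` block inside the comment body is what GitHub renders with the one-click "Apply suggestion" button.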
| Option | Description |
|---|---|
| `--models "m1,m2"` | Override default models (comma-separated) |
| `--format` | Output format: `markdown` (default), `json`, `html`, `pr-comments` |
| `--context "..."` | Additional context for the review |
| `--file path` | Read input from a file instead of stdin |
| `--review-type` | For git reviews: `staged`, `unstaged`, `diff`, `commit` |
| `--commit hash` | Commit hash (for the `commit` review type) |
| `--framework` | Frontend framework (`react`, `vue`, `svelte`) |
| `--language` | Programming language for backend reviews |
| Command | Description |
|---|---|
| `setup workflow` | Generate a GitHub Actions workflow for PR reviews |
| `setup workflow --simple` | Use markdown format (no inline comments) |
| `setup workflow --force` | Overwrite an existing workflow file |
| `init` | Generate a config file (`.code-council/config.ts`) |
Config files, environment variables, and customization
# TypeScript config (recommended)
npx @klitchevo/code-council init
# JavaScript in project root
npx @klitchevo/code-council init --js --root
Creates .code-council/config.ts with full type support.
import { defineConfig } from "@klitchevo/code-council/config";

export default defineConfig({
  models: {
    codeReview: ["anthropic/claude-sonnet-4"],
    backendReview: ["openai/gpt-4o"],
  },
  llm: {
    temperature: 0.3,
    maxTokens: 16384,
  },
});
| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | Required. Your OpenRouter API key |
| `CODE_REVIEW_MODELS` | Models for general reviews (JSON array) |
| `FRONTEND_REVIEW_MODELS` | Models for frontend reviews (JSON array) |
| `BACKEND_REVIEW_MODELS` | Models for backend reviews (JSON array) |
| `TEMPERATURE` | Response randomness (0.0-2.0, default: 0.3) |
| `MAX_TOKENS` | Maximum response tokens (default: 16384) |
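Since the model-list variables hold JSON arrays (e.g. `CODE_REVIEW_MODELS='["minimax/minimax-m2.5"]'`), reading them presumably comes down to a `JSON.parse` with a fallback. A sketch of that convention — the fallback behavior here is an assumption, not the tool's documented logic:

```typescript
// Parse a JSON-array env var like CODE_REVIEW_MODELS, falling back to a
// default list when the variable is unset or malformed. (The fallback
// choice is illustrative only.)
function parseModels(raw: string | undefined, fallback: string[]): string[] {
  if (!raw) return fallback;
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed : fallback;
  } catch {
    return fallback;
  }
}
```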
Choose models based on your needs and budget
We picked 4 models that are cheap, fast, and different enough from each other to catch different things. All available through OpenRouter at a fraction of the cost of frontier models. You can swap in Claude or GPT-4 anytime.
Chinese lab model with strong code understanding. Fast inference and very cheap. Good at spotting logical errors and edge cases in business logic.
minimax/minimax-m2.5
Zhipu AI's latest model. Trained on diverse multilingual data, which means it sometimes catches patterns Western-trained models miss. Solid at documentation and naming issues.
z-ai/glm-4.7
Moonshot AI's flagship. Known for strong reasoning and a 128k context window. Handles large files well and tends to be thorough; it sometimes catches security issues the others miss.
moonshotai/kimi-k2.5
One of the best open-weight models available. Competitive with GPT-4 on coding benchmarks at a fraction of the price. Strong at architecture and design pattern analysis.
deepseek/deepseek-v3.2
Budget: ["minimax/minimax-m2.5", "deepseek/deepseek-v3.2"]
Balanced: ["anthropic/claude-sonnet-4", "openai/gpt-4o"]
Premium: ["anthropic/claude-opus-4.5", "openai/gpt-4o"]
Add to your Claude Desktop config and start reviewing
{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key"
      }
    }
  }
}
Get your API key at openrouter.ai/keys