# Review Skills Guide

This guide explains how to create custom code review skills for Reviewflow.
## Overview

### What are Review Skills?
Review skills are Claude Code skills (`SKILL.md` files) that define how automated code reviews should be performed. They contain:

- **Persona definition**: who the reviewer is and their approach
- **Review criteria**: what to check for in the code
- **Output format**: how to structure the review report
- **Markers**: special tags that trigger platform actions
### How Does Reviewflow Work?
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  GitLab/GitHub  │────▶│ Review Automation│────▶│   Claude Code   │
│     Webhook     │     │      Server      │     │  + Your Skill   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
        │                       │                        │
        │ 1. MR/PR event        │ 2. Queue job           │ 3. Run skill
        │ (review_requested)    │    & invoke Claude     │    against MR
        │                       │                        │
        ▼                       ▼                        ▼
┌─────────────────────────────────────────────────────────────────┐
│                       4. Post-processing                        │
│  • Parse [REVIEW_STATS:...] for metrics                         │
│  • Parse [THREAD_RESOLVE:...] for platform actions              │
│  • Execute glab/gh commands                                     │
└─────────────────────────────────────────────────────────────────┘
```

## Quick Start
### Minimal Skill Example

Create a file at `.claude/skills/my-review/SKILL.md`:
```markdown
---
name: my-review
description: My project's code review skill
---

# Code Review

Review the merge request for code quality issues.

## Instructions

1. Read the changed files
2. Check for obvious bugs, security issues, and code smells
3. Output a summary with blocking issues and suggestions

## Output Format

# Review Report

## Summary

[One sentence assessment]

## Blocking Issues

- Issue 1
- Issue 2

## Suggestions

- Suggestion 1

[REVIEW_STATS:blocking=X:warnings=Y:suggestions=Z:score=N]
```

### Required config.json Fields
Create `.claude/reviews/config.json` in your project:

```json
{
  "github": true,
  "gitlab": false,
  "defaultModel": "sonnet",
  "reviewSkill": "my-review"
}
```

| Field | Type | Description |
|---|---|---|
| `github` | boolean | Enable GitHub integration |
| `gitlab` | boolean | Enable GitLab integration |
| `defaultModel` | `"sonnet"` \| `"opus"` | Claude model to use |
| `reviewSkill` | string | Skill name for initial reviews |
| `reviewFollowupSkill` | string | (Optional) Skill for follow-up reviews |
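To make the field constraints above concrete, here is a minimal sketch of a config validator. This is illustrative only, not Reviewflow's actual loader; the checks simply mirror the table's types and the two allowed `defaultModel` values.

```python
import json

# Required fields and their expected JSON types, per the table above.
REQUIRED = {"github": bool, "gitlab": bool, "defaultModel": str, "reviewSkill": str}

def validate_config(raw: str) -> dict:
    """Parse config.json text and raise ValueError on a bad field."""
    cfg = json.loads(raw)
    for field, typ in REQUIRED.items():
        if not isinstance(cfg.get(field), typ):
            raise ValueError(f"config.json: '{field}' must be a {typ.__name__}")
    if cfg["defaultModel"] not in ("sonnet", "opus"):
        raise ValueError("config.json: 'defaultModel' must be 'sonnet' or 'opus'")
    return cfg

cfg = validate_config('{"github": true, "gitlab": false, '
                      '"defaultModel": "sonnet", "reviewSkill": "my-review"}')
print(cfg["reviewSkill"])  # my-review
```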
### Your First Review

1. Create the skill file as shown above
2. Create the `config.json`
3. Assign a reviewer to your MR/PR that matches your webhook configuration
4. The review will run automatically and post results
## Markers Reference

Skills communicate with the automation server through markers (`[MARKER_TYPE:params]`) in their output. The most important marker is `[REVIEW_STATS:blocking=N:warnings=N:suggestions=N:score=N]`, which is required for metrics tracking.
For full marker syntax, parameters, and platform commands, see Markers Reference. For the MCP tool equivalent, see MCP Tools Reference.
## Skill Structure

### SKILL.md Anatomy
```markdown
---
name: skill-name        # Unique identifier (kebab-case)
description: Brief desc # Shown in Claude Code skill list
---

# Skill Title

## Context
Define the reviewer persona and approach.

## Instructions
Step-by-step review process.

## Output Format
Expected output structure with markers.

## Examples
Sample outputs for reference.
```

### Frontmatter Fields
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Unique skill identifier (kebab-case) |
| `description` | Yes | Brief description for skill list |
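For illustration, a minimal sketch of reading those two required frontmatter fields from a `SKILL.md` file. Claude Code's own frontmatter handling is more complete; this only shows the name/description structure described above.

```python
import re

def read_frontmatter(skill_md: str) -> dict:
    """Extract key/value pairs from the leading --- block of a SKILL.md."""
    m = re.match(r"^---\n(.*?)\n---\n", skill_md, re.DOTALL)
    if not m:
        raise ValueError("SKILL.md is missing its frontmatter block")
    fields = {}
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    for required in ("name", "description"):
        if required not in fields:
            raise ValueError(f"frontmatter is missing '{required}'")
    return fields

skill = "---\nname: my-review\ndescription: My project's code review skill\n---\n# Code Review\n"
print(read_frontmatter(skill)["name"])  # my-review
```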
## Best Practices

- **Be specific**: Define clear review criteria for your project
- **Use markers**: Always include `[REVIEW_STATS:...]` for metrics
- **Structure output**: Use consistent markdown headings
- **Provide examples**: Include sample outputs in your skill
## Platform-Specific Commands

The server automatically translates markers into platform API calls (GitLab `glab` / GitHub `gh`). See the Markers Reference for the full command mapping and the thread ID format for each platform.
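As an illustration of the first step of that translation, here is a hypothetical sketch that scans review output for markers and returns them as structured actions. The actual mapping from each marker to a `glab`/`gh` invocation lives in the Markers Reference and is not reproduced here.

```python
import re

# Generic [MARKER_TYPE:params] pattern; marker names follow this guide.
MARKER_RE = re.compile(r"\[([A-Z_]+):([^\]]+)\]")

def extract_actions(review_output: str) -> list[tuple[str, list[str]]]:
    """Return (marker_type, params) pairs in order of appearance."""
    actions = []
    for marker, params in MARKER_RE.findall(review_output):
        actions.append((marker, params.split(":")))
    return actions

output = ("Resolved.\n[THREAD_RESOLVE:abc123]\n"
          "[REVIEW_STATS:blocking=0:warnings=1:suggestions=2:score=9]")
for marker, params in extract_actions(output):
    print(marker, params)
```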
## Advanced Topics

### Custom Agents for Progress Tracking

Define custom agents in your `config.json`:
```json
{
  "agents": [
    { "name": "architecture", "displayName": "Architecture" },
    { "name": "security", "displayName": "Security" },
    { "name": "testing", "displayName": "Testing" }
  ]
}
```

Your skill should emit matching progress markers:
```
[PROGRESS:architecture:started]
... analysis ...
[PROGRESS:architecture:completed]
```

### Error Handling
- Thread action failures are logged but don't block subsequent actions
- Invalid thread IDs result in 404 errors (logged, not fatal)
- Always check server logs for action execution results
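The tolerant behaviour described above can be sketched as follows: one failing action is logged and skipped, and the remaining actions still run. The command lists are placeholders, not the server's real `glab`/`gh` calls.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reviewflow")

def run_actions(commands: list[list[str]]) -> int:
    """Run each command; log failures instead of aborting. Returns success count."""
    succeeded = 0
    for cmd in commands:
        try:
            subprocess.run(cmd, check=True, capture_output=True)
            succeeded += 1
        except (subprocess.CalledProcessError, FileNotFoundError) as exc:
            # e.g. an invalid thread ID producing a 404: logged, not fatal
            log.warning("action %s failed: %s", cmd, exc)
    return succeeded

# The middle command fails; the other two still execute.
print(run_actions([["true"], ["false"], ["true"]]))  # 2
```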
### Testing Your Skill Locally

Run Claude Code manually:

```bash
cd /path/to/your/repo
claude -p "/my-review 123"
```

Check the output for:
- Correct markdown structure
- Valid markers
- Expected statistics
Verify markers are parsed:

- `[REVIEW_STATS:...]` appears in the output
- Thread markers have valid IDs
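Those checks can be automated with a small script run against the captured output; this is an illustrative sketch, not a tool shipped with Reviewflow.

```python
import re

# Required marker per this guide: [REVIEW_STATS:blocking=N:warnings=N:suggestions=N:score=N]
STATS = r"\[REVIEW_STATS:blocking=\d+:warnings=\d+:suggestions=\d+:score=\d+\]"

def check_markers(output: str) -> list[str]:
    """Return a list of problems; empty means the output looks valid."""
    problems = []
    if not re.search(STATS, output):
        problems.append("missing or malformed [REVIEW_STATS:...] marker")
    return problems

def found_markers(output: str) -> list[str]:
    # Marker types in order of appearance, e.g. ["THREAD_RESOLVE", "REVIEW_STATS"]
    return re.findall(r"\[([A-Z_]+):", output)

good = "# Review Report\n[REVIEW_STATS:blocking=0:warnings=0:suggestions=1:score=9]"
print(check_markers(good))   # []
print(found_markers(good))   # ['REVIEW_STATS']
```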
## Troubleshooting

See Troubleshooting for common issues. For skill-specific debugging, test locally with `claude -p "/my-review 123"` and grep for `\[` to verify markers.
## See Also

- Markers Reference - complete marker syntax, parameters, and platform commands
- Configuration Reference - all configuration options
- Example Skills - ready-made example skills