Configuration
Catalyst uses a two-layer configuration system that keeps secrets out of git while allowing
project metadata to be shared with your team. The setup script (setup-catalyst.sh) generates both
layers automatically.
Project Config (.catalyst/config.json)
Safe to commit. Contains non-sensitive project metadata that Catalyst reads to understand your project structure, ticket conventions, and workflow state mapping.
```json
{
  "catalyst": {
    "projectKey": "acme",
    "repository": { "org": "acme-corp", "name": "api" },
    "project": { "ticketPrefix": "ACME", "name": "Acme Corp API" },
    "linear": {
      "teamKey": "ACME",
      "stateMap": {
        "backlog": "Backlog",
        "todo": "Todo",
        "research": "In Progress",
        "planning": "In Progress",
        "inProgress": "In Progress",
        "inReview": "In Review",
        "done": "Done",
        "canceled": "Canceled"
      }
    },
    "thoughts": { "user": null }
  }
}
```
Fields
| Field | Type | Description |
|---|---|---|
| catalyst.projectKey | string | Links to the secrets config file (config-{projectKey}.json) |
| catalyst.repository.org | string | GitHub organization |
| catalyst.repository.name | string | Repository name |
| catalyst.project.ticketPrefix | string | Linear ticket prefix (e.g., “ACME”) |
| catalyst.project.name | string | Human-readable project name |
| catalyst.linear.teamKey | string | Linear team identifier used in ticket IDs (e.g., “ACME” for ACME-123). Must match ticketPrefix. |
| catalyst.linear.stateMap | object | Maps workflow phases to your Linear workspace state names |
| catalyst.thoughts.user | string \| null | HumanLayer thoughts user name |
State Map
The stateMap controls automatic Linear status updates as you move through the development
workflow:
| Key | Updated When | Default |
|---|---|---|
| backlog | Initial ticket state | Backlog |
| todo | Acknowledged, unstarted | Todo |
| research | Running research-codebase | In Progress |
| planning | Running create-plan | In Progress |
| inProgress | Running implement-plan | In Progress |
| inReview | Running create-pr or describe-pr | In Review |
| done | Running merge-pr | Done |
| canceled | Manual cancellation | Canceled |
Set any key to null to skip that automatic transition.
stateMap values are auto-detected from Linear — when you run setup-catalyst.sh with a Linear
API token, the script fetches your team’s actual workflow states and populates stateMap with the
correct names. Manual customization is only needed for non-standard state names.
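As a sketch of how a tool might consume this mapping (a hypothetical helper, not Catalyst's actual implementation), a lookup that honors the null-to-skip rule could look like:

```python
# Hypothetical helper illustrating stateMap lookups, including the
# documented rule that a null value skips the automatic transition.
def resolve_state(config, phase):
    state_map = config["catalyst"]["linear"]["stateMap"]
    return state_map.get(phase)  # None means: skip this transition

config = {"catalyst": {"linear": {"stateMap": {
    "research": "In Progress",
    "inReview": "In Review",
    "done": None,  # null in JSON: no automatic transition on merge
}}}}

print(resolve_state(config, "research"))  # In Progress
print(resolve_state(config, "done"))      # None (skipped)
```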
Plain-Language State Flow
In most teams, the intended meaning is:
- research — Catalyst is still understanding the problem and the current code
- planning — the implementation approach is being written and reviewed
- inProgress — code changes are actively being made
- inReview — a PR exists and is being worked through review and CI
- done — the PR has merged
This is useful because the PR stage is not just “waiting on somebody else.” In Catalyst’s model,
inReview still includes active follow-up work such as fixing CI, addressing automated review
feedback, updating the PR description, and re-checking merge readiness.
GitHub Merge Rules Are Separate
Catalyst can open PRs, watch checks, address review comments, and try to merge safely. But GitHub
decides what is actually required before changes can be merged into main.
Those merge requirements live in GitHub branch protection or repository rulesets, not in
.catalyst/config.json.
If you want GitHub to block merges until review is complete, configure that in GitHub:
- require pull requests for main
- require status checks before merge
- require one or more approving reviews
- require conversation resolution if review threads must be closed
- optionally enable auto-merge once those requirements pass
Catalyst should behave as if these gates matter, but only GitHub can enforce them.
Recommended GitHub Repo Settings
For most teams using Catalyst, the best default is autonomous mode: let Catalyst work the PR to completion, but make GitHub enforce the quality gates around checks and unresolved review comments.
Repository Settings
- Enable pull requests.
- Enable squash merge.
- Enable auto-merge.
- Enable automatic deletion of head branches after merge.
- Set the default branch to main.
main Ruleset
Target refs/heads/main with an active branch ruleset that:
- blocks direct deletion
- blocks non-fast-forward pushes
- requires pull requests for changes into main
- requires review conversations to be resolved before merge
- requires status checks to pass before merge
For autonomous mode, set:
- required approving reviews: 0
- required review thread resolution: true
- required status checks: true
This gives you a fully automated merge path where Catalyst can:
- open the PR
- wait for checks and bot comments
- fix actionable feedback
- resolve review threads
- merge once the PR is genuinely clean
without waiting for a human approval click.
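If you prefer to codify the ruleset rather than click through the UI, a payload along these lines could be sent to GitHub's repository rulesets API. This is a sketch from memory of the public API shape; verify rule types and parameter names against GitHub's current rulesets documentation before use:

```json
{
  "name": "main",
  "target": "branch",
  "enforcement": "active",
  "conditions": { "ref_name": { "include": ["refs/heads/main"], "exclude": [] } },
  "rules": [
    { "type": "deletion" },
    { "type": "non_fast_forward" },
    {
      "type": "pull_request",
      "parameters": {
        "required_approving_review_count": 0,
        "required_review_thread_resolution": true,
        "dismiss_stale_reviews_on_push": false,
        "require_code_owner_review": false,
        "require_last_push_approval": false
      }
    },
    {
      "type": "required_status_checks",
      "parameters": {
        "strict_required_status_checks_policy": false,
        "required_status_checks": [{ "context": "Cloudflare Pages" }]
      }
    }
  ]
}
```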
For this repo shape, the recommended required check currently enabled in GitHub is:
Cloudflare Pages
Once your repository runs the following checks on every PR to main, you should add them as
required checks too:
- audit-references
- check-versions
- validate
Cloudflare Pages covers preview deploy readiness. The other three checks are repository-owned
guardrails:
- audit-references catches broken plugin references
- check-versions verifies plugin changes are releasable through Release Please
- validate checks release configuration consistency
If your repository has additional always-on checks, add them too. The important rule is: only mark a
check as required if it runs on every PR to main.
Optional Human-In-The-Loop Mode
If you want a human signoff before merge, keep everything above and additionally set:
- required approving reviews: 1 or more
That changes the operating model from autonomous shipping to human-approved shipping. Catalyst still does the same review-follow-up work, but GitHub will not allow the merge until a human reviewer approves it.
Review Expectations
The recommended operating model is:
- automated reviewers can leave comments and request fixes
- Catalyst should address actionable review feedback and resolve threads
- GitHub should block merge until required conversations and checks are complete
- human approval should be optional and controlled by the repository owner, not assumed by Catalyst
Why This Split Matters
Catalyst can do the work of:
- opening the PR
- waiting for checks
- reading bot and human review comments
- fixing code
- updating the PR
- attempting the merge once the PR is clean
But the repository settings are what make those expectations enforceable for every contributor, not just when Catalyst happens to be driving.
Secrets Config (~/.config/catalyst/config-{projectKey}.json)
Never committed. One file per project, linked by projectKey.
```json
{
  "catalyst": {
    "linear": { "apiToken": "lin_api_...", "teamKey": "ACME", "defaultTeam": "ACME" },
    "sentry": { "org": "acme-corp", "project": "acme-web", "authToken": "sntrys_..." },
    "posthog": { "apiKey": "phc_...", "projectId": "12345" },
    "exa": { "apiKey": "..." }
  }
}
```
Integration Fields
| Integration | Required Fields | Used By |
|---|---|---|
| Linear | apiToken, teamKey | catalyst-dev, catalyst-pm |
| Sentry | org, project, authToken | catalyst-debugging |
| PostHog | apiKey, projectId | catalyst-analytics |
| Exa | apiKey | catalyst-dev (external research) |
Only configure the integrations you use. The setup script prompts for each one.
Monitor OTel Config (~/.config/catalyst/config.json)
The orchestration monitor reads OpenTelemetry backend endpoints from a global config file at
~/.config/catalyst/config.json. This file is separate from the per-project secrets files.
```json
{
  "otel": {
    "enabled": true,
    "prometheus": "http://localhost:9090",
    "loki": "http://localhost:3100"
  }
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| otel.enabled | boolean | false | Enable OTel proxy endpoints on orch-monitor |
| otel.prometheus | string | null | Prometheus query URL (for /api/otel/query) |
| otel.loki | string | null | Loki query URL (for /api/otel/logs) |
Environment variable overrides: OTEL_ENABLED, PROMETHEUS_URL, LOKI_URL. Env vars take
precedence over the file when both are set.
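The precedence can be illustrated with a small sketch (hypothetical helper name; the monitor's actual code may differ):

```python
import os

# Hypothetical sketch of the documented precedence: an environment
# variable, when set, wins over the value in ~/.config/catalyst/config.json.
def otel_setting(env_name, json_key, file_config):
    env_val = os.environ.get(env_name)
    if env_val is not None:
        return env_val
    return file_config.get("otel", {}).get(json_key)

file_config = {"otel": {"enabled": True, "prometheus": "http://localhost:9090"}}
os.environ["PROMETHEUS_URL"] = "http://grafana.example.com/api/prom"
print(otel_setting("PROMETHEUS_URL", "prometheus", file_config))  # env wins
print(otel_setting("UNSET_VAR_XYZ", "prometheus", file_config))   # falls back to file
```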
If you’re running the claude-code-otel Docker Compose stack locally, the defaults above match the standard ports. For hosted backends (Grafana Cloud, Datadog, etc.), point these URLs at your hosted Prometheus/Loki-compatible endpoints.
See Setting up the OTel stack for the full installation guide.
AI Briefing
The monitor dashboard supports AI-powered status summaries. Configuration spans both layers:
Project config (.catalyst/config.json) — opt-in toggle:
```json
{ "catalyst": { "ai": { "enabled": true } } }
```
Secrets config (~/.config/catalyst/config-{projectKey}.json) — provider credentials:
```json
{
  "ai": {
    "gateway": "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}",
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "apiKey": "sk-ant-..."
  }
}
```
| Field | Required | Default | Description |
|---|---|---|---|
| ai.enabled | Yes (project config) | false | Master toggle. No API calls when off. |
| ai.gateway | Yes (secrets) | — | Cloudflare AI Gateway URL |
| ai.provider | No | anthropic | AI provider: anthropic or openai |
| ai.model | No | claude-haiku-4-5-20251001 | Model ID |
| ai.apiKey | Yes (secrets) | — | Provider API key |
The AI briefing generates a natural-language status summary and suggests session labels based on Linear ticket context. It is on-demand (button click) or optionally auto-refreshing. Zero cost when disabled.
Worktree Setup
Define the commands that run when creating a new worktree via /create-worktree or /orchestrate. This replaces the default auto-detected setup (dependency install + thoughts init) with full project control — like conductor.json’s lifecycle hooks.
```json
{
  "catalyst": {
    "worktree": {
      "setup": [
        "humanlayer thoughts init --directory ${DIRECTORY} --profile ${PROFILE}",
        "humanlayer thoughts sync",
        "bun install"
      ]
    }
  }
}
```
Commands run in order, inside the new worktree directory. Each command supports variable substitution:
| Variable | Value |
|---|---|
| ${WORKTREE_PATH} | Absolute path to the new worktree |
| ${BRANCH_NAME} | Git branch name |
| ${TICKET_ID} | Same as branch name |
| ${REPO_NAME} | Repository name |
| ${DIRECTORY} | Thoughts directory (from catalyst.thoughts.directory or repo name) |
| ${PROFILE} | Thoughts profile (from catalyst.thoughts.profile or auto-detected) |
If catalyst.worktree.setup is not configured, the script falls back to auto-detected setup: make setup or bun/npm install, then humanlayer thoughts init + sync. Once you define setup, only your commands run — the auto-detection is skipped entirely.
Catalyst now pre-trusts newly created worktrees in Claude Code automatically, so you do not
need to add a separate trust-workspace.sh command to your setup array.
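A minimal sketch of the ${VAR} expansion described above (the values are hypothetical examples; the real substitution happens in Catalyst's shell scripts):

```python
from string import Template

# Hypothetical illustration of the ${VAR} expansion applied to each
# setup command before it runs in the new worktree.
def expand(command, variables):
    return Template(command).substitute(variables)

variables = {
    "WORKTREE_PATH": "/home/dev/catalyst/wt/acme/ACME-123",  # example value
    "BRANCH_NAME": "ACME-123",
    "TICKET_ID": "ACME-123",
    "REPO_NAME": "api",
    "DIRECTORY": "api",
    "PROFILE": "acme",
}
cmd = "humanlayer thoughts init --directory ${DIRECTORY} --profile ${PROFILE}"
print(expand(cmd, variables))
```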
Orchestration Config
Optional. Add this block to enable /orchestrate — see Orchestration for full documentation.
```json
{
  "catalyst": {
    "orchestration": {
      "worktreeDir": null,
      "maxParallel": 3,
      "hooks": { "setup": ["bun install"], "teardown": [] },
      "workerCommand": "/oneshot",
      "workerModel": "opus",
      "testRequirements": {
        "backend": ["unit"],
        "frontend": ["unit"],
        "fullstack": ["unit"]
      },
      "verifyBeforeMerge": true,
      "allowSelfReportedCompletion": false
    }
  }
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| worktreeDir | string \| null | ~/catalyst/wt/<projectKey> | Base directory for worktrees |
| maxParallel | number | 3 | Max concurrent workers |
| hooks.setup | string[] | [] | Run after worktree creation (supports ${WORKTREE_PATH}, ${BRANCH_NAME}, ${TICKET_ID}, ${REPO_NAME}, ${DIRECTORY} variables) |
| hooks.teardown | string[] | [] | Run before worktree removal |
| workerCommand | string | /oneshot | Skill to dispatch in each worker |
| workerModel | string | opus | Model for worker sessions |
| testRequirements | object | See above | Required test types by scope (backend/frontend/fullstack) |
| verifyBeforeMerge | boolean | true | Run adversarial verification before allowing merge |
| allowSelfReportedCompletion | boolean | false | Trust worker’s self-reported completion without verification |
Archive Config
Optional. Controls where orchestrator artifacts are persisted and how long they are retained.
The archive is a hybrid SQLite index plus filesystem blob store written by
catalyst-archive (see ADR-009).
Goes in the global user config at ~/.config/catalyst/config.json:
```json
{
  "archive": {
    "root": "~/catalyst/archives",
    "syncToThoughts": false,
    "retention": { "days": 90 }
  }
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| root | string | ~/catalyst/archives | Root directory for archived blobs. One subdirectory per orchestrator id. |
| syncToThoughts | boolean | false | When true, catalyst-archive sweep also copies the top-level SUMMARY.md to thoughts/shared/handoffs/. |
| retention.days | number \| null | null (no prune) | Default threshold for catalyst-archive prune when --older-than is not supplied. |
Environment variables override these paths when set:
- CATALYST_ARCHIVE_ROOT — overrides archive.root
- CATALYST_RUNS_DIR — orchestrator runtime source (default ~/catalyst/runs)
- CATALYST_DB_FILE — SQLite index path (default ~/catalyst/catalyst.db)
- CATALYST_COMMS_DIR — catalyst-comms source (default ~/catalyst/comms/channels)
The archive root is created on first sweep and tolerates missing optional artifacts (e.g., a worker without a rollup fragment). Re-running the sweep is idempotent (all upserts).
Workflow Context (.catalyst/.workflow-context.json)
Auto-managed by Claude Code hooks and skills. Not committed to git.
```json
{
  "lastUpdated": "2025-10-26T10:30:00Z",
  "currentTicket": "PROJ-123",
  "orchestration": null,
  "mostRecentDocument": {
    "type": "plans",
    "path": "thoughts/shared/plans/...",
    "created": "2025-10-26T10:30:00Z",
    "ticket": "PROJ-123"
  },
  "workflow": { "research": [], "plans": [], "handoffs": [], "prs": [] }
}
```
| Field | Type | Description |
|---|---|---|
| currentTicket | string \| null | Active ticket ID for this worktree |
| orchestration | string \| null | Orchestration run name (set by create-worktree.sh --orchestration). Groups orchestrator + workers for per-run telemetry via the catalyst.orchestration OTel resource attribute. |
This file is what enables skill chaining — when you save research, create-plan finds it
automatically. When you save a plan, implement-plan finds it. You never need to specify file paths
between workflow phases.
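A sketch of how a downstream skill could pick up the previous phase's output from this file (the plan path and file name here are hypothetical examples):

```python
import json

# Sketch of skill chaining: create-plan saved a document, and
# implement-plan discovers it via mostRecentDocument.
context = json.loads("""{
  "currentTicket": "PROJ-123",
  "mostRecentDocument": {
    "type": "plans",
    "path": "thoughts/shared/plans/example-plan.md",
    "ticket": "PROJ-123"
  }
}""")

doc = context.get("mostRecentDocument")
plan_path = doc["path"] if doc and doc["type"] == "plans" else None
print(plan_path)  # the file implement-plan would read
```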
Script API
The workflow-context.sh script manages this file programmatically:
```shell
workflow-context.sh init                            # Create file if missing
workflow-context.sh set-ticket PROJ-123             # Set currentTicket (no document needed)
workflow-context.sh set-orchestration NAME          # Set orchestration run name
workflow-context.sh add research "path" "PROJ-123"  # Add document + set ticket
workflow-context.sh recent research                 # Get most recent document of type
workflow-context.sh most-recent                     # Get most recent document (any type)
workflow-context.sh ticket PROJ-123                 # Get all documents for a ticket
```
Initialization
The workflow context file is created automatically at several points:
- Skill prerequisites — all workflow skills call check-project-setup.sh, which runs workflow-context.sh init
- Worktree creation — create-worktree.sh initializes the file and sets currentTicket from the worktree name (e.g., worktree ENG-123 sets ticket to ENG-123)
- Ticket-based skills — /oneshot PROJ-123 calls set-ticket immediately after parsing the ticket, before any research begins
OpenTelemetry Integration
The workflow context file is also read by direnv to populate
OTEL_RESOURCE_ATTRIBUTES with the current ticket. This enables per-ticket telemetry correlation
in Claude Code’s native OpenTelemetry support.
Setup: Add a .envrc to your repo root:
```shell
source_up
use_otel_context "your-project-name"
```
The use_otel_context function (from ~/.config/direnv/lib/otel.sh) sets these OTEL resource
attributes:
| Attribute | Source |
|---|---|
| project | Argument to use_otel_context |
| hostname | Machine short name |
| git.branch | Current git branch |
| linear.key | Ticket from branch name, fallback to currentTicket in workflow context |
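Conceptually, these attributes end up serialized into the standard comma-separated OTEL_RESOURCE_ATTRIBUTES format; a sketch with hypothetical values (the real ones come from otel.sh at direnv load time):

```python
# Sketch only: how the four attributes might serialize into the
# standard OTEL_RESOURCE_ATTRIBUTES key=value,key=value form.
attrs = {
    "project": "acme",
    "hostname": "devbox",
    "git.branch": "ACME-123",
    "linear.key": "ACME-123",
}
otel_resource_attributes = ",".join(f"{k}={v}" for k, v in attrs.items())
print(otel_resource_attributes)
```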
source_up inherits environment from parent .envrc files (e.g., profile-based secrets at the
workspace root). When using worktrees, create-worktree.sh generates a .envrc and runs
direnv allow automatically.
direnv Setup (Recommended)
direnv is recommended when working across multiple repositories. It automatically loads per-directory environment variables, keeping API keys isolated between projects and populating OTel resource attributes for observability.
Installation
Section titled “Installation”brew install direnvAdd the shell hook to your profile (~/.zshrc or ~/.bashrc):
```shell
eval "$(direnv hook zsh)"  # or bash
```
Library Functions
Catalyst ships two direnv library functions. Install them to ~/.config/direnv/lib/ so they’re
available in all .envrc files:
use_profile — loads environment variables from a named profile file:
```shell
# Loads vars from ~/.config/direnv/profiles/{name}.env
# Later profiles override earlier ones.
```
use_otel_context — sets OTEL_RESOURCE_ATTRIBUTES for telemetry correlation:
```shell
# Sets project, hostname, git.branch, linear.key, catalyst.orchestration
```
Profile Files
Create profile files at ~/.config/direnv/profiles/ to separate credentials by project:
```
~/.config/direnv/profiles/
├── personal.env     # Global defaults (Cloudflare, AWS, PostHog)
├── adva.env         # Client-specific keys (Supabase, Postmark, geocoding APIs)
├── slides.env       # Project-specific keys (ElevenLabs, Gemini TTS)
└── accounting.env   # Project-specific keys (Wave, Monarch)
```
Each file is a simple KEY=value format — no export prefix needed (direnv handles that).
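A sketch of the parsing that use_profile implies (a hypothetical re-statement in Python; the real function is a shell library):

```python
# Hypothetical illustration of what use_profile does with a profile
# file: each non-comment KEY=value line becomes an exported variable.
def parse_profile(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

profile_text = "API_KEY=abc123\n# a comment\nREGION=us-east-1\n"
print(parse_profile(profile_text))
```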
Per-Project .envrc Files
Each project root gets an .envrc file that layers profiles and sets OTel context:
```shell
use_profile personal      # Base credentials
use_profile acme          # Client-specific overrides
use_otel_context "acme"   # OTel resource attributes
```
Sub-directories (e.g., Conductor workspaces or worktrees) inherit from the parent:
```shell
source_up                 # Inherit from parent .envrc
use_otel_context "acme"   # OTel context for this workspace
```
The source_up directive walks up the directory tree until it finds a parent .envrc, chaining
configurations. This means worktrees and Conductor workspaces automatically get the parent project’s
API keys without duplicating them.
Why This Matters for Multi-Repo Work
Without direnv, API keys end up in shell profiles (.zshrc) where they’re global — every project
sees every key. With direnv profiles:
- Credentials are scoped — cd into a project and only its keys are loaded
- OTel attributes are automatic — every Claude Code session gets the right project and linear.key labels without manual configuration
- Worktrees inherit — source_up means new worktrees get the right environment immediately
- No secret leakage — .envrc files are committed (they reference profiles, not secrets); profile .env files are local-only
Thoughts System
The thoughts system provides git-backed persistent context across sessions. The setup script handles initialization, but for manual setup:
```shell
cd /path/to/your-project
humanlayer thoughts init

# Or with a specific profile for multi-project isolation
humanlayer thoughts init --profile acme
```
Directory structure:
```
<org_root>/
├── thoughts/                 # Shared by all org projects
│   ├── repos/
│   │   ├── project-a/
│   │   │   ├── {your_name}/
│   │   │   └── shared/
│   │   └── project-b/
│   └── global/
├── project-a/
│   └── thoughts/             # Symlinks to ../thoughts/repos/project-a/
└── project-b/
    └── thoughts/             # Symlinks to ../thoughts/repos/project-b/
```
Syncing and Backup
```shell
humanlayer thoughts sync                        # Sync changes
humanlayer thoughts status                      # Check status
humanlayer thoughts sync -m "Updated research"  # Sync with message
```
```shell
# Back up to GitHub
cd <org_root>/thoughts
gh repo create my-thoughts --private --source=. --push
```
Switching Projects
Change projectKey in .catalyst/config.json to point to a different secrets file:
```json
{ "catalyst": { "projectKey": "work" } }
```
For fully isolated multi-client setups, see Multi-Project Setup.
Troubleshooting
Config not being read
- File exists: ls .catalyst/config.json
- Valid JSON: cat .catalyst/config.json | jq
- Correct location: must be in the .catalyst/ directory (or .claude/ for backward compat)
- Secrets file exists: ls ~/.config/catalyst/config-{projectKey}.json
Thoughts not syncing
```shell
humanlayer thoughts status
humanlayer thoughts init    # Re-initialize if needed
```