
# Best Practices

These best practices are derived from Anthropic’s context engineering principles and tested across real-world projects.

Structure every plan with these six sections:

1. Overview — What and why
2. Current state analysis — What exists now
3. Desired end state — Clear success definition
4. What we’re NOT doing — Explicit scope control
5. Phases — Logical, incremental steps
6. Success criteria — Separated into automated and manual

Automated — Can be run by agents, deterministic pass/fail:

#### Automated Verification
- [ ] Unit tests pass: `make test-unit`
- [ ] Type checking: `npm run typecheck`
- [ ] API returns 429 when rate limited: `curl -s -o /dev/null -w "%{http_code}" -X POST http://localhost:8080/api/test` prints `429`

Manual — Requires human testing, subjective assessment:

#### Manual Verification
- [ ] Error message is user-friendly
- [ ] Performance acceptable with 10,000 requests
- [ ] Mobile app handles 429 gracefully

Resolve all decisions during planning, not during implementation:

```md
<!-- Bad -->
- Use Redis or maybe in-memory? Need to decide.

<!-- Good -->
- Use Redis (multi-instance deployment requires shared state)
```
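Once the store is decided, the implementation follows directly. A minimal sketch of a fixed-window rate limiter with a pluggable counter store — a plain dict stands in for Redis here, and the class name, limit, and window are illustrative assumptions, not part of the plan:

```python
import time

class RateLimiter:
    """Fixed-window rate limiter over a shared counter store.

    In a multi-instance deployment the store would be Redis, so every
    instance sees the same counts; a dict works for a single-process demo.
    """

    def __init__(self, store, limit=100, window_seconds=60):
        self.store = store          # shared counter store (Redis in production)
        self.limit = limit
        self.window = window_seconds

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        key = (client_id, int(now // self.window))    # one counter per window
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key] <= self.limit          # False -> respond 429

limiter = RateLimiter(store={}, limit=2, window_seconds=60)
results = [limiter.allow("client-a", now=0) for _ in range(3)]
# third call in the same window exceeds the limit
```

Swapping the dict for a Redis client is the only change needed for shared state, which is exactly why the decision had to be made up front.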
An example scope section from a rate-limiting plan:

```md
## What We're NOT Doing

- Not implementing per-endpoint rate limits (global only)
- Not adding a configuration UI (code config only)
- Not handling distributed rate limiting across regions
```

Plans are guides, not rigid scripts. When reality differs:

- File moved? Adapt to the new location.
- Better pattern found? Use it, document the deviation.
- Core approach invalid? Stop and ask before proceeding.

Don’t wait until the end to verify:

```
Phase 1: Implement → Run tests → Fix issues → Mark complete
Phase 2: Implement → Run tests → Fix issues → Mark complete
```
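The per-phase loop above can be sketched in a few lines. Everything here is an illustrative stub — `implement`, `run_tests`, and `fix` stand in for real commands like `make test-unit`:

```python
def implement(phase):
    pass  # write the code for this phase

def run_tests(phase, attempt):
    # stand-in for the test suite; pretend the first run finds one issue
    return ["flaky assertion"] if attempt == 0 else []

def fix(issue):
    pass  # address the reported failure

completed = {}
for phase in ["Phase 1", "Phase 2"]:
    implement(phase)
    attempt = 0
    while issues := run_tests(phase, attempt):   # keep fixing until green
        for issue in issues:
            fix(issue)
        attempt += 1
    completed[phase] = True                      # only now mark the checkbox
```

The key property is that `completed[phase]` is set only after tests pass, so a checked box is always trustworthy.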

Use plan checkboxes to track completion. This enables resumption from any point and eliminates re-verification.
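Because the checkbox syntax is machine-readable, the resume point can be recovered mechanically. A small sketch using a made-up plan excerpt:

```python
import re

plan = """\
- [x] Phase 1: add rate limiter core
- [x] Phase 2: return 429 with Retry-After
- [ ] Phase 3: add metrics
"""

done = re.findall(r"^- \[x\] (.+)$", plan, flags=re.MULTILINE)
todo = re.findall(r"^- \[ \] (.+)$", plan, flags=re.MULTILINE)
resume_at = todo[0] if todo else None   # first unchecked phase
```

A new session can start at `resume_at` without re-verifying the checked phases.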

Give research agents specific, scoped requests:

```
# Good
@catalyst-dev:codebase-analyzer trace how a webhook request flows
from receipt to database storage

# Bad
@catalyst-dev:codebase-analyzer look at webhooks
```

## Use Parallel Agents for Independent Research

```
# Parallel (3x faster)
@catalyst-dev:codebase-locator find payment files
@catalyst-dev:thoughts-locator search payment research
@catalyst-dev:codebase-pattern-finder show payment patterns
```
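The speedup comes from the tasks being independent: dispatched concurrently, three lookups take roughly one lookup's latency instead of three. A sketch of the same idea in plain Python, where `time.sleep` stands in for an agent call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def research(task):
    time.sleep(0.1)                 # stand-in for an agent round-trip
    return f"{task}: findings"

tasks = [
    "locate payment files",
    "search payment research",
    "find payment patterns",
]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(research, tasks))   # all three run concurrently
parallel_elapsed = time.perf_counter() - start
# ~0.1s total, versus ~0.3s if run sequentially
```

This only works when the tasks share no dependencies; if one agent's output feeds another, run them in sequence.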

Always check the codebase for existing implementations before creating new ones. Use codebase-pattern-finder to discover established conventions.

| Anti-Pattern | Why It’s Bad | Better Approach |
| --- | --- | --- |
| Loading entire codebase upfront | Wastes context, includes irrelevant files | Progressive discovery with agents |
| Monolithic research requests | No parallelization, unclear scope | Parallel focused agents |
| Vague success criteria | Can’t verify completion | Separated automated/manual checks |
| Implementing without planning | Misses existing patterns, duplicates code | Research → plan → implement |
| Losing context between sessions | Must re-research everything | Persist to thoughts system |
| Scope creep in plans | Never finishes, delays delivery | Explicit “what we’re NOT doing” |
Choose the right document type for the work at hand:

- New plan: Starting a feature, refactoring, or complex bug fix
- Research doc: Evaluating options, investigating patterns, documenting decisions
- Append to existing: Updating based on new findings or progress
Follow the naming conventions:

```
Research: YYYY-MM-DD-PROJ-XXXX-description.md
Plans:    YYYY-MM-DD-PROJ-XXXX-description.md
Handoffs: YYYY-MM-DD_HH-MM-SS_description.md
PRs:      pr_{number}_{description}.md
```
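A small helper makes the convention hard to get wrong. This is an illustrative sketch — the `slugify` rules and the `PROJ-XXXX` ticket format are assumptions, not project requirements:

```python
import re
from datetime import date

def slugify(text):
    # lowercase, collapse non-alphanumerics into single hyphens
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def research_filename(ticket, description, day=None):
    day = day or date.today()
    return f"{day:%Y-%m-%d}-{ticket}-{slugify(description)}.md"

name = research_filename("PROJ-0042", "Rate limiting options",
                         day=date(2025, 1, 15))
# "2025-01-15-PROJ-0042-rate-limiting-options.md"
```

Consistent names keep research and plan documents sortable by date and searchable by ticket.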

Run `humanlayer thoughts sync` after creating or updating plans, completing research, finishing implementation, and making architectural decisions.