/cf-optimize

Context footprint: ⚡⚡ (medium) — ~1K–2.5K tokens injected into prompt

Structured optimization with before/after measurement.


The /cf-optimize skill systematically improves performance by measuring baseline metrics, analyzing bottlenecks, getting your confirmation on the approach, implementing optimizations with tests, and comparing before/after results.

This is an auto-invoked skill — it activates automatically when it detects performance-related context in the conversation (e.g. "this is slow", "optimize", "bottleneck", "high latency"). You can also trigger it manually with /cf-optimize.

Usage

/cf-optimize [target]

Or simply describe a performance problem — the skill activates automatically:

This endpoint is taking 3 seconds to respond

Workflow

  1. Detect Available Tools — Checks for installed profiling/benchmarking tools (hyperfine, clinic, 0x, lighthouse, perf). Falls back to AI-only mode if none are found
  2. Understand the Target — Reads the target code to understand the current implementation. If the optimization goal is vague, asks what "better" means: faster? Less memory? Fewer API calls?
  3. Baseline Measurement — Runs a benchmark 3 times using detected tools (or manual timing) and records metrics with units and environment details
  4. Analyze Bottlenecks — Profiles the code path to identify the actual bottleneck (no guessing) and ranks bottlenecks by impact
  5. Plan the Optimization — Proposes 1–2 approaches for the top bottleneck, with expected improvement and risks, and asks for your confirmation before implementing
  6. Implement — Dispatches the cf-implementer agent with strict TDD discipline: verify existing tests pass, implement the optimization, confirm no regressions
  7. Measure After — Runs the exact same benchmark from Step 3, again 3 times for stable numbers
  8. Compare & Report — Shows a before/after table with percentage changes, flagging improvements under 5% as possibly within noise. If performance regressed, reverts and tries a different approach
  9. Auto-Review — Automatically runs /cf-review once the optimization is verified
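
The baseline and after measurements (Steps 3 and 7) come down to repeated timed runs. A minimal sketch in Python, where `benchmark` and `target` are hypothetical names standing in for the skill's internal timing, not its actual implementation:

```python
import statistics
import time

def benchmark(fn, runs=3):
    """Time fn over several runs; return (avg_ms, all samples in ms)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)  # wall-clock ms
    return statistics.mean(samples), samples

def target():
    # Hypothetical stand-in for the code being optimized.
    sum(i * i for i in range(10_000))

avg_ms, samples = benchmark(target)
print(f"baseline: {avg_ms:.2f}ms avg over {len(samples)} runs")
```

Running the identical harness before and after (same runs, same environment) is what makes the comparison in Step 8 meaningful.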

Examples

/cf-optimize getUserById query
/cf-optimize search endpoint response time
/cf-optimize database migration script

What Gets Measured

  • Execution Time — Wall-clock and CPU time
  • Memory Usage — Peak and average memory consumption
  • Database Queries — Query count and slow query analysis
  • Network I/O — Request/response sizes and latency
  • Throughput — Requests per second (for endpoints)
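
For instance, the first two metrics (wall-clock time and peak memory) can be captured in plain Python with `time.perf_counter` and `tracemalloc`. This is a sketch of the idea, not the skill's actual instrumentation:

```python
import time
import tracemalloc

def measure(fn):
    """Return (wall_ms, peak_kib) for a single call to fn."""
    tracemalloc.start()
    start = time.perf_counter()
    fn()
    wall_ms = (time.perf_counter() - start) * 1000
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return wall_ms, peak / 1024

wall_ms, peak_kib = measure(lambda: [i ** 2 for i in range(100_000)])
print(f"{wall_ms:.1f}ms wall, {peak_kib:.0f} KiB peak")
```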

Key Features

  • Always Measured — Every optimization is benchmarked before and after — no "it should be faster" claims
  • One at a Time — Never batches multiple optimizations, so you know exactly what helped
  • TDD Integration — Enforces test-first approach via the cf-implementer agent
  • Auto-Revert — Reverts if the optimization makes things worse or breaks tests
  • Custom Guides — Extend behavior via .coding-friend/skills/cf-optimize-custom/SKILL.md

Output Example

BASELINE (3 runs avg):
  getUserById: 250ms avg, 15 DB queries per call

OPTIMIZATION:
  - Added query result caching (Redis)

AFTER (3 runs avg):
  getUserById: 45ms avg, 1 DB query per call

| Metric     | Before | After | Change            |
| ---------- | ------ | ----- | ----------------- |
| Avg time   | 250ms  | 45ms  | -82%              |
| DB queries | 15     | 1     | -93%              |
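
The Change column is a percentage relative to the baseline. A sketch of the comparison logic, including the under-5% noise flag from Step 8 (`compare` is a hypothetical helper; the numbers come from the table above):

```python
def compare(before, after, noise_threshold=5.0):
    """Percentage change relative to baseline, plus a within-noise flag."""
    change = (after - before) / before * 100
    within_noise = abs(change) < noise_threshold
    return round(change), within_noise

print(compare(250, 45))  # avg time: clear improvement
print(compare(15, 1))    # DB queries: clear improvement
print(compare(100, 97))  # a 3% change would be flagged as possible noise
```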

When It Activates

The skill auto-invokes when it detects performance-related signals:

  • "this is slow", "make it faster", "optimize", "speed up", "takes too long"
  • "bottleneck", "high latency", "timeout", "too many queries"
  • "memory leak", "reduce load time", "O(n²)", "N+1"
  • Performance concerns flagged by /cf-review or /cf-fix

It does NOT auto-invoke for minor refactors or style changes that are not performance-related.
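
The "N+1" signal refers to issuing one query per item instead of a single batched query, the same kind of bottleneck the example report above eliminates. A toy illustration (all names hypothetical):

```python
class CountingDB:
    """Toy database client that counts queries, to show the N+1 pattern."""
    def __init__(self):
        self.queries = 0

    def fetch_user(self, user_id):
        self.queries += 1          # one query per call
        return {"id": user_id}

    def fetch_users(self, user_ids):
        self.queries += 1          # one batched query for all ids
        return [{"id": u} for u in user_ids]

ids = list(range(15))

db = CountingDB()
users = [db.fetch_user(u) for u in ids]   # N+1 style: 15 queries
n_plus_one_count = db.queries

db = CountingDB()
users = db.fetch_users(ids)               # batched: 1 query
print(n_plus_one_count, "vs", db.queries)
```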

Supported Profiling Tools

| Tool                    | Domain     | Used For                                       |
| ----------------------- | ---------- | ---------------------------------------------- |
| hyperfine               | General    | Precise CLI benchmarking with warmup and stats |
| clinic                  | Node.js    | Flamegraphs, I/O profiling                     |
| 0x                      | Node.js    | Lightweight flamegraph generation              |
| lighthouse              | Web        | Web performance audit (LCP, FID, CLS)          |
| perf                    | Linux      | CPU counters, cache misses                     |
| webpack-bundle-analyzer | JS bundles | Bundle size visualization                      |

When no tools are detected, cf-optimize uses AI-only mode with manual timing and code instrumentation. Results are clearly marked as estimates.
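
In AI-only mode, manual timing can be as simple as a decorator wrapped around the suspect function. A sketch (names are illustrative, not the skill's real instrumentation):

```python
import functools
import time

def timed(fn):
    """Print wall-clock time per call; a crude stand-in for a profiler."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"[timing] {fn.__name__}: {elapsed_ms:.2f}ms (estimate)")
        return result
    return wrapper

@timed
def get_user_by_id(user_id):
    # Illustrative target function.
    return {"id": user_id}

get_user_by_id(42)
```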

When to Use Manually

  • Performance bottlenecks identified by users
  • Database queries timing out
  • Page load times slowing growth
  • API endpoints hitting rate limits
  • Memory leaks in long-running processes