/cf-optimize
Structured optimization with before/after measurement.
Context footprint: ⚡⚡ (medium), roughly 1K–2.5K tokens injected into the prompt.
The /cf-optimize skill systematically improves performance by measuring baseline metrics, analyzing bottlenecks, getting your confirmation on the approach, implementing optimizations with tests, and comparing before/after results.
This is an auto-invoked skill — it activates automatically when it detects performance-related context in the conversation (e.g. "this is slow", "optimize", "bottleneck", "high latency"). You can also trigger it manually with /cf-optimize.
Usage
/cf-optimize [target]
Or simply describe a performance problem — the skill activates automatically:
This endpoint is taking 3 seconds to respond
Workflow
1. Detect Available Tools — Checks for installed profiling/benchmarking tools (`hyperfine`, `clinic`, `0x`, `lighthouse`, `perf`). Falls back to AI-only mode if none are found
2. Understand the Target — Reads the target code to understand the current implementation. If the optimization goal is vague, asks what "better" means: faster? less memory? fewer API calls?
3. Baseline Measurement — Runs a benchmark 3 times using the detected tools (or manual timing). Records metrics with units and environment details
4. Analyze Bottlenecks — Profiles the code path to identify the actual bottleneck — no guessing. Ranks bottlenecks by impact
5. Plan the Optimization — Proposes 1–2 approaches for the top bottleneck with expected improvement and risks. Asks for your confirmation before implementing
6. Implement — Dispatches the `cf-implementer` agent with strict TDD discipline: verify existing tests pass, implement the optimization, confirm no regressions
7. Measure After — Runs the exact same benchmark from Step 3, again 3 times for stable numbers
8. Compare & Report — Shows a before/after table with percentage changes. Flags improvements under 5% as possibly within noise. If performance regressed, reverts and tries a different approach
9. Auto-Review — Automatically runs `/cf-review` after the optimization is verified
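The measurement protocol in Steps 3 and 7 (run the same benchmark 3 times, average the results) can be sketched as follows. This is an illustrative sketch, not the skill's actual code; `benchmark` is a hypothetical helper:

```python
import statistics
import time

def benchmark(fn, runs=3):
    """Call fn `runs` times and return the mean wall-clock time in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.mean(timings)

# The identical benchmark is run before and after the change,
# so the two numbers are directly comparable.
baseline_ms = benchmark(lambda: sum(range(100_000)))
```

Averaging over several runs smooths out one-off noise (cold caches, scheduler jitter), which is why the skill insists on 3 runs for both the baseline and the after measurement.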
Examples
/cf-optimize getUserById query
/cf-optimize search endpoint response time
/cf-optimize database migration script
What Gets Measured
- Execution Time — Wall-clock and CPU time
- Memory Usage — Peak and average memory consumption
- Database Queries — Query count and slow query analysis
- Network I/O — Request/response sizes and latency
- Throughput — Requests per second (for endpoints)
Key Features
- Always Measured — Every optimization is benchmarked before and after — no "it should be faster" claims
- One at a Time — Never batches multiple optimizations, so you know exactly what helped
- TDD Integration — Enforces a test-first approach via the `cf-implementer` agent
- Auto-Revert — Reverts if the optimization makes things worse or breaks tests
- Custom Guides — Extend behavior via `.coding-friend/skills/cf-optimize-custom/SKILL.md`
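The auto-revert rule amounts to a simple decision: keep the change only if tests still pass and the benchmark did not regress. A minimal sketch (the function name is illustrative, not part of the skill):

```python
def should_revert(before_ms, after_ms, tests_pass):
    """Keep an optimization only if tests still pass and timing did not regress."""
    if not tests_pass:
        return True  # a broken test suite always triggers a revert
    return after_ms > before_ms  # slower than baseline -> revert
```

Note that a change which breaks tests is reverted even if it made the code faster.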
Output Example
BASELINE (3 runs avg):
getUserById: 250ms avg, 15 DB queries per call
OPTIMIZATION:
- Added query result caching (Redis)
AFTER (3 runs avg):
getUserById: 45ms avg, 1 DB query per call
| Metric | Before | After | Change |
| ---------- | ------ | ----- | ----------------- |
| Avg time | 250ms | 45ms | -82% |
| DB queries | 15 | 1 | -93% |
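The before/after comparison reduces to a signed percentage change plus the under-5% noise flag from Step 8. A minimal sketch, with illustrative function names:

```python
def pct_change(before, after):
    """Signed percentage change from before to after."""
    return (after - before) / before * 100

def compare(metric, before, after, noise_threshold=5.0):
    """Format one row of the before/after report, flagging small deltas as noise."""
    change = pct_change(before, after)
    note = " (possibly within noise)" if abs(change) < noise_threshold else ""
    return f"{metric}: {before} -> {after} ({change:+.0f}%){note}"

print(compare("Avg time (ms)", 250, 45))  # -> Avg time (ms): 250 -> 45 (-82%)
print(compare("DB queries", 15, 1))       # -> DB queries: 15 -> 1 (-93%)
```

These two calls reproduce the -82% and -93% figures in the table above.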
When It Activates
The skill auto-invokes when it detects performance-related signals:
- "this is slow", "make it faster", "optimize", "speed up", "takes too long"
- "bottleneck", "high latency", "timeout", "too many queries"
- "memory leak", "reduce load time", "O(n²)", "N+1"
- Performance concerns flagged by `/cf-review` or `/cf-fix`
It does NOT auto-invoke for minor refactors or style changes that are not performance-related.
Supported Profiling Tools
| Tool | Domain | Used For |
|---|---|---|
| `hyperfine` | General | Precise CLI benchmarking with warmup and stats |
| `clinic` | Node.js | Flamegraphs, I/O profiling |
| `0x` | Node.js | Lightweight flamegraph generation |
| `lighthouse` | Web | Web performance audit (LCP, FID, CLS) |
| `perf` | Linux | CPU counters, cache misses |
| `webpack-bundle-analyzer` | JS bundles | Bundle size visualization |
When no tools are detected, cf-optimize uses AI-only mode with manual timing and code instrumentation. Results are clearly marked as estimates.
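In AI-only mode, "manual timing and code instrumentation" means wrapping the hot function with a timer along these lines. This is a sketch of the idea, not the skill's actual instrumentation; the decorator name is illustrative:

```python
import functools
import time

def timed(fn):
    """Print a rough wall-clock duration for each call (an estimate, not a profile)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{fn.__name__}: {elapsed_ms:.1f} ms (manual timing, estimate)")
        return result
    return wrapper

@timed
def fetch_user(user_id):
    # stand-in for the real code path being measured
    return {"id": user_id}
```

Because such timings include interpreter and I/O noise that a proper profiler would isolate, results from this mode are reported as estimates.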
When to Use Manually
- Performance bottlenecks identified by users
- Database queries timing out
- Page load times slowing growth
- API endpoints hitting rate limits
- Memory leaks in long-running processes