# Context Footprint
Approximate token footprint for each skill and agent when loaded into context.
## How It Works
When you invoke a skill (e.g., /cf-plan), its full SKILL.md file is injected into the conversation prompt. This is the skill's context footprint — the amount of your context window it occupies just by being loaded.
The context footprint measures only the tokens inserted into the prompt when a skill or agent is loaded. It does not reflect the total tokens consumed during execution, which depends on how much code is read, how many tool calls are made, and how long the conversation runs.
Bootstrap context (~1,800 tokens) is loaded every session automatically. This includes the list of available skills, agents, activation signals, and conventions.
Agents run in forked sessions with their own context window, so they don't consume your main conversation's context.
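The arithmetic above can be sketched in a few lines. This is a back-of-envelope illustration only: the ~4 characters/token ratio is a generic heuristic, not the project's actual tokenizer, and `estimate_footprint`/`session_overhead` are hypothetical helper names, not part of Coding Friend.

```python
# Rough footprint estimate for a skill definition.
# ASSUMPTION: ~4 chars/token is a coarse heuristic; real counts
# come from a Claude tokenizer and may differ by ~20-30%.

BOOTSTRAP_TOKENS = 1_800  # always loaded per session (see above)

def estimate_footprint(skill_md: str) -> int:
    """Approximate tokens a SKILL.md occupies when injected."""
    return len(skill_md) // 4

def session_overhead(skill_md: str) -> int:
    """Bootstrap context plus the invoked skill's footprint."""
    return BOOTSTRAP_TOKENS + estimate_footprint(skill_md)

# A 10,000-character SKILL.md costs roughly:
print(session_overhead("x" * 10_000))  # 1800 + 2500 = 4300
```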
## Tier System
Each skill and agent is classified into a tier based on the approximate token count of its definition:
| Tier | Token Range | Meaning |
|---|---|---|
| Low | < 1,000 tokens | Lightweight — small prompt footprint |
| Medium | 1,000 – 2,500 tokens | Moderate — standard prompt footprint |
| High | > 2,500 tokens | Heavy — large prompt footprint |
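The ranges above amount to a simple classifier. A minimal sketch (the handling of the exact 1,000- and 2,500-token boundaries follows the table; `tier` is an illustrative function name, not project code):

```python
def tier(tokens: int) -> str:
    """Map an approximate token count to its footprint tier."""
    if tokens < 1_000:
        return "Low"      # < 1,000 tokens
    if tokens <= 2_500:
        return "Medium"   # 1,000 - 2,500 tokens
    return "High"         # > 2,500 tokens

print(tier(760))    # Low    (e.g. /cf-ship)
print(tier(1_700))  # Medium (e.g. /cf-review)
print(tier(3_600))  # High   (e.g. /cf-plan)
```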
## Skills
### Slash Commands
| Command | Context | Approx. Tokens | Description |
|---|---|---|---|
| /cf-plan | High | ~3,600 | Brainstorm and plan |
| /cf-scan | High | ~3,300 | Scan project and bootstrap memory |
| /cf-remember | High | ~2,600 | Capture project knowledge |
| /cf-warm | High | ~2,600 | Catch up after absence — git history summary |
| /cf-fix | High | ~2,600 | Quick bug fix workflow |
| /cf-teach | Medium | ~2,500 | Personal teacher — conversational storytelling breakdown |
| /cf-learn | Medium | ~2,500 | Extract human learning docs |
| /cf-ask | Medium | ~2,300 | Quick Q&A about codebase |
| /cf-review-in | Medium | ~2,300 | Collect external review results |
| /cf-help | Medium | ~2,000 | Answer questions about Coding Friend |
| /cf-research | Medium | ~1,900 | In-depth research |
| /cf-review | Medium | ~1,700 | Multi-layer code review |
| /cf-optimize | Medium | ~1,700 | Structured optimization |
| /cf-session | Medium | ~1,100 | Save/load sessions |
| /cf-commit | Medium | ~1,000 | Smart conventional commits |
| /cf-review-out | Medium | ~1,000 | Generate review prompt for external AI |
| /cf-ship | Low | ~760 | Verify, commit, push, PR |
### Auto-Invoked Skills
| Skill | Context | Approx. Tokens | Activates When |
|---|---|---|---|
| cf-sys-debug | Medium | ~1,700 | Debugging issues |
| cf-tdd | Medium | ~1,200 | Writing new code |
| cf-verification | Low | ~575 | Before claiming done |
## Agents
Agents run in forked sessions — they get their own context window and don't consume your main conversation's context. Token counts here refer to the size of the agent's system prompt injected into its forked context.
| Agent | Context | Approx. Tokens | Model | Purpose |
|---|---|---|---|---|
| cf-reviewer | High | ~2,500 | Opus | Multi-layer review |
| cf-writer | Medium | ~1,000 | Haiku | Lightweight doc writing |
| cf-writer-deep | Low | ~989 | Sonnet | Deep reasoning docs |
| cf-planner | Low | ~969 | Opus | Task decomposition |
| cf-explorer | Low | ~953 | Haiku | Codebase exploration |
| cf-implementer | Low | ~658 | Opus | TDD implementation |
## Notes
- Token counts are approximate, calculated using `@lenml/tokenizer-claude`. Actual Anthropic token counts may vary by ~20–30%.
- Tier classification is what matters — small variations in exact count don't change the tier.
- These numbers reflect only the prompt-injection size (the skill/agent definition loaded into context), not the total tokens used during execution.
- Custom skill guides (`.coding-friend/skills/<name>-custom/SKILL.md`) add to the skill's prompt footprint when loaded.
- Bootstrap context (~1,800 tokens) is always present regardless of which skill you use.