Context Footprint

Approximate token footprint for each skill and agent when loaded into context.

How It Works

When you invoke a skill (e.g., /cf-plan), its full SKILL.md file is injected into the conversation prompt. This is the skill's context footprint — the amount of your context window it occupies just by being loaded.

The context footprint measures only the tokens inserted into the prompt when a skill or agent is loaded. It does not reflect the total tokens consumed during execution, which depends on how much code is read, how many tool calls are made, and how long the conversation runs.
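For a quick sanity check of a file's footprint before loading it, a common "roughly 4 characters per token" rule of thumb gives a ballpark figure. This heuristic is an assumption for illustration only; the counts on this page were produced with `@lenml/tokenizer-claude`, not this formula:

```python
def estimate_tokens(text: str) -> int:
    """Ballpark token estimate using the common ~4 chars/token heuristic.

    This is NOT the tokenizer used for the figures on this page; expect
    real counts to differ, sometimes substantially.
    """
    return len(text) // 4

# e.g. pass the contents of a SKILL.md file:
# estimate_tokens(open("SKILL.md").read())
```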

Bootstrap context (~1,800 tokens) is loaded every session automatically. This includes the list of available skills, agents, activation signals, and conventions.

Agents run in forked sessions with their own context window, so they don't consume your main conversation's context.
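Putting the pieces together, the prompt overhead of invoking a skill is the always-present bootstrap plus that skill's own footprint. A minimal sketch, using the approximate figures listed on this page (assumed values, not measurements):

```python
# Approximate counts from this page, not exact measurements.
BOOTSTRAP_TOKENS = 1_800  # loaded automatically every session

SKILL_FOOTPRINT = {
    "/cf-plan": 3_600,
    "/cf-ship": 760,
}

def prompt_overhead(skill: str) -> int:
    """Bootstrap context plus the skill's injected SKILL.md footprint."""
    return BOOTSTRAP_TOKENS + SKILL_FOOTPRINT[skill]

print(prompt_overhead("/cf-plan"))  # 5400 tokens before any code is read
```

Note that this measures only what is injected up front; tokens consumed during execution (file reads, tool calls) come on top.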

Tier System

Each skill and agent is classified into a tier based on the approximate token count of its definition:

| Tier | Token Range | Meaning |
| --- | --- | --- |
| Low | < 1,000 tokens | Lightweight: small prompt footprint |
| Medium | 1,000 – 2,500 tokens | Moderate: standard prompt footprint |
| High | > 2,500 tokens | Heavy: large prompt footprint |
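The tier boundaries can be expressed as a small helper. This is a sketch of the classification rule described above, not code from the project:

```python
def tier(tokens: int) -> str:
    """Classify a skill or agent definition by its approximate token count."""
    if tokens < 1_000:
        return "Low"
    if tokens <= 2_500:
        return "Medium"
    return "High"
```

For example, `tier(3_600)` returns `"High"` (matching /cf-plan) and `tier(760)` returns `"Low"` (matching /cf-ship).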

Skills

Slash Commands

| Command | Tier | Approx. Tokens | Description |
| --- | --- | --- | --- |
| /cf-plan | High | ~3,600 | Brainstorm and plan |
| /cf-scan | High | ~3,300 | Scan project and bootstrap memory |
| /cf-remember | High | ~2,600 | Capture project knowledge |
| /cf-warm | High | ~2,600 | Catch up after absence: git history summary |
| /cf-fix | High | ~2,600 | Quick bug fix workflow |
| /cf-teach | Medium | ~2,500 | Personal teacher: conversational storytelling breakdown |
| /cf-learn | Medium | ~2,500 | Extract human learning docs |
| /cf-ask | Medium | ~2,300 | Quick Q&A about codebase |
| /cf-review-in | Medium | ~2,300 | Collect external review results |
| /cf-help | Medium | ~2,000 | Answer questions about Coding Friend |
| /cf-research | Medium | ~1,900 | In-depth research |
| /cf-review | Medium | ~1,700 | Multi-layer code review |
| /cf-optimize | Medium | ~1,700 | Structured optimization |
| /cf-session | Medium | ~1,100 | Save/load sessions |
| /cf-commit | Medium | ~1,000 | Smart conventional commits |
| /cf-review-out | Medium | ~1,000 | Generate review prompt for external AI |
| /cf-ship | Low | ~760 | Verify, commit, push, PR |

Auto-Invoked Skills

| Skill | Tier | Approx. Tokens | Activates When |
| --- | --- | --- | --- |
| cf-sys-debug | Medium | ~1,700 | Debugging issues |
| cf-tdd | Medium | ~1,200 | Writing new code |
| cf-verification | Low | ~575 | Before claiming done |

Agents

As noted above, agents run in forked sessions with their own context window and don't consume your main conversation's context. Token counts here refer to the size of the agent's system prompt injected into its forked context.

| Agent | Tier | Approx. Tokens | Model | Purpose |
| --- | --- | --- | --- | --- |
| cf-reviewer | High | ~2,500 | Opus | Multi-layer review |
| cf-writer | Medium | ~1,000 | Haiku | Lightweight doc writing |
| cf-writer-deep | Low | ~989 | Sonnet | Deep reasoning docs |
| cf-planner | Low | ~969 | Opus | Task decomposition |
| cf-explorer | Low | ~953 | Haiku | Codebase exploration |
| cf-implementer | Low | ~658 | Opus | TDD implementation |

Notes

  • Token counts are approximate, calculated using @lenml/tokenizer-claude. Actual Anthropic token counts may vary by ~20-30%.
  • Tier classification is what matters — small variations in exact count don't change the tier.
  • These numbers reflect only the prompt injection size (the skill/agent definition loaded into context), not the total tokens used during execution.
  • Custom skill guides (.coding-friend/skills/<name>-custom/SKILL.md) add to the skill's prompt footprint when loaded.
  • Bootstrap context (~1,800 tokens) is always present regardless of which skill you use.