Observable Signals
"Don't guess. Observe. Measure. Act."
HEAT works because it focuses on concrete, measurable signals rather than abstract feelings. This page catalogs what to look for, where to find it, and how to tag it.
Signal Categories
1. Work Type Signals
| Tag | Observable Trigger | Data Source | Example |
|---|---|---|---|
| Feature | New functionality task in backlog | Jira/ADO | "Implement user authentication" |
| Bug | Defect ticket, test failure, production error | Bug tracker, logs | "Fix payment processing error" |
| Blocker | Task marked "blocked", no progress for 2+ days | Task status, standup notes | "Waiting on vendor API fix" |
| Support | Helping others, code review, onboarding | PR reviews, Slack threads | "Help junior dev with setup" |
| Config | Environment issues, CI/CD fixes, tooling | Deployment logs, build failures | "Fix staging environment" |
| Research | Spike, POC, architecture decision | Epic labeled "research" | "Evaluate GraphQL vs REST" |
2. Intensity Signals
| Intensity | Physical/Mental Indicators | When to Use |
|---|---|---|
| x1 | Autopilot, effortless, done in <30 min | Routine task you've done 100+ times |
| x2-x3 | Normal focus, standard complexity | Feature work in familiar codebase |
| x5-x7 | Deep concentration, mentally taxing | Complex problem-solving, unfamiliar territory |
| x8-x10 | Exhausting, draining, frustrating | Grinding on blocker, crisis mode |
Tagging Decision Tree
Start here:
├── Building new functionality? → Feature
├── Fixing something broken? → Bug
├── Stuck, waiting, or grinding 2+ days? → Blocker
├── Helping someone else? → Support
├── Environment or tooling issue? → Config
└── Exploring, learning, or deciding? → Research
Detailed Tagging Guide
Work Type Tag: Feature
When to use: Building new functionality that adds value to the product
Observable signals:
- User story labeled "Feature" in backlog
- Epic/theme is "New capability"
- Adds something that didn't exist before
- Expands product capabilities
Examples:
✅ "Implement OAuth2 authentication"
✅ "Add dark mode toggle to settings"
✅ "Create user dashboard with analytics"
✅ "Build API endpoint for data export"
✅ "Develop mobile responsive layout"
❌ "Fix broken login" (→ Bug)
❌ "Optimize slow query" (→ Bug or Config)
❌ "Update documentation" (→ Support)Typical intensity range: x2-x5
- x2-x3: Feature in familiar tech stack
- x4-x5: Feature requiring new library/pattern
- x6+: Feature with complex integration (rare)
Work Type Tag: Bug
When to use: Fixing defects, errors, or unintended behavior
Observable signals:
- Ticket type: "Bug", "Defect", "Issue"
- Something that worked before now broken
- Test failure that needs fixing
- Production error logs
Examples:
✅ "Fix null pointer exception in payment flow"
✅ "Resolve race condition in checkout"
✅ "Correct timezone display bug"
✅ "Fix memory leak in background job"
❌ "Implement error handling" (→ Feature)
❌ "Can't reproduce, investigating" (→ Blocker if stuck, Research if exploring)Typical intensity range: x3-x7
- x3-x4: Simple bug with clear root cause
- x5-x6: Bug requiring debugging, reproduction
- x7+: Intermittent or race condition bugs
When Bug becomes Blocker:
Day 1: "Fix payment bug" (Bug, x5)
Day 2: Still working on it (Bug, x6)
Day 3: Stuck, can't find root cause (Blocker, x7) 🔥 Streak: 3
Work Type Tag: Blocker
When to use: Stuck, waiting, or grinding with no clear path forward
Observable signals:
- Task marked "Blocked" in PM system
- Working on same issue for 2+ days with no resolution
- Waiting on external dependency (vendor, team, approval)
- Investigation with no clear next step
Examples:
✅ "Waiting on vendor API documentation"
✅ "Can't reproduce production bug locally"
✅ "Database migration stuck, seeking help"
✅ "Integration failing, vendor support escalated"
✅ "Performance issue - tried 3 approaches, all failed"
❌ "Waiting 5 minutes for build" (not a blocker, just a pause)
❌ "Scheduled meeting tomorrow for approval" (expected delay)Typical intensity range: x5-x10
- x5-x6: Blocked, but making incremental progress
- x7-x8: Stuck, trying multiple approaches
- x9-x10: Grinding for days, mentally exhausted
Critical: If you tag Blocker for 3+ consecutive days, expect manager intervention (this is by design!)
Work Type Tag: Support
When to use: Helping others, coordination work, knowledge transfer
Observable signals:
- Code review requested
- Junior dev asks for help
- Onboarding new team member
- Answering questions in Slack/Teams
- Pair programming session
- "Quick question" that turns into 30 min
Examples:
✅ "Review Alice's PR for authentication"
✅ "Help Bob debug environment setup"
✅ "Answer team questions about Payment module"
✅ "Onboard new hire - explain codebase architecture"
✅ "Pair with junior on first feature"
❌ "Attend standup" (Meeting, not Support - track separately if needed)
❌ "Write documentation for my feature" (Part of Feature work)Typical intensity range: x1-x3
- x1: Quick code review (<15 min)
- x2: Helping someone debug (30-60 min)
- x3: Extended pairing or onboarding session
Note: High Support intensity across team = knowledge concentration issue (check Bus Factor)
Work Type Tag: Config
When to use: Environment issues, tooling problems, infrastructure work
Observable signals:
- CI/CD pipeline broken
- Local development environment not working
- Deployment script failing
- "Works on my machine" debugging
- Docker/Kubernetes configuration
- Build tool updates
Examples:
✅ "Fix staging database connection"
✅ "Update CI pipeline to use Node 20"
✅ "Debug why tests pass locally but fail in CI"
✅ "Configure new deployment environment"
✅ "Resolve npm package dependency conflict"
❌ "Add new library for feature" (→ Feature)
❌ "Optimize database query" (→ Bug or Feature)Typical intensity range: x1-x5
- x1-x2: Routine config update
- x3-x4: Environment issue requiring debugging
- x5+: Infrastructure problem affecting team (urgent)
Alert threshold: If Config intensity spikes across the team (>15% for a week), the environment is broken — prioritize the fix.
Work Type Tag: Research
When to use: Exploration, POCs, architecture decisions, learning
Observable signals:
- Spike story in backlog
- Proof-of-concept development
- "Evaluate X vs Y" task
- Learning new technology
- Architecture design session
Examples:
✅ "Evaluate GraphQL vs REST for new API"
✅ "POC: Real-time notifications with WebSockets"
✅ "Research best practices for microservices"
✅ "Spike: Feasibility of AI-powered search"
✅ "Learn React hooks (new to me)"
❌ "Google how to use Array.map()" (normal Feature work)
❌ "Read docs for library" (part of Feature)Typical intensity range: x3-x7
- x3-x4: Researching with clear outcome
- x5-x6: Deep exploration, multiple unknowns
- x7: Researching under time pressure (rare)
Strategic value: Research should be 5-10% of total intensity. <5% = innovation starving. >15% = too much exploration, not enough execution.
Intensity Calibration Guide
The Physical Test
Use your end-of-task feeling to calibrate intensity:
| Intensity | Physical Feeling | Mental State | Next Task Readiness |
|---|---|---|---|
| x1 | No fatigue | Autopilot | Ready immediately |
| x2-x3 | Slight focus required | Normal | 5-10 min break helpful |
| x5 | Mentally engaged | Concentrated | 15 min break needed |
| x7 | Mentally tired | Drained | 30 min break needed |
| x8 | Frustrated | Stuck feeling | Need to walk away |
| x10 | Exhausted | Mentally fried | Done for the day |
Calibration tip: At end of day, review your tags. If you're exhausted but logged x3-x4, recalibrate upward.
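This end-of-day check is easy to automate. Below is a minimal sketch of such a calibration nudge; the entry shape and the exhaustion input are illustrative assumptions, not part of HEAT.

```typescript
// Hypothetical end-of-day calibration check (not part of HEAT's API).
interface DayEntry {
  task: string;
  intensity: number; // x1-x10, self-reported
}

// If you feel exhausted but the day's average is around x3-x4 or below,
// the table above says to recalibrate upward.
function calibrationHint(entries: DayEntry[], feltExhausted: boolean): string | null {
  if (!feltExhausted || entries.length === 0) return null;
  const avg = entries.reduce((sum, e) => sum + e.intensity, 0) / entries.length;
  return avg <= 4
    ? `Average logged intensity is x${avg.toFixed(1)} but you feel exhausted - recalibrate upward.`
    : null;
}
```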
Intensity Examples by Work Type
Feature Work Intensity
x2: "Add new field to form" (familiar React component)
x3: "Implement user search with filters" (standard CRUD)
x4: "Build real-time chat feature" (new library, learning)
x5: "Integrate third-party payment gateway" (complex integration)
x6: "Implement complex state machine" (mentally taxing logic)Bug Intensity
x3: "Fix typo causing button misalignment" (obvious fix)
x4: "Resolve validation error" (some debugging needed)
x5: "Fix intermittent test failure" (reproduction required)
x6: "Debug race condition" (complex, multiple attempts)
x8: "Investigate production outage" (high pressure, unclear cause)Blocker Intensity
x5: "Waiting on API key from vendor" (blocked but low stress)
x6: "Can't reproduce bug, tried 2 approaches" (stuck)
x7: "Vendor API returns 500, no docs, day 2" (frustrated)
x9: "Production down, root cause unknown, day 3" (crisis)Common Tagging Scenarios
Scenario 1: Task Interrupted Mid-Work
Situation: Working on Feature, get pulled into urgent Bug
Morning:
├── 9:00 AM: Start Feature (OAuth implementation)
├── 10:30 AM: Tag: Feature, API, x3 (1.5 hours)
├── 10:30 AM: Emergency: Production bug escalated
├── 12:00 PM: Tag: Blocker, SQL, x7 (critical, high pressure)
└── Afternoon: Resume Feature
└── 5:00 PM: Tag: Feature, API, x4 (harder to re-establish context)
Context switching visible: Same Feature work, but x3 → x4 after interruption.
Scenario 2: Pair Programming
Situation: Pairing with junior dev on their feature
Tag options:
├── Option A: Tag as Support, x2 (if mostly helping/teaching)
├── Option B: Tag as Feature, x2 (if actively coding together 50/50)
└── Choose based on: Who's driving? Who's learning?
Rule of thumb:
If you're teaching > coding: Support
If you're coding together equally: Feature (but note lower intensity)
Scenario 3: Multi-Day Blocker
Situation: Stuck on same issue for multiple days
Monday:
└── "Debug SQL deadlock" (Bug, x6)
Tuesday:
└── "Debug SQL deadlock" (Blocker, x7) 🔥 Streak: 2
Reason: No progress, switched to Blocker tag
Wednesday:
└── "Debug SQL deadlock" (Blocker, x8) 🔥 Streak: 3
Manager intervenes: Pairs senior DB expert
Thursday:
└── "Fix SQL deadlock with DBA help" (Bug, x4)
Blocker resolved, back to Bug tag
Key: Don't be afraid to switch from Bug → Blocker when stuck. That's the signal!
Scenario 4: "Quick Question" Turns Into 2 Hours
Situation: Junior dev asks "quick question", becomes deep debugging session
Initial plan:
├── 2:00 PM: Continue Feature work
Reality:
├── 2:05 PM: "Quick question from Bob"
├── 3:45 PM: Finally resolved Bob's environment issue
└── Tag: Support, x3 (1.75 hours, moderate intensity)
Then:
├── 3:45 PM: Try to resume Feature
└── 4:00 PM: Tag: Feature, x4 (context switch tax)
Pattern visible in HEAT: High Support + increased Feature intensity = context switching impact
Data Sources: Where to Find Signals
Primary Sources (Read-Only Integration)
| Signal | Data Source | How HEAT Reads It |
|---|---|---|
| Task ID & Title | Jira, Azure DevOps, Easy Projects | Browser extension DOM scraping or API |
| Task Type | PM system labels | Used to suggest tag (auto-fill) |
| Task Status | "Blocked", "In Progress", "Done" | Suggests Blocker tag if status = Blocked |
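A minimal sketch of what that auto-fill suggestion could look like, assuming simplified task fields; real Jira/ADO payloads and HEAT's actual mapping will differ.

```typescript
// Illustrative auto-fill logic. PmTask fields are assumptions;
// real Jira/ADO payloads differ.
type WorkTag = "Feature" | "Bug" | "Blocker" | "Support" | "Config" | "Research";

interface PmTask {
  title: string;
  type: string;   // PM label, e.g. "Bug", "Story", "Task"
  status: string; // e.g. "Blocked", "In Progress", "Done"
}

function suggestTag(task: PmTask): WorkTag | null {
  if (task.status === "Blocked") return "Blocker"; // status beats type
  switch (task.type.toLowerCase()) {
    case "bug":
    case "defect":
    case "issue":
      return "Bug";
    case "spike":
    case "research":
      return "Research";
    case "story":
    case "feature":
      return "Feature";
    default:
      return null; // "Task" reveals nothing; the developer decides
  }
}
```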
Secondary Sources (Manual Input)
| Signal | Developer Input | Why Manual? |
|---|---|---|
| Intensity | x1-x10 scale | Only developer knows cognitive load |
| Actual work type | Feature/Bug/Blocker/Support/Config/Research | PM labels often inaccurate ("Task" doesn't reveal type) |
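Putting the two source types together, here is one plausible shape for a completed tag entry (illustrative only, not HEAT's schema):

```typescript
// One plausible record shape combining auto-read and manual fields.
type WorkTag = "Feature" | "Bug" | "Blocker" | "Support" | "Config" | "Research";

interface TagEntry {
  // Auto-read from the PM system (read-only)
  taskId: string;
  taskTitle: string;
  // Manual input: only the developer knows these
  tag: WorkTag;      // actual work type, not the PM label
  intensity: number; // x1-x10 cognitive load
  date: string;      // ISO date; feeds the streak detection below
}
```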
Streak Detection (Auto-Generated)
| Signal | HEAT Logic | No Manual Input |
|---|---|---|
| 🔥 Streak | Same tag + same task + consecutive days | Calculated automatically |
| Context Switching Score | Tag variance over time | Derived from tag patterns |
| Bus Factor | User × module concentration | Aggregated from historical tags |
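The table's logic is simple enough to sketch. Below is one way the streak and context-switching signals could be computed; the entry shape and the distinct-tags-per-day proxy for "tag variance" are assumptions, not HEAT internals.

```typescript
// Sketch of two auto-generated signals. Entry shape is assumed.
interface Entry {
  taskId: string;
  tag: string;
  date: string; // ISO date, e.g. "2025-05-01"
}

// 🔥 Streak: same tag + same task on consecutive days.
function streakLength(entries: Entry[], taskId: string, tag: string): number {
  const DAY = 24 * 60 * 60 * 1000;
  const days = Array.from(
    new Set(
      entries.filter((e) => e.taskId === taskId && e.tag === tag).map((e) => e.date)
    )
  )
    .map((d) => new Date(d).getTime())
    .sort((a, b) => a - b);
  let best = days.length > 0 ? 1 : 0;
  let run = best;
  for (let i = 1; i < days.length; i++) {
    run = days[i] - days[i - 1] === DAY ? run + 1 : 1;
    best = Math.max(best, run);
  }
  return best;
}

// Context Switching Score: approximated here as the average number
// of distinct tags used per day over the window.
function contextSwitchingScore(entries: Entry[]): number {
  const tagsPerDay = new Map<string, Set<string>>();
  for (const e of entries) {
    const tags = tagsPerDay.get(e.date) ?? new Set<string>();
    tags.add(e.tag);
    tagsPerDay.set(e.date, tags);
  }
  let total = 0;
  for (const tags of tagsPerDay.values()) total += tags.size;
  return tagsPerDay.size > 0 ? total / tagsPerDay.size : 0;
}
```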
Tag Quality: Common Mistakes
Mistake 1: Logging Hours, Not Effort
❌ Bad:
├── Task A: 4 hours (Feature, x1)
└── Logged time, but intensity doesn't match effort
✅ Good:
├── Task A: 2 hours (Feature, x6)
└── Reflects actual cognitive load
Remember: HEAT measures effort (intensity), not time. If you spent 4 hours but it felt like x6, tag it x6 regardless of duration.
Mistake 2: Being "Tough" About Intensity
❌ "I don't want to seem weak, so I'll tag x3 even though I'm grinding"
Result: Burnout goes invisible, manager can't help
✅ "I'm stuck for 3 days, tagging x8 Blocker so manager knows"
Result: Manager pairs you with senior, blocker resolved
Cultural shift needed: High intensity isn't weakness — it's a signal for support.
Mistake 3: Forgetting to Tag
❌ "I'll tag at end of week" → Forgets details, guesses intensity
✅ Tag at task completion (30 seconds) → Accurate while fresh
Best practice: Tag when you close the task or switch contexts.
Mistake 4: Over-Categorizing
❌ Creates 15 custom tags: Feature-Backend, Feature-Frontend, Feature-API, etc.
Result: Tag Analysis becomes too granular, patterns harder to see
✅ Uses 6 core tags: Feature, Bug, Blocker, Support, Config, Research
Add "Area" metadata separately: API, UI, SQL, DevOpsRecommendation: Start with 6 core tags. Add custom tags only if clear need emerges.
Tag Analysis: Reading Team Patterns
Healthy Team Distribution
Tag Analysis (Ideal):
├── Feature: 40-50%
├── Bug: 15-20%
├── Research: 5-10%
├── Support: 10-15%
├── Config: 5-10%
└── Blocker: <10%
Innovation capacity: 45-60% (Feature + Research)
Shadow work: 40-55% (Bug + Support + Config + Blocker)
Warning Patterns
Pattern 1: High Blocker Intensity
├── Blocker: 25%+ of team intensity
└── Alert: Systemic issue — root cause needed
Pattern 2: Low Feature Percentage
├── Feature: <30%
└── Alert: Innovation starving, firefighting culture
Pattern 3: Config Spike
├── Config: 20%+ (usually <10%)
└── Alert: Environment broken — halt features, fix platform
Pattern 4: Support Concentration
├── Alice: 40% Support (rest of team: 10% avg)
└── Alert: Bus Factor = 1, Alice is knowledge bottleneck
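All four patterns reduce to arithmetic over intensity-weighted tag totals. A minimal sketch, with thresholds taken from the patterns above and an assumed entry shape:

```typescript
// Sketch of the pattern checks. Thresholds are the ones listed above;
// the entry shape is an assumption.
interface Entry {
  user: string;
  tag: string;       // Feature | Bug | Blocker | Support | Config | Research
  intensity: number; // x1-x10
}

// Share of total intensity carried by one tag, as a percentage.
function tagSharePct(entries: Entry[], tag: string): number {
  let tagged = 0;
  let total = 0;
  for (const e of entries) {
    total += e.intensity;
    if (e.tag === tag) tagged += e.intensity;
  }
  return total > 0 ? (tagged / total) * 100 : 0;
}

function teamAlerts(entries: Entry[]): string[] {
  if (entries.length === 0) return [];
  const alerts: string[] = [];
  if (tagSharePct(entries, "Blocker") >= 25)
    alerts.push("Pattern 1: systemic issue - root cause needed");
  if (tagSharePct(entries, "Feature") < 30)
    alerts.push("Pattern 2: innovation starving, firefighting culture");
  if (tagSharePct(entries, "Config") >= 20)
    alerts.push("Pattern 3: environment broken - halt features, fix platform");
  // Pattern 4: one user carrying most Support intensity (Bus Factor signal).
  for (const user of Array.from(new Set(entries.map((e) => e.user)))) {
    const own = entries.filter((e) => e.user === user);
    const share = tagSharePct(own, "Support");
    if (share >= 40)
      alerts.push(`Pattern 4: ${user} at ${share.toFixed(0)}% Support - knowledge bottleneck`);
  }
  return alerts;
}
```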
Quick Reference: Tag Decision Matrix
| If your task is... | Tag | Typical Intensity |
|---|---|---|
| Building new capability | Feature | x2-x5 |
| Fixing something broken | Bug | x3-x7 |
| Stuck/waiting/grinding | Blocker | x5-x10 |
| Helping someone else | Support | x1-x3 |
| Environment/tooling | Config | x1-x5 |
| Exploring/learning | Research | x3-x7 |
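If you want a guardrail, the matrix's typical ranges can back a simple sanity check. The helper below is hypothetical, not a HEAT feature; an atypical combination is worth a second look, not automatically wrong.

```typescript
// Hypothetical sanity check against the typical ranges in the matrix above.
const TYPICAL_RANGE: Record<string, [number, number]> = {
  Feature: [2, 5],
  Bug: [3, 7],
  Blocker: [5, 10],
  Support: [1, 3],
  Config: [1, 5],
  Research: [3, 7],
};

function outsideTypicalRange(tag: string, intensity: number): boolean {
  const range = TYPICAL_RANGE[tag];
  if (!range) return false;
  const [lo, hi] = range;
  return intensity < lo || intensity > hi;
}

// Example: Support at x8 is atypical (expected x1-x3):
// outsideTypicalRange("Support", 8) === true
```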
Intensity guide:
- x1-x2: Easy, routine
- x3-x5: Normal, focused
- x6-x8: Hard, draining
- x9-x10: Exhausting, crisis
"Tagging takes 30 seconds. The visibility lasts forever." 🔥