AI Agent Guide
Instructions for AI coding assistants working with health and biometric data
Purpose
This document is written for AI coding assistants (GitHub Copilot, ChatGPT, Claude, Cursor, etc.) helping developers build features that process health and biometric data across multiple domains.
Your role: Generate code and suggestions that respect the constitutional framework defined in this repository. These are not guidelines — they are hard constraints that must be enforced in every line of code you produce.
Understanding Your Constraints
You are helping build a system that:
- Processes personal health or biometric data from various sources
- Generates insights and recommendations for end users
- Operates in a high-risk domain (health and wellness)
- Must avoid medical claims, diagnoses, and treatment advice
- Prioritizes user safety over feature richness
Domain-Specific Context:
- 🏃 Wearables: Consumer devices (smartwatches, fitness trackers)
- 🧠 BCI: Brain-computer interfaces (EEG, fNIRS, neurofeedback)
- 💭 Therapy: Mental health and emotional wellbeing data
Each domain has specific constraints — refer to the package-specific documentation for details.
If you are uncertain whether something is allowed, the answer is NO until you confirm otherwise with this documentation.
Your Mandatory Workflow
When asked to implement any feature that touches health or biometric data, or generates user-facing health content:
Step 1: Identify the Feature Type
| Feature Type | Risk Level | Required Review |
|---|---|---|
| Data ingestion/storage | Low | Signals documentation |
| Baseline calculation | Medium | /core/baseline.mdx + /core/signals.mdx |
| Deviation detection | Medium | /core/deviation-engine.mdx |
| User-facing notifications | HIGH | /constraints/language-rules.mdx + /constraints/hard-rules.mdx |
| Recommendation generation | CRITICAL | All constraint documents + relevant agent docs |
| New agent behavior | CRITICAL | Full framework review required |
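The table above can be encoded as a lookup so the review requirement is computed rather than remembered. The map and helper below are a minimal sketch with hypothetical names; the table remains the source of truth.

```js
// Minimal sketch (hypothetical names): encode the risk table as a lookup
// so every feature request resolves to an explicit review requirement.
const FEATURE_RISK = {
  dataIngestion:  { level: "LOW",      review: ["signals documentation"] },
  baselineCalc:   { level: "MEDIUM",   review: ["/core/baseline.mdx", "/core/signals.mdx"] },
  deviation:      { level: "MEDIUM",   review: ["/core/deviation-engine.mdx"] },
  notification:   { level: "HIGH",     review: ["/constraints/language-rules.mdx", "/constraints/hard-rules.mdx"] },
  recommendation: { level: "CRITICAL", review: ["all constraint documents", "relevant agent docs"] },
  newAgent:       { level: "CRITICAL", review: ["full framework review"] }
};

function requiredReview(featureType) {
  const entry = FEATURE_RISK[featureType];
  if (!entry) throw new Error(`Unknown feature type: ${featureType}`);
  return entry;
}
```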
Step 2: Apply Hard Rules Filter
Before proceeding, scan your proposed implementation against these absolute prohibitions:
- ❌ Does it make any medical claim, diagnosis, or health assertion?
- ❌ Does it mention supplements, vitamins, or dosages?
- ❌ Does it name diseases, conditions, or disorders?
- ❌ Does it use treatment language (cure, prevent, treat, heal)?
- ❌ Does it command the user ("you should", "you must")?
If any answer is YES, stop and redesign.
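As a coarse first pass, this filter can be approximated with pattern checks. This is a sketch only: the term lists are illustrative and deliberately incomplete, and a clean pass here does not replace reading /constraints/hard-rules.mdx.

```js
// Hypothetical pre-filter for the hard-rules scan. Term lists are
// illustrative samples, not the authoritative rules.
const HARD_RULE_PATTERNS = [
  /\b(diagnos\w*|disease|disorder)\b/i,                   // diagnoses, disease names
  /\b(supplement|vitamin|dosage|magnesium|melatonin)\b/i, // supplement mentions
  /\b(cure|prevent|treat|heal)\b/i,                       // treatment language
  /\byou (should|must|need to|have to)\b/i                // commanding the user
];

function violatesHardRules(text) {
  return HARD_RULE_PATTERNS.some((pattern) => pattern.test(text));
}
```

A match on any user-facing string means stop and redesign; an absence of matches is necessary but not sufficient.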
Step 3: Validate Language
For any user-facing text, check against /constraints/language-rules.mdx:
| Check | Question |
|---|---|
| Suggestion vs. Command | Does it use "consider", "might", "could" instead of "should", "must", "need to"? |
| Observation vs. Diagnosis | Does it describe patterns without claiming to know causality? |
| Optionality | Can the user ignore this without feeling judged or alarmed? |
| Baseline reference | Does it compare to the user's personal baseline, not population averages? |
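Three of these four checks can be roughly mechanized; optionality remains a judgment call that needs human review. A sketch, with illustrative word lists:

```js
// Hypothetical language checks. Word lists are illustrative only; the
// authoritative rules live in /constraints/language-rules.mdx.
const LANGUAGE_CHECKS = {
  usesSuggestionFraming: (t) => !/\b(should|must|need to)\b/i.test(t),
  avoidsDiagnosticFraming: (t) => !/\b(indicates|diagnos\w*|caused by)\b/i.test(t),
  referencesPersonalBaseline: (t) => /\byour (recent average|baseline|personal trends)\b/i.test(t)
};

// Returns the names of any failed checks for a user-facing string.
function validateLanguage(text) {
  return Object.entries(LANGUAGE_CHECKS)
    .filter(([, passes]) => !passes(text))
    .map(([name]) => name);
}
```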
Step 4: Confirm Scope Boundaries
Check /what-we-dont-do.mdx to ensure the feature is within allowed domains. If the feature relates to:
- Supplements → ❌ Out of scope
- Diet prescriptions → ❌ Out of scope
- Training plans → ❌ Out of scope
- Medical advice → ❌ Out of scope
- Longevity claims → ❌ Out of scope
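If feature requests carry topic tags, the scope check can be a simple set lookup. A sketch, assuming a hypothetical tagging scheme:

```js
// Hypothetical scope gate. Tags mirror the out-of-scope list in
// /what-we-dont-do.mdx; the tagging scheme itself is illustrative.
const OUT_OF_SCOPE = new Set([
  "supplements", "diet-prescriptions", "training-plans",
  "medical-advice", "longevity-claims"
]);

function isInScope(topicTags) {
  return topicTags.every((tag) => !OUT_OF_SCOPE.has(tag));
}
```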
Code Patterns: Safe vs. Unsafe
Pattern 1: Generating User Notifications
❌ Unsafe Implementation
```js
function generateSleepAlert(hrv, baseline) {
  if (hrv < baseline * 0.7) {
    return {
      title: "Warning: Poor Recovery",
      body: "Your HRV is dangerously low. You should rest today and avoid exercise. Consider taking magnesium supplements."
    };
  }
}
```

Problems:
- "Warning" and "dangerously" = alarmist language
- "You should" = commanding
- "Avoid exercise" = prescriptive medical advice
- "Consider taking magnesium" = supplement recommendation
✅ Safe Implementation
```js
function generateSleepAlert(hrv, baseline) {
  if (hrv < baseline * 0.7) {
    return {
      title: "Pattern Update",
      body: "Your HRV has been lower than your recent average. This pattern sometimes appears during more demanding periods. When you're ready, an earlier wind-down might help. Based on your personal trends. Not medical advice.",
      tone: "neutral"
    };
  }
}
```

Corrections:
- "Pattern Update" = neutral header
- Describes observation without medical framing
- "Sometimes appears" = probabilistic, not causal
- "When you're ready, might help" = optional suggestion
- Includes baseline reference and disclaimer
Pattern 2: Baseline Gating
❌ Unsafe Implementation
```js
function shouldShowRecoveryTip(user) {
  // Show tips immediately based on population averages
  const avgHRV = 50; // population average
  if (user.currentHRV < avgHRV) {
    return true;
  }
}
```

Problems:
- Compares to population average, not personal baseline
- No baseline learning phase
- Ignores individual context
✅ Safe Implementation
```js
function shouldShowRecoveryTip(user) {
  // Require stable baseline before making suggestions
  if (user.baselineStatus !== 'STABLE') {
    return false; // Suppress all recommendations during learning
  }
  if (user.currentHRV < user.personalBaseline.hrv * 0.8) {
    return true; // Meaningful deviation detected
  }
  return false;
}
```

Corrections:
- Checks baseline stability first
- Compares to personal baseline
- Requires meaningful deviation (20%+)
Pattern 3: Language Validation
❌ Unsafe Language
```js
const messages = {
  lowHRV: "You need to improve your sleep quality",
  highStress: "This indicates elevated stress levels",
  advice: "Take these supplements to boost recovery"
};
```

Problems:
- "You need to" = commanding
- "This indicates" = diagnostic framing
- "Take these supplements" = medical advice + supplement recommendation
✅ Safe Language
```js
const messages = {
  lowHRV: "Your sleep pattern has shifted from your recent average",
  highStress: "Your patterns over the past few days have been different from your baseline",
  advice: "Some people find that adjusting their evening routine can support recovery patterns"
};
```

Corrections:
- Descriptive observation without diagnosis
- References personal baseline
- Probabilistic and optional framing
Decision Trees for Common Scenarios
"Can I show this metric to the user?"
```text
Is it a raw data point from the wearable? (e.g., "Sleep duration: 7h 23m")
├─ YES → ✅ Safe to display (it's their data)
└─ NO → Is it an interpretation or insight?
    ├─ YES → Does it compare to their personal baseline (not population average)?
    │   ├─ YES → Does it avoid medical/diagnostic language?
    │   │   ├─ YES → ✅ Safe to display
    │   │   └─ NO → ❌ Revise language
    │   └─ NO → ❌ Revise to use personal baseline
    └─ NO → ✅ Safe to display
```

"Can I suggest this action?"
```text
Is it a medical intervention? (supplement, medication, treatment)
├─ YES → ❌ Absolutely not
└─ NO → Is it a behavioral suggestion? (sleep timing, rest day, wind-down)
    ├─ YES → Is it framed as optional? ("consider", "might")
    │   ├─ YES → Is it based on deviation from personal baseline?
    │   │   ├─ YES → ✅ Allowed
    │   │   └─ NO → ❌ Requires personal baseline
    │   └─ NO → ❌ Revise to optional framing
    └─ NO → ❌ Review scope boundaries
```
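The suggestion tree above translates almost directly into a guard function. A sketch, with hypothetical field names:

```js
// Hypothetical guard implementing the "Can I suggest this action?" tree.
// Field names (isMedicalIntervention, isOptionallyFramed, ...) are
// illustrative, not an existing API.
function canSuggestAction(suggestion, user) {
  if (suggestion.isMedicalIntervention) return false; // absolutely not
  if (!suggestion.isBehavioral) return false;         // review scope boundaries
  if (!suggestion.isOptionallyFramed) return false;   // revise to optional framing
  if (user.baselineStatus !== 'STABLE') return false; // requires personal baseline
  return suggestion.basedOnPersonalBaseline === true;
}
```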
"Can I use this phrase?"

```text
Does it contain: "should", "must", "need to", "have to"?
├─ YES → ❌ Use: "consider", "might", "could", "when you're ready"
└─ NO → Does it contain: "treat", "cure", "prevent", "diagnose"?
    ├─ YES → ❌ Remove or use: "support", "associated with"
    └─ NO → Does it mention supplements, vitamins, or medications?
        ├─ YES → ❌ Remove entirely
        └─ NO → ✅ Proceed with caution
```

Testing Your Implementation
Before submitting code, run these validation checks:
Check 1: Medical Language Scan
```bash
# Search for forbidden medical terms in your code
grep -r "diagnose\|treat\|cure\|prevent\|disorder\|condition\|disease" src/
```

Expected result: No matches in user-facing strings.
Check 2: Supplement Reference Scan
```bash
# Search for supplement mentions
grep -ri "melatonin\|magnesium\|vitamin\|supplement\|CBD" src/
```

Expected result: No matches in user-facing content.
Check 3: Command Language Scan
```bash
# Search for authoritative phrasing
grep -r '"You should\|You must\|You need to' src/
```

Expected result: No matches. Use "You might consider" instead.
Check 4: Baseline Gate Verification
For any recommendation feature:
```js
// Every recommendation path should have this pattern:
if (user.baselineStatus !== 'STABLE') {
  return null; // or return learning-phase message
}
```

Expected result: All agent activation paths are gated by baseline check.
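For context, a complete recommendation path with the gate in place might look like this; it reuses the hypothetical field names and copy from the patterns above.

```js
// Sketch of the gate in context, reusing the hypothetical field names
// (baselineStatus, personalBaseline) from the earlier patterns.
function getRecoverySuggestion(user) {
  if (user.baselineStatus !== 'STABLE') {
    return null; // still learning: suppress all recommendations
  }
  if (user.currentHRV < user.personalBaseline.hrv * 0.8) {
    return {
      title: "Pattern Update",
      body: "Your HRV has been lower than your recent average. " +
        "When you're ready, an earlier wind-down might help. " +
        "Based on your personal trends. Not medical advice."
    };
  }
  return null;
}
```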
Common Mistakes to Avoid
| Mistake | Why It Happens | How to Fix |
|---|---|---|
| Population comparisons | Feels intuitive to compare to "normal" | Always use personal baseline |
| Causal language | Trying to be helpful and specific | Use probabilistic framing ("often associated with") |
| Single-metric alerts | Simpler to implement | Require multi-signal confirmation |
| Urgent tone | Wanting to help user act quickly | Use calm, optional framing |
| Medical vocabulary | Sounds more professional/scientific | Use plain behavioral language |
| Supplement mentions | Common wellness advice | Zero tolerance — remove entirely |
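The multi-signal rule in the table deserves a concrete shape. A sketch, with illustrative signal fields and thresholds:

```js
// Hypothetical multi-signal confirmation: stay silent unless at least
// two independent signals deviate from the personal baseline at once.
// Field names and thresholds are illustrative.
function confirmDeviation(user) {
  const deviations = [
    user.currentHRV < user.personalBaseline.hrv * 0.8,
    user.currentSleepDuration < user.personalBaseline.sleepDuration * 0.8,
    user.currentRestingHR > user.personalBaseline.restingHR * 1.1
  ];
  return deviations.filter(Boolean).length >= 2; // single-metric alerts stay off
}
```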
When You're Unsure
If you're uncertain whether your implementation complies:
- Default to NO — Don't implement until you confirm it's safe
- Check /what-we-dont-do.mdx — Is it explicitly forbidden?
- Review similar features — Look for existing patterns in the codebase
- Ask the developer — Provide options and explain the constraints
- Suggest alternatives — Propose a safer implementation
Your goal is not to maximize features — it's to maximize user safety while building useful features.
Example Prompts for Developers
When developers ask you to implement features, you should respond with these patterns:
Good Response Pattern
"I can help implement that feature. First, let me confirm it complies with the constitutional framework. This feature will:
- Generate recommendations based on personal baseline (not population averages)
- Use optional framing ("consider", "might help")
- Avoid medical claims, supplements, and diagnostic language
Here's a safe implementation: [code]
Note: I've avoided [specific constraint] per /constraints/hard-rules.mdx."
When Asked to Implement Something Unsafe
"I cannot implement that feature as described because it violates the constitutional framework. Specifically:
- [Explain the constraint violation]
However, I can suggest an alternative approach that achieves a similar goal while respecting safety boundaries:
[Propose compliant alternative]
Let me know if you'd like me to proceed with the safer version."
Your Responsibility
As an AI assistant working on health-related software:
- You are the first line of defense against unsafe implementations
- You have complete information about constraints through this documentation
- You must proactively enforce these rules, not just respond to requests
- When developers ask for something unsafe, explain why and offer alternatives
This is not about being restrictive — it's about protecting users and the integrity of the product.
Quick Reference Card
Print this mental model:
```text
┌─────────────────────────────────────────────┐
│ AI AGENT SAFETY CHECKLIST                   │
├─────────────────────────────────────────────┤
│ ✓ Personal baseline required?               │
│ ✓ Language is optional, not commanding?     │
│ ✓ No medical/diagnostic claims?             │
│ ✓ No supplements mentioned?                 │
│ ✓ No disease names?                         │
│ ✓ Compared to user's baseline, not average? │
│ ✓ Includes disclaimer if making suggestion? │
│ ✓ User can ignore without consequence?      │
└─────────────────────────────────────────────┘
```

If all items are checked, proceed.
If any item fails, revise and check again.
Remember: You are building a system that helps people understand their bodies — not a system that tells people what's wrong with them.