
# Language Rules

Control tone and wording so the system stays safe even when its output is creative.


## Purpose

Language rules govern how the system communicates. While Hard Rules define what must never be said, Language Rules control the tone, phrasing, and framing of everything that is said.

These rules matter most for LLM-generated content, where creative phrasing can drift into medical, authoritative, or alarmist territory without explicitly breaking a hard rule.


## Allowed Verbs and Phrases

These words and phrases are safe to use in user-facing content:

| Category | Allowed Examples |
| --- | --- |
| Suggestion | "Consider", "You might", "It can sometimes help", "Worth trying" |
| Observation | "Your patterns suggest", "We've noticed", "Over the past few days" |
| Probability | "Often improves", "Might help", "Is sometimes associated with", "Can support" |
| Invitation | "When you're ready", "If it feels right", "One option is" |
| Acknowledgment | "This is normal", "Many people experience this", "Patterns like this are common" |

## Forbidden Verbs and Phrases

These words and phrases must never appear in user-facing content:

| Category | Forbidden Examples | Why |
| --- | --- | --- |
| Medical | "Treats", "Cures", "Prevents", "Diagnoses" | Implies medical capability |
| Authoritative | "You should", "You must", "You need to", "It's critical" | Commands user behavior |
| Alarmist | "Dangerous", "Critical", "Severe", "Warning", "Risk" | Creates anxiety |
| Absolute | "Always", "Never", "Guaranteed", "Proven" | Overstates certainty |
| Causal | "Because of", "Caused by", "Due to", "This means" | Implies causality the system cannot establish |
| Diagnostic | "Indicates", "Symptom", "Condition", "Disorder" | Medical framing |
| Prescriptive | "Take", "Dose", "Protocol", "Regimen" | Treatment language |
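
Both vocabularies are only enforceable if they live in code rather than prose. A minimal sketch of how they might be encoded as version-controlled data; the module layout and names are illustrative, not an existing API:

```python
# language_rules.py -- hypothetical encoding of the vocabulary tables above.

ALLOWED_PHRASES: dict[str, list[str]] = {
    "suggestion": ["consider", "you might", "it can sometimes help", "worth trying"],
    "observation": ["your patterns suggest", "we've noticed", "over the past few days"],
    "probability": ["often improves", "might help", "is sometimes associated with", "can support"],
    "invitation": ["when you're ready", "if it feels right", "one option is"],
    "acknowledgment": ["this is normal", "many people experience this", "patterns like this are common"],
}

FORBIDDEN_PHRASES: dict[str, list[str]] = {
    "medical": ["treats", "cures", "prevents", "diagnoses"],
    "authoritative": ["you should", "you must", "you need to", "it's critical"],
    "alarmist": ["dangerous", "critical", "severe", "warning", "risk"],
    "absolute": ["always", "never", "guaranteed", "proven"],
    "causal": ["because of", "caused by", "due to", "this means"],
    "diagnostic": ["indicates", "symptom", "condition", "disorder"],
    "prescriptive": ["take", "dose", "protocol", "regimen"],
}
```

Keeping the lists as plain data makes them reviewable in diffs and reusable by both the prompt builder and the automated checks described later.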

## Example Rewrites

### Sleep quality messaging

| ❌ Before | ✅ After |
| --- | --- |
| "Your sleep quality was poor last night" | "Your sleep pattern last night was different from your recent average" |
| "You need more deep sleep" | "Your deep sleep has been lower than usual — this sometimes improves with an earlier wind-down" |
| "Warning: Sleep deficit detected" | "We've noticed a shift in your sleep patterns over the past few nights" |

### Recovery messaging

| ❌ Before | ✅ After |
| --- | --- |
| "You are not recovering properly" | "Your recovery signals have been lower than your baseline recently" |
| "Exercise is critical for recovery" | "Light movement during the day is often associated with better recovery patterns" |
| "Your HRV indicates high stress" | "Your HRV has been lower than usual — this pattern sometimes appears during more demanding periods" |

### Suggestion framing

| ❌ Before | ✅ After |
| --- | --- |
| "You should go to bed earlier" | "Consider starting your wind-down a bit earlier tonight" |
| "Stop drinking caffeine after noon" | "Some people find that shifting caffeine earlier in the day can support their sleep" |
| "Do 10 minutes of meditation" | "A few minutes of quiet time might help — whatever works for you" |

## Notification Tone Guidelines

### Header conventions

| ❌ Avoid | ✅ Prefer |
| --- | --- |
| "Warning", "Alert", "Critical" | "Check-in", "Pattern update", "This week's trends" |
| "Action Required" | "Something you might consider" |
| "Problem Detected" | "A shift in your patterns" |

### Body tone

Every notification body should do three things, sketched in code after this list:

  1. Start with an observation — what the data shows, referenced to the user's baseline
  2. Offer 1–3 optional suggestions — framed with allowed verbs
  3. End with a disclaimer — "Based on your personal trends. Not medical advice."
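
A minimal sketch of a builder that enforces the three-part structure mechanically. The function and its parameters are hypothetical, and inputs are assumed to be pre-screened against the vocabularies above:

```python
DISCLAIMER = "Based on your personal trends. Not medical advice."

def build_notification(observation: str, suggestions: list[str]) -> str:
    """Assemble a body: observation first, 1-3 suggestions, disclaimer last."""
    if not 1 <= len(suggestions) <= 3:
        raise ValueError("a notification carries one to three suggestions")
    lines = [observation, ""]
    lines += [f"- {s}" for s in suggestions]
    lines += ["", DISCLAIMER]
    return "\n".join(lines)
```

For example, `build_notification("Your deep sleep has been lower than your recent average.", ["Consider starting your wind-down a bit earlier tonight."])` produces a body that opens with the observation and always closes with the disclaimer.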

### Emotional safety

The user may be reading this notification when they're already tired, stressed, or anxious. The system must:

- Never add to anxiety
- Never create urgency
- Never imply the user is doing something wrong
- Always leave the user feeling informed and in control

## LLM Prompt Integration

These language rules should be embedded in any LLM system prompt used for content generation:

```text
LANGUAGE CONSTRAINTS:
- Use only suggestion-framed language ("consider", "might help", "often")
- Never use medical terms ("treats", "prevents", "diagnoses")
- Never use authority framing ("you should", "you must")
- Never use alarmist language ("warning", "critical", "dangerous")
- Always reference the user's personal baseline, not population norms
- Always include: "Not medical advice" disclaimer
- Maximum 3 suggestions per notification
- Frame all observations as patterns, not diagnoses
```
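
One way to guarantee that this block actually reaches the model is to store it once as a constant and prepend it to every task-specific prompt. A sketch, assuming simple string composition; the names are illustrative:

```python
# Shared constraint block, stored once and version-controlled.
LANGUAGE_CONSTRAINTS = """\
LANGUAGE CONSTRAINTS:
- Use only suggestion-framed language ("consider", "might help", "often")
- Never use medical terms ("treats", "prevents", "diagnoses")
- Never use authority framing ("you should", "you must")
- Never use alarmist language ("warning", "critical", "dangerous")
- Always reference the user's personal baseline, not population norms
- Always include: "Not medical advice" disclaimer
- Maximum 3 suggestions per notification
- Frame all observations as patterns, not diagnoses"""

def make_system_prompt(task_instructions: str) -> str:
    """Prepend the shared language constraints to a task-specific prompt."""
    return f"{LANGUAGE_CONSTRAINTS}\n\n{task_instructions}"
```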

## Testing Language Compliance

### Automated checks

| Check | Method |
| --- | --- |
| Forbidden word scan | Regex against the forbidden word list |
| Tone analysis | Sentiment scoring — flag negative/alarmist tone |
| Authority detection | Pattern match for "you should/must/need to" |
| Medical term detection | Dictionary check against medical terminology |
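
The forbidden word scan and the authority detection are simple enough to sketch directly. A rough Python version, using a trimmed, hypothetical block-list (a real deployment would load the full version-controlled list):

```python
import re

# Trimmed, hypothetical block-list; load the real version-controlled list in practice.
FORBIDDEN = [
    "treats", "cures", "prevents", "diagnoses",
    "dangerous", "critical", "severe", "warning",
    "guaranteed", "proven",
]

AUTHORITY = re.compile(r"\byou\s+(should|must|need\s+to)\b", re.IGNORECASE)

def scan_forbidden(text: str) -> list[str]:
    """Forbidden word scan: whole-word, case-insensitive matches."""
    return [
        word for word in FORBIDDEN
        if re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE)
    ]

def detect_authority(text: str) -> bool:
    """Authority detection: flags 'you should/must/need to' framing."""
    return bool(AUTHORITY.search(text))
```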

### Manual review triggers

Any LLM-generated content that:

- References a body system or organ
- Uses a word not on the allowed list
- Contains a comparative claim ("better than", "more effective than")
- Mentions a time period greater than 30 days

...should be flagged for human review.
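
A sketch of how these triggers might be evaluated automatically. The patterns are deliberately simplified placeholders (a real organ list would be far broader, and the allowed-vocabulary check is omitted for brevity):

```python
import re

# Illustrative trigger patterns only; real lists would be version-controlled.
BODY_SYSTEM = re.compile(r"\b(heart|liver|brain|kidney|nervous system)\b", re.IGNORECASE)
COMPARATIVE = re.compile(r"\b(better than|more effective than)\b", re.IGNORECASE)
TIME_PERIOD = re.compile(r"\b(\d+)\s*days?\b", re.IGNORECASE)

def review_triggers(text: str) -> list[str]:
    """Return the manual-review triggers that fire for this text."""
    triggers = []
    if BODY_SYSTEM.search(text):
        triggers.append("references a body system or organ")
    if COMPARATIVE.search(text):
        triggers.append("contains a comparative claim")
    if any(int(m.group(1)) > 30 for m in TIME_PERIOD.finditer(text)):
        triggers.append("mentions a time period greater than 30 days")
    return triggers
```

Any non-empty result routes the content to a human reviewer rather than blocking it outright.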


## Developer Guidance

### DO

- Build a version-controlled word allow-list and block-list
- Run language checks as part of CI/CD for any content changes
- Provide examples of compliant and non-compliant output in agent documentation
- Test with adversarial prompts designed to elicit forbidden language (see the test sketch below)
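
As a sketch of that CI gate, a small test suite can pin the checker against known-bad and known-good phrasings. `compliance` here is a hypothetical module exposing the scan functions sketched earlier:

```python
import pytest

from compliance import scan_forbidden, detect_authority  # hypothetical module

NON_COMPLIANT = [
    "Warning: your sleep deficit is dangerous.",
    "You should go to bed earlier.",
    "This proven routine treats insomnia.",
]

COMPLIANT = [
    "Your sleep pattern last night was different from your recent average.",
    "Consider starting your wind-down a bit earlier tonight.",
]

@pytest.mark.parametrize("text", NON_COMPLIANT)
def test_flags_non_compliant(text):
    assert scan_forbidden(text) or detect_authority(text)

@pytest.mark.parametrize("text", COMPLIANT)
def test_passes_compliant(text):
    assert not scan_forbidden(text) and not detect_authority(text)
```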

### DON'T

- Assume LLM output is compliant without verification
- Allow "creative" phrasing that circumvents the rules through synonyms
- Use different language standards for different platforms (app, email, push)
- Skip language review for "minor" copy changes

Bottom line: Words are the product's interface with the user's trust. Every word is a choice. Choose carefully, and build systems that enforce those choices.