Consecrated · Sense
The body sings
what the voice
can't say.
Sense is the biometric layer beneath the GASPS scoring engine — heart, breath, sleep, and movement read as evidence. While Sound listens to your voice, Sense listens to the body that makes it. Together they become one tree.
01 / Why Sense
Sound watches the voice. Sense watches the singer. A pitch wobble that comes from fatigue is a different problem than one that comes from technique — and the answer lives in the body.
02 / Sources
Five sensing modalities. One conversion layer.
Every signal will be tagged with a reliability tier and a friction cost. The bridge layer will weight inputs accordingly — never trusting any single sensor more than what it has earned.
The vocal stems your church already records — fed straight to the algorithm.
Most worship rooms run a Dante-networked audio system. Every vocal microphone is already its own digital stream. We build a tiny client that subscribes to those streams and pipes the vocals — by singer, automatically — into the GASPS engine. No microphones to set up. No app to open during the set. The stems were always there; we just listen.
subscribe(channels: 1..16)
mux → singerByVoiceFP()
emit → /api/voice-analyze
No new gear.
If your room runs Dante (most do), the Sense client subscribes to your existing vocal channels. Zero microphones to add.
No app during the set.
Singers don't open phones during worship. The audio routing is silent — the result lands in their tree after the closing song.
Voice-fingerprint singer ID.
Each mic-channel gets matched to the singer that owns it, automatically. No manual tagging after the fact.
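As a sketch, the routing described above might look like this. Everything here is illustrative: `VocalFrame`, `matchSinger`, and the energy-based fingerprint are stand-in assumptions, and real channel subscription would go through Dante's own APIs, which this sketch does not model.

```typescript
// Sketch of the Sense client loop: take frames from subscribed vocal
// channels, match each to a singer by voice fingerprint, forward to
// the GASPS endpoint. All names are illustrative, not a Dante API.

type VocalFrame = { channel: number; samples: Float32Array };
type SingerMatch = { singerId: string; confidence: number };

// Hypothetical fingerprint matcher: compares a frame against enrolled
// singer profiles and returns the best match. The "fingerprint" here
// is a toy stand-in (mean absolute amplitude), not a real embedding.
function matchSinger(frame: VocalFrame, profiles: Map<string, number>): SingerMatch {
  const energy =
    frame.samples.reduce((sum, x) => sum + Math.abs(x), 0) / frame.samples.length;
  let best: SingerMatch = { singerId: "unknown", confidence: 0 };
  for (const [singerId, profileEnergy] of profiles) {
    const conf = 1 / (1 + Math.abs(energy - profileEnergy));
    if (conf > best.confidence) best = { singerId, confidence: conf };
  }
  return best;
}

// Route each channel's frame to the analysis endpoint, tagged with its
// matched singer. `emit` stands in for the POST to /api/voice-analyze.
function routeFrames(
  frames: VocalFrame[],
  profiles: Map<string, number>,
  emit: (singerId: string, frame: VocalFrame) => void,
): void {
  for (const frame of frames) {
    const { singerId } = matchSinger(frame, profiles);
    emit(singerId, frame);
  }
}
```

The point of the shape is the boundary: the client only matches and forwards; all scoring stays in the engine behind `/api/voice-analyze`.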
03 / Bridge
Body becomes evidence. Evidence becomes a tree.
Sense doesn't replace the GASPS scorer — it feeds it. Every biometric signal will be normalized into the same RawResponse shape Sound already consumes. Add a new sensor, and the rest of the system doesn't move.
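A sketch of one such conversion, using the RawResponse fields this section lists. The `SphereVec` internals, the `hrvToRawResponse` adapter, and its constants are assumptions for illustration, not the shipped normalization.

```typescript
// The shape every Sense signal is normalized into before it reaches
// the GASPS scorer. Field names follow the RawResponse spec; treating
// SphereVec as a five-family weight vector is an assumption.
type SphereVec = {
  grounding: number;
  alignment: number;
  support: number;
  placement: number;
  style: number;
};

type RawResponse = {
  nodeId: string;
  channel: string;    // SignalRef, e.g. "oura:hrv"
  value: number;
  confidence: number; // 0–1, scaled by the sensor's reliability tier
  sphereWeight: SphereVec;
};

// Illustrative adapter: turn a nightly HRV reading (ms) into a
// RawResponse. The baseline scaling is a toy scheme, not the real one.
function hrvToRawResponse(hrvMs: number, baselineMs: number): RawResponse {
  const value = Math.max(0, Math.min(1, hrvMs / (2 * baselineMs)));
  return {
    nodeId: "sense.hrv.nightly",
    channel: "oura:hrv",
    value,
    confidence: 0.8, // wearable tier: trusted, but not mentor-grade
    sphereWeight: { grounding: 0.1, alignment: 0.15, support: 0.6, placement: 0.05, style: 0.1 },
  };
}
```

Adding a new sensor then means writing one adapter like this; the scorer never changes.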
{
  nodeId: string,
  channel: SignalRef,
  value: number,
  confidence: 0–1,
  sphereWeight: SphereVec,
}
04 / Spheres
Each family has a body story.
Sense routes biometric signals into the GASPS families they best inform — weighted by what each source can be trusted to know.
Grounding
Apple Watch respiratory rate + phone sway + self-reported vocal warmth.
Alignment
Mostly voice-led. HR variability informs steadiness; phone sway informs stillness.
Support
Where Sense earns its keep. HRV trend + Oura readiness + voice energy slope.
Placement
Voice-led. HR contributes a small tension proxy; phone tracks jaw and head sway.
Style
Where biometrics meet intent. A high HR with steady pitch is mastery, not panic.
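The per-source routing above can be sketched as a weight table. The sources and family assignments follow the text; the specific numbers are placeholders, not the shipped weights.

```typescript
// Illustrative routing table: how much each sensing source is trusted
// to inform each GASPS family. Weights per source sum to 1; the
// numbers are placeholders, not production weighting.
type Family = "grounding" | "alignment" | "support" | "placement" | "style";

const ROUTING: Record<string, Record<Family, number>> = {
  "watch:respiratory": { grounding: 0.7, alignment: 0.1, support: 0.1, placement: 0.05, style: 0.05 },
  "oura:readiness":    { grounding: 0.1, alignment: 0.1, support: 0.7, placement: 0.05, style: 0.05 },
  "phone:sway":        { grounding: 0.35, alignment: 0.35, support: 0.0, placement: 0.3, style: 0.0 },
};

// Combine several weighted signals into one per-family contribution.
// Unknown sources contribute nothing: a sensor is trusted only as far
// as the table says it has earned.
function routeSignals(signals: { source: string; value: number }[]): Record<Family, number> {
  const out: Record<Family, number> = { grounding: 0, alignment: 0, support: 0, placement: 0, style: 0 };
  for (const { source, value } of signals) {
    const weights = ROUTING[source];
    if (!weights) continue;
    for (const family of Object.keys(out) as Family[]) {
      out[family] += value * weights[family];
    }
  }
  return out;
}
```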
05 / Roadmap
Sound first. Sense follows.
Sound · live
Daily check-in, voice check, mentor notes. The full GASPS engine running on three signals.
Available now
Sense · biometric layer
Apple Watch, Oura, phone sensors, Polar, self-report. Same scoring engine, deeper evidence.
Real-time
Tension markers on the mentor console mid-rehearsal. Singer's view stays calm.
06 / Privacy & Failure
Local-first. Quiet when sensors drop.
Singer-owned, by default.
Biometric data belongs to the singer. Mentors see what's shared with them. Delete means delete — the row leaves the database, not just the UI.
No PII in the analysis.
GASPS scores carry technical metrics — pitch, breath, HR. No lyrics, no audio recordings beyond the live processing window, no face data, no location beyond what the singer chose to share.
Graceful degradation.
Wearable drops — buffer 30 sec, backfill on reconnect. Network fails — queue offline, sync on reconnect. The session never fails outright.
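A sketch of that buffer-and-backfill behavior, assuming a simple in-memory queue. The 30-second window comes from the text above; `BackfillBuffer` and everything else here is illustrative.

```typescript
// Illustrative offline buffer: samples queue while the sensor or
// network is down and flush in order on reconnect. The 30-second
// retention window is from the spec; the rest is a toy sketch.
type Sample = { timestampMs: number; value: number };

class BackfillBuffer {
  private queue: Sample[] = [];
  constructor(private windowMs = 30_000) {}

  // Buffer a sample while disconnected; drop anything that has aged
  // past the retention window.
  push(sample: Sample): void {
    this.queue.push(sample);
    const cutoff = sample.timestampMs - this.windowMs;
    this.queue = this.queue.filter((s) => s.timestampMs >= cutoff);
  }

  // On reconnect: hand everything back in arrival order and clear,
  // so the session continues instead of failing outright.
  backfill(): Sample[] {
    const out = this.queue;
    this.queue = [];
    return out;
  }
}
```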
Get Notified
Be the first when Sense is ready.
Sound is shipping today. Sense follows. Join the waitlist for early access, build updates, and a first look at the biometric layer.