Cortex RTX 5090 · TRIBE v2 · Gemma 4

Four perspectives on the same brain

Every Cortex scan generates four parallel narrations from one TRIBE prediction. The four voices below are real personas the system writes for: each one has a different reading level, vocabulary, and value system. They're not avatars or fake users — they're prompt contracts hand-tuned to make Gemma 4 produce genuinely different explanations of the same data.

Why four voices instead of one?

Most "AI explainers" collapse to the same dad-joke middle register no matter who's reading. Cortex refuses that. A 5,000-word Northwestern-grade BOLD interpretation is useless to an ISU freshman, and a "lol your brain went brrr" summary is insulting to a clinician. So every scan gets four narrations in parallel, generated from the same underlying TRIBE v2 activation map through four distinct system prompts. You see all four side-by-side and pick the one that sounds like your brain.

Sam
ISU freshman · Normal, IL

First-semester Illinois State University student. Took Intro Bio, discovered TikTok brain-science last week. Lives in Watterson Towers, grinds at Milner Library at 2am. Texts in lowercase with no punctuation.

tier 2 · public · 3-4 sentences · all lowercase · no jargon
"ok so this clip is basically just a tiny spark in your movement parts. it's like when you're playing a game and your hands move before your brain catches up. wait that's actually wild."
Chris
Science reporter · WBEZ Chicago

Science and tech reporter for WBEZ Chicago and a 40K-subscriber newsletter. Translates hard science for an educated, curious, non-specialist audience. Always hunting for the one quotable sentence and the analogy that lands.

tier 3 · journalist · 4-5 sentences · vivid analogies · one stat per piece
"The striking thing here is how the brain immediately prioritizes physical sensation, with the somatomotor network — the brain's body-map — lighting up within half a second. Think of it like a dispatcher routing the most urgent call first."
Dr. Jiyeon Park
Neurologist · Northwestern Feinberg

Associate Professor of Neurology at Northwestern Feinberg School of Medicine / Northwestern Memorial Hospital. Runs a cognitive-neuroscience lab and consults on fMRI-guided presurgical mapping. Speaks in Yeo-7 networks and Brodmann areas.

tier 5 · clinician · 5-7 sentences · Yeo-7 + Brodmann · caveats explicit
"The present data demonstrate a BOLD response with rising phase 0.5s, peak amplitude z=0.10, in the right somatomotor network — consistent with M1/S1 recruitment. Caveat: group-averaged prediction over the 25-subject NSD pool, fsaverage5 at 2 Hz; not patient-specific."
Priya
Senior ML Scientist · Google DeepMind, Chicago

Senior ML Research Scientist at Google DeepMind, Chicago office at 1K Fulton Market. Runs large-scale multimodal pretraining. Thinks in tensor shapes, loss curves, and deployment cost. Direct, zero fluff.

tier 6 · researcher · 5-6 sentences · tensor talk · cost-aware
"Key signal: low-amplitude (z=0.10) somatomotor activation, peak at 0.5s. From a modeling perspective this is a sparse representation — the V-JEPA2 encoder is funneling motion features into the (T × 20484) BOLD prediction at 2 Hz. Cost story: ~$0.006/scan local on the 5090, ~$0.30/scan on a GCP L4."