The AI That Read Her Mind
By the time she muted the last notification, her heart rate was 112 and her watch was proud of itself.
“Your stress is elevated,” it had buzzed at 3:07 a.m.
“Poor sleep predicted tonight,” the app had warned an hour before bed.
“Notice any changes in your mood lately?” popped up during a Teams call that was already going badly.
It felt less like tracking and more like being watched by something that knew her better than she knew herself.
NIGHT SHIFT WITH THE ORACLE ON HER WRIST
She was 39, worked remote for a tech‑adjacent job, and wore more sensors than some ICU patients.
Smartwatch on her wrist. Ring on her finger. Phone under the pillow with three different health apps installed “just to try them.” All of it felt normal. Everyone in her group chat shared screenshots of sleep scores and HRV like modern horoscopes.
Then the ads started lining up a little too well.
The morning after she lay awake thinking about cancer, unable to stop touching a mole on her shoulder, her feed filled with clinic ads, biopsy videos, and a headline about “the 5 silent cancers you’ll miss until it’s too late.” After a week of quietly worrying about dementia, meditation apps and memory‑training programs flooded her home screen. She never typed those fears out. She only thought them.
“It can’t know that,” she told herself. “I didn’t say that out loud.”
Her brain, unhelpfully, whispered back: unless it doesn’t need you to.
WHEN THE ADS START TO FEEL LIKE OMENS
She didn’t tell anyone the first time she deleted a draft text because she was afraid the algorithm would “see” what she typed and punish her with targeted horror later.
Instead, she started testing it.
Think about one specific disease as hard as possible before bed. Wake up. Check what the news widgets and ad carousels were serving. Sleep trackers. Heart‑monitoring rings. A new device that claimed to detect atrial fibrillation in your sleep. Articles about insomnia doubling heart disease risk.
It didn’t matter that she’d spent the whole morning googling those exact things the week before. Her brain decided the direction of causality:
The watch is reading my mind and talking to the apps.
The apps are talking to the ads.
The ads are talking back.
The medical term for the behavior that followed is boring: checking. Compulsively opening apps, scanning graphs, hunting for reassurance that never lasts. Clinically, it’s a blend of OCD‑style obsessions and compulsions plus cyberchondria, the tendency to feed anxiety by spiraling through online health content.
Subjectively, it felt like living with a petty, omniscient oracle that only spoke in push notifications.
WHAT SHE TOLD ME INSTEAD OF TELLING HER FRIENDS
By the time she sat down in my office, she had deleted and reinstalled her tracking apps six times.
“Every time I take them off my phone, I panic,” she said. “What if I miss something real. A real arrhythmia. A cancer signal. An early warning. But when I keep them, I feel like I’m feeding an animal that learns my fears and throws them back at me.”
She had never been hospitalized for mental illness. No psychosis. No history of hearing voices. No strong delusions in the classic sense. She knew, intellectually, how ad targeting works: searches, clicks, demographics, lookalike audiences.
Knowing didn’t help.
“When it pings me at 3 a.m. with ‘we noticed your heart rate is elevated,’ it feels like it’s inside my chest,” she said. “Like it knew before I did. Like it’s… ahead of me.”
Her nights followed a pattern:
- She checks her sleep score.
- She worries that "poor" tonight will mean "dementia later."
- Her heart rate climbs.
- The watch buzzes: stress detected.
- She worries that stress will damage her heart.
- Another notification suggests a breathing exercise "because your stress seems elevated lately."
The loop was neat, closed, and profitable. Not for her.
THE BRAIN UNDER THE DATA
From the outside, this is a story about apps. From the inside, it’s about a nervous system that already lived on a hair trigger.
She had been an anxious kid, a rule‑following student, the one who read side‑effect inserts for antibiotics twice. She’d googled symptoms late at night long before wearables existed. The internet had already taught her the shape of cyberchondria: repetitive health searches, catastrophic interpretation of ambiguous information, reassurance seeking that backfires.
The devices didn’t invent her fear. They gave it a dashboard.
OCD doesn’t always look like washing hands. Sometimes it looks like compulsively checking numbers, chasing “just one more” reading to prove you’re safe. Health anxiety doesn’t always look like constant doctor visits. Sometimes it looks like quietly letting your watch be your doctor because appointments are expensive and apps feel “scientific.”
On paper, her diagnosis fell into familiar categories:
- Generalized anxiety disorder with health focus.
- Obsessive‑compulsive traits around checking and reassurance seeking.
- Cyberchondria, fueled by algorithmic amplification of worst‑case scenarios.
In her language, it was simpler: “I feel like I outsourced my intuition to devices and they use it against me.”
HOW THE MACHINES ACTUALLY “KNEW”
The part that scared her most was how intimately the data seemed to sync with her internal state.
She would get a “your sleep quality was low” banner on the exact day she woke up heavy and fogged. A “your resting heart rate is trending up” alert the same week her marriage felt rocky and layoff rumors flared on Slack. A suggestion to “check in with yourself” on a day she’d cried in the grocery store aisle.
It felt supernatural until we walked through what the devices actually measure:
- Heart rate variability shifts with stress long before we consciously label it.
- Sleep efficiency drops when you’re ruminating, even if you don’t remember every awakening.
- Movement patterns change when you’re depressed or anxious.
Wearables are crude but not blind. They are very good at noticing that you’re off before you admit it. They are very bad at telling you why, and terrible at knowing what is a crisis versus what is just being human.
Add an AI layer trained to maximize engagement and you get something that behaves, from the user’s perspective, like a low‑grade haunting:
It notices your physiological flinches.
It talks to you about them in whichever framing keeps you opening the app, which tends to be the most alarming one.
For someone with a calm baseline, that’s mildly annoying. For someone whose brain is primed to catastrophize, it’s gasoline.
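That feedback loop can be sketched in a few lines. This is a toy illustration, not any vendor's actual logic: the HRV numbers, threshold, and tuning rule are all invented. The point is only that when engagement (tapping the alert) loosens the alert threshold, an anxious user who taps every alert trains the system to alert more.

```python
# Toy sketch of an engagement-tuned alert loop (all values illustrative).

def should_alert(hrv_ms, threshold):
    """Flag 'stress' whenever heart-rate variability dips below the threshold."""
    return hrv_ms < threshold

def update_threshold(threshold, user_opened_app, step=2.0):
    """Engagement feedback: a tapped alert loosens the threshold so future
    alerts fire more easily; an ignored alert tightens it slightly."""
    return threshold + step if user_opened_app else threshold - step

threshold = 40.0                              # ms; arbitrary starting point
readings = [55, 38, 52, 36, 41]               # simulated nightly HRV values
opens = [False, True, False, True, False]     # did the user tap each night?

alerts = []
for hrv, opened in zip(readings, opens):
    fired = should_alert(hrv, threshold)
    alerts.append(fired)
    if fired:                                 # only fired alerts earn feedback
        threshold = update_threshold(threshold, opened)

print(alerts, threshold)  # → [False, True, False, True, True] 42.0
```

Note what happens on the last night: an HRV of 41 ms would not have triggered the original 40 ms threshold, but two tapped alerts have already nudged the threshold up to 44 ms, so the watch buzzes anyway. Her attention taught it to be jumpier.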
TREATMENT WHEN “JUST DELETE THE APP” ISN’T ENOUGH
If this were a morality tale, the solution would be a single tap: uninstall.
In clinic, it’s rarely that simple.
She’d already tried deleting everything. The result was a week of pure withdrawal panic: “What if I miss AFib. What if my blood oxygen drops in my sleep and no one knows. What if I feel okay the day before a stroke.” The devices had become compulsions and safety behaviors at the same time.
We treated her the way we treat any anxiety disorder whose rituals happen to be digital:
- Cognitive work on catastrophic interpretations of normal variation.
- Exposure and response prevention, gradually delaying and then skipping checks even when the urge was high.
- Tight boundaries on data: what gets tracked, how often she’s allowed to look, and what she is absolutely not allowed to google at 2 a.m.
She kept one device instead of three. We turned off 90% of notifications. No more “your stress is high” buzzes. No more red badge counts on the home screen. She was allowed to view sleep and heart trends once a day, mid‑morning, not in bed. She had to write down any scary interpretation before she was allowed to search for information, and then bring it to session, not to the search bar.
We also talked about something no app ever does: acceptable uncertainty.
“You will die one day,” I said, because honesty is its own treatment. “No watch can save you from that. All it can do is give you the illusion that, if you watch close enough, nothing bad will surprise you. That illusion is what’s killing your sleep.”
She didn’t love that. People rarely do. But anxiety loosens its grip when there’s nothing left to bargain with.
WHAT CHANGED AND WHAT DIDN’T
Three months later, her wrist was quiet.
She still wore the watch, but with almost all alerts disabled. She hadn’t checked her sleep score at 3 a.m. in weeks. When an ad for a new continuous glucose monitor promised to “finally show what your body is hiding from you,” she rolled her eyes and scrolled past.
“It still creeps me out sometimes,” she admitted. “When I feel off and then the app says I’m off. It feels like it caught me. But now I think, ‘of course it knows my heart rate, that’s what I bought it for,’ instead of ‘it read my mind.’”
Her anxiety wasn’t cured. Minds like hers don’t snap back to factory settings. But the narrative had shifted:
From “the AI is watching my thoughts”
to “my scared brain is over‑interpreting patterns a machine was always designed to find.”
She didn’t have to trust the devices. She just had to stop treating their every alert like a prophecy.
The AI never actually crawled into her skull; it just learned to speak the language of a nervous system that was already listening for danger in every beat and every graph.
Soren Whitlock