
(Photo by Andrea Piacquadio from Pexels)
Unlike today’s algorithms that guess songs based on previous enjoyment, this brain-reading approach knows what’s working now.
In A Nutshell
- Scientists developed earbuds that monitor brain activity in real time to build playlists that maximize musical chills and emotional pleasure
- In a study of 20 listeners, brain-tailored playlists triggered more than twice as many goosebumps as playlists designed to reduce pleasure
- The system learns your neural pleasure patterns while you listen, then predicts which songs from thousands of options will resonate most strongly
- People with music training, and those who describe themselves as emotionally responsive to music, benefited most from the brain-reading approach
Ever hit play on a song that gives you goosebumps? That spine-tingling sensation when the music hits just right? Scientists have figured out how to make those moments happen more often, and the technology fits inside your earbuds.
Researchers at Keio University developed a system that reads your brain activity while you listen to music, learns what makes your neurons light up with pleasure, then builds playlists designed to maximize those chills. In a study of 20 people, listeners experienced more than twice as many goosebumps with brain-tailored playlists as with playlists designed to reduce emotional response.
The earbuds look ordinary, but tiny electrodes inside monitor electrical signals from your brain while you listen. An algorithm watches for patterns that signal high pleasure versus just casual listening, then uses that information to predict which songs from thousands of options will hit hardest for you specifically.
Your Brain on Music
Music chills activate the same reward circuits triggered by food and sex, which explains why a perfect song can feel almost physical. Of course, what gives one person goosebumps might leave another cold. Musical taste is deeply personal, shaped by everything from childhood memories to cultural background.
Unlike today’s recommendation systems, which predict what you’ll like based on listening history and similar songs, this approach monitors whether your brain is actually experiencing pleasure in the moment. The difference is real-time neural feedback versus educated guesses about past preferences.
To train the system, participants first chose three songs that reliably gave them chills. Then they listened to those tracks plus three chosen by someone else while wearing the special earbuds. They pressed a button whenever goosebumps hit. The brain monitoring revealed clear differences between high-pleasure and low-pleasure listening states.
From there, the algorithm got to work. It analyzed over 7,000 songs from Japanese music charts, predicting which ones would trigger chills based on both acoustic features and real-time brain feedback. Some participants got playlists that updated after every song based on their neural responses. Others got playlists based only on audio characteristics.
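The paper doesn't publish its selection code, but the core idea of scoring thousands of candidate songs against a learned listener profile can be pictured as a simple ranking step. In this sketch, the acoustic features, the per-listener weights, and the playlist size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acoustic features for each candidate song (e.g. tempo,
# brightness, dynamics); a real system would extract these from audio.
n_songs, n_features = 7000, 3
song_features = rng.normal(size=(n_songs, n_features))

# Hypothetical per-listener weights, standing in for whatever the
# calibration phase learned from the EEG pleasure signal.
listener_weights = np.array([0.8, -0.2, 0.5])

# Score every candidate, then keep the ten highest-scoring songs
# as the next playlist.
scores = song_features @ listener_weights
playlist = np.argsort(scores)[::-1][:10]
```

In the brain-updated condition described above, the equivalent of `listener_weights` would be revised after each song, so the ranking shifts as the session goes on.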

The Results
People reported more than twice as many chills with brain-updated playlists as with playlists designed to minimize pleasure. They also rated those experiences as more exciting and absorbing. Brain measurements backed up what people reported feeling.
Musical background mattered too. People with formal music training got more out of the brain-tailored approach, as did those who described themselves as emotionally responsive to music.
The brain-reading earbuds have practical advantages over previous attempts to boost musical pleasure. Some researchers have used brain stimulation techniques or drugs that affect dopamine levels. Both require medical oversight. This system just needs earbuds.
Real-Time Personalization
Current streaming services have gotten sophisticated at predicting what you might like based on years of accumulated data about your listening habits. But they’re still making educated guesses about what you enjoyed in the past. The brain-reading approach knows what’s working for you right now.
After each song, the system retrains itself using the pleasure signals it detected. If your brain responded strongly to a particular tempo or melody pattern, the algorithm adjusts its predictions accordingly. It’s personalization that adapts moment to moment.
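That moment-to-moment adaptation amounts to an online-learning loop. Here is a minimal sketch, assuming a linear listener model and a simulated pleasure readout; both are invented stand-ins, and the study's actual EEG decoder is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-listener weights over three hypothetical acoustic features,
# nudged after every song by the decoded pleasure signal (an LMS update).
weights = np.zeros(3)
lr = 0.05  # learning rate

for _ in range(200):  # one update per song played
    song = rng.normal(size=3)        # features of the song just heard
    pleasure = 0.7 * song[0]         # stand-in for the EEG pleasure readout
    prediction = weights @ song      # what the model expected
    weights += lr * (pleasure - prediction) * song  # move toward what worked
```

Because the simulated pleasure depends only on the first feature, the weights converge toward roughly `[0.7, 0, 0]`: the model has, in effect, learned which property of a song this particular listener responds to.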
There are limitations. The two-electrode setup can’t pinpoint exactly which brain regions are generating the signals. The study relied on people pressing buttons to report chills rather than measuring actual goosebumps with sensors. And brain-measured pleasure levels dropped as listening sessions stretched to 40 minutes, likely from fatigue.
Future versions might incorporate heart rate, skin response, or other physiological signals alongside brain activity. More sensors could mean better predictions, though that would complicate the setup.
The study, published in iScience, demonstrates that your neural responses contain information that standard recommendation algorithms miss. What makes you get chills isn’t just about what songs you’ve liked before or what sounds similar to your favorites. It’s about catching your brain in that moment when music and emotion align perfectly.
The technology isn’t available commercially yet. All the researchers work for VIE, Inc., a company developing music neurotechnology, so consumer products may eventually follow. Until then, you’ll have to rely on the old method: hitting shuffle and hoping for magic.
Paper Notes
Study Limitations
- The research did not incorporate objective physiological measurements of chills such as piloerection, heart rate monitoring, or pupil dilation; chill reports were based entirely on self-reported button presses rather than physiological sensors.
- The in-ear EEG system used only two electrodes, limiting spatial resolution and preventing source localization to specific brain regions like the nucleus accumbens. The study could not determine whether the EEG signals originated from reward-related areas or other neural sources.
- Decoded pleasure levels showed a decreasing trend across songs within playlists, possibly reflecting listener fatigue during the 40-minute experimental session.
- The system requires participants to remain still to minimize movement artifacts in the EEG recording.
- Musical features like tempo, key changes, and vocal presence showed limited influence on chill responses, though the study did not systematically control for all acoustic variables.
- The sample consisted of unequal numbers of males and females (7 males, 13 females in the final analysis), preventing analysis of sex-related effects.
Funding and Disclosures
This work was supported by JST COI-NEXT Grant No. JPMJPF2203 to Shinya Fujii and JSPS KAKENHI Grant No. 24KJ1930 to Sotaro Kondoh. The funders had no role in study design, data collection and analysis, decision to publish, or manuscript preparation. All authors associated with this research are employed by VIE, Inc., a company that develops music-related neurotechnology. The authors state this does not alter their adherence to the journal’s policy of sharing data and materials.
Publication Details
Authors: Sotaro Kondoh, Takahide Etani, Yuna Sakakibara, Yasushi Naruse, Yasuhiko Imamura, Takuya Ibaraki, Shinya Fujii | Affiliations: Graduate School of Media and Governance and Faculty of Environment and Information Studies at Keio University (Fujisawa, Japan), Keio University Research Center for Music Science, Keio University Hospital, Keio Research Institute at SFC, and VIE, Inc. (Kamakura, Japan) | Journal: iScience, Volume 29, Article 114508 | Publication Date: January 16, 2026 | DOI: https://doi.org/10.1016/j.isci.2025.114508 | Paper Title: “A chill brain-music interface for enhancing music chills with personalized playlists” | License: Open access article under the CC BY license | Correspondence: [email protected]