Beat induction: how the newborn brain finds the pulse
A baby less than a week old, asleep in an incubator, exhibits brain activity that registers a missing downbeat as a violation of expectation. Here is what the beat-induction research has established, why it matters, and what it implies for how rhythm should be taught to adults.
In 2009, a team led by Henkjan Honing at the University of Amsterdam published one of the cleaner findings in the cognitive science of music. They placed sleeping newborn infants — two to three days old — in EEG-monitored cribs, played them simple drumming patterns, and watched the brain signals.
The infants exhibited a specific neural signature, the mismatch negativity, when the downbeat was omitted from an otherwise regular pattern. The brain registered the missing beat as a violation of expectation — even though the missing event was silence, even though the infant had no formal exposure to that rhythm, and even though the infant was asleep [1].
The conclusion is striking: the ability to extract a regular pulse from an auditory stream — what researchers call beat induction — is innate. It is not learned through cultural exposure, it is not the by-product of motor practice, it is not contingent on conscious attention. The brain comes pre-wired to find the pulse.
This single finding has unusually large implications for how we should think about teaching rhythm. This post lays out what the research has established and what it suggests.
The Honing line of evidence
The 2009 PNAS paper opened the line of evidence, and over the fifteen years since, Honing’s group and other labs have built out the empirical case. It was followed by:
- A 2016 Frontiers in Neuroscience paper measuring neural entrainment to beat and meter in older infants (5–6 months), finding sensitivity that varies with the infant’s musical environment but is present from very early in life [2].
- A 2023 Journal of Neuroscience paper by Edalati and colleagues using high-resolution EEG on premature infants around 32 weeks gestational age (an age at which a full-term fetus would still be in the womb), finding that the premature brain already entrains to rhythmic auditory streams at multiple levels of the metrical hierarchy. The brain enhances neural responses at both beat-level and meter-level frequencies, and the responses are phase-aligned to the auditory envelope [3].
- A 2023 Cognition paper by Háden, Honing and colleagues showing that the newborn beat-detection result cannot be explained by simple statistical learning of transition probabilities — the brain is genuinely doing meter induction, not just pattern repetition tracking [4].
These findings converge on a strong claim: the neural machinery for hearing a beat is one of the earliest-developing features of human auditory cognition, in place before birth and operational in newborns. It is the substrate on which all subsequent rhythmic learning is built.
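The frequency-tagging logic behind these entrainment measures is simple enough to sketch. In the toy example below, the synthetic "response" signal, the parameters, and the analysis are all invented for illustration (this is not any cited paper's pipeline); the point is that a response that follows the meter shows spectral peaks at both the beat rate and the bar rate:

```python
# Toy frequency-tagging analysis: a response that tracks the meter
# carries spectral energy at both the beat rate and the bar (meter)
# rate. Synthetic signal and parameters, invented for illustration.
import cmath
import math
import random

fs = 100.0                       # sample rate, Hz
n = 2000                         # 20 s of signal
beat_hz, meter_hz = 2.0, 0.5     # 120 BPM beat, grouped into 4-beat bars

random.seed(0)
signal = [math.sin(2 * math.pi * beat_hz * k / fs)
          + 0.6 * math.sin(2 * math.pi * meter_hz * k / fs)
          + 0.3 * random.gauss(0, 1)                  # measurement noise
          for k in range(n)]

def amplitude_at(hz):
    """One DFT bin: correlate the signal with a complex exponential at hz."""
    z = sum(s * cmath.exp(-2j * cmath.pi * hz * k / fs)
            for k, s in enumerate(signal))
    return abs(z) / n

# Peaks stand out at the beat and meter rates, but not at an unrelated rate:
print(amplitude_at(beat_hz), amplitude_at(meter_hz), amplitude_at(1.3))
```

Entrainment analyses report exactly this contrast: elevated spectral amplitude at beat- and meter-related frequencies relative to neighboring, metrically unrelated frequencies.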
What beat induction actually is
Beat induction is the perceptual extraction of a regular underlying pulse from a sound stream that may itself contain irregular events. It involves at least four sub-skills, all of which appear to be innate:
- Periodicity detection — recognizing that some events occur at regular intervals.
- Pulse selection — choosing one periodicity (out of many possible) as the perceptual reference.
- Predictive entrainment — generating expectations for when the next pulse should occur.
- Hierarchical metric inference — recognizing that pulses themselves group into longer cycles (downbeats, bars).
The newborn data demonstrates all four. The brain is detecting periodicity (the regular drum hits), selecting a pulse, predicting the next event (the omitted-downbeat MMN signal cannot exist without prediction), and inferring the metric hierarchy (the omission only matters because the brain has identified that specific event as the downbeat) [1].
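The first two sub-skills have simple computational analogues. Here is a toy sketch (onset data, scoring rule, and candidate periods all invented for illustration, not a model from the cited papers): score each candidate pulse period by how many of its grid points line up with observed onsets, then keep the best-scoring period.

```python
# Toy beat induction: pick the pulse period that best explains a set of
# onset times. Illustrative only, not a model from the cited papers.

def pulse_score(onsets, period, tol=0.05):
    """Fraction of grid points (anchored at the first onset, spaced by
    `period`) that land within `tol` seconds of some observed onset."""
    grid = []
    t = onsets[0]
    while t <= max(onsets) + tol:
        grid.append(t)
        t += period
    hits = sum(1 for t in grid if any(abs(t - o) <= tol for o in onsets))
    return hits / len(grid)

def induce_pulse(onsets, candidates):
    """Periodicity detection + pulse selection: score each candidate
    period and keep the best (preferring the faster pulse on ties)."""
    return max(candidates, key=lambda p: (pulse_score(onsets, p), -p))

# A 0.5 s pulse with the event at t = 2.0 s omitted (a "missing downbeat"):
onsets = [0.0, 0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
best = induce_pulse(onsets, candidates=[0.25, 0.5, 0.75, 1.0])   # -> 0.5
```

Note that the 0.5 s period wins even though the event at t = 2.0 s is missing: an induced pulse survives an omission, which is exactly the situation the omitted-downbeat paradigm exploits.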
Adults do this constantly and unconsciously. The first time you hear a new song, you find the beat within seconds — usually within a single bar — without any explicit decision-making. This is the same machinery the newborn is using.
Why this matters for how we teach rhythm to adults
Three implications.
First, adult rhythm pedagogy is working with existing machinery, not building from scratch. Many other musical skills — naming intervals, identifying chord qualities, sight-reading — require building cognitive structures that the brain does not come with. Beat induction is different. The structures are already there. What rhythm pedagogy actually does is teach the conscious naming and articulation of percepts the brain is already producing automatically.
This is why a person who has “no sense of rhythm” is almost always wrong about themselves. The basic perceptual machinery is intact; what is missing is the conscious access to it and the motor practice to express it. The Hannon & Trehub developmental work supports this strongly: even adults whose perceptual templates have narrowed culturally can be retrained, given enough exposure, because the underlying machinery is still functional [5].
Second, prediction is the unit of rhythmic perception. The newborn finding points to expectation generation as the central process. Rhythmic engagement — the sense of being “locked in” to a groove — is not passive listening; it is the brain actively predicting what comes next and then registering each event as either a confirmation or a violation of prediction. This is what makes a slightly delayed downbeat (Datseris et al. on swing — see Swing eighths are not 2:1) feel meaningful: it registers as a near-violation of prediction, which intensifies engagement [6].
For pedagogy, this implies that exercises in predictive listening — what will happen on the next beat? — are likely to engage the rhythm-learning system more deeply than purely retrospective ones (what did you just hear?).
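That predict-then-compare loop can be made concrete with a toy phase-correction model (a generic linear error-correction update, invented for illustration, not a model from the cited papers): predict the next event time, measure how far off the actual onset was, and nudge the tempo estimate by a fraction of the error.

```python
# Toy predictive entrainment: predict the next event, measure the
# prediction error, and correct the tempo estimate by a fraction of it.
# A generic error-correction sketch, invented for illustration.

def entrain(onsets, period_guess, alpha=0.5):
    """Return (predicted_time, prediction_error) for each onset after the first."""
    period = period_guess
    prev = onsets[0]
    out = []
    for onset in onsets[1:]:
        predicted = prev + period
        error = onset - predicted     # positive = event came late
        period += alpha * error       # correct the tempo estimate
        prev = onset                  # re-anchor phase on the actual event
        out.append((predicted, error))
    return out

# A steady 0.5 s pulse with one deliberately late ("swung") onset at 2.08 s:
for predicted, error in entrain([0.0, 0.5, 1.0, 1.5, 2.08, 2.5], 0.6):
    print(f"predicted {predicted:.3f}  error {error:+.3f}")
```

On the steady onsets the error shrinks each cycle as the model locks in; the late onset produces an error spike, the "near-violation" of prediction described above.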
Third, the same machinery underlies all metric levels. The premature-infant data shows entrainment at both beat and meter levels [3]. A grown listener does not have to “learn” to hear bars after they have learned to hear beats — the same neural process is doing both, hierarchically, from the start. Pedagogy that artificially separates “beat-level” and “meter-level” exercises risks fragmenting a process the brain treats as unitary.
What about the adults who really do struggle?
Beat-deafness, a rhythmic form of congenital amusia, does exist, and it is distinct from “I haven’t practiced rhythm” [7]. But it is rare: congenital amusia as a whole is estimated to affect only a few percent of the population. For everyone else, which is the vast majority of any ear-training app’s audience, the perceptual machinery is in place from infancy, and the relevant question is how to give the conscious mind better access to it.
The most reliable pedagogical lever for this access, supported across the cross-cultural rhythm literature, is vocal articulation of the rhythm. Speaking the rhythm in syllables — takadimi, konnakol, scat, or your own counting system — converts the perceptual signal into a motor act, and the motor act gives the conscious mind a handle on the perception. (See Speak the rhythm before you play it for the cross-cultural convergence on this principle.)
Implications for ear-training apps
Three.
First, listening alone is suboptimal. Beat-induction research suggests that the brain is doing its work whether the listener is engaged or not — but the conscious-skill payoff comes from active prediction and motor articulation. An app that only asks the learner to listen passively is leaving the prediction loop unexercised.
Second, prediction tasks are higher-leverage than identification tasks. A drill that asks what was the meter? exercises retrospective categorization. A drill that asks tap on the next downbeat exercises predictive entrainment. The latter is closer to what beat-induction actually is.
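A predictive drill of that kind can be scored in a few lines. In this sketch, the function name, thresholds, and grade labels are invented for illustration, not taken from any published app or study:

```python
# Sketch of scoring a "tap the next downbeat" drill: extrapolate the
# downbeat grid the learner has heard, then grade the tap by its offset
# from the predicted next downbeat. Names and thresholds are invented.

def score_tap(downbeats_heard, tap_time, tolerance=0.120):
    """Return (offset_seconds, grade) for a tap aimed at the next downbeat."""
    period = (downbeats_heard[-1] - downbeats_heard[0]) / (len(downbeats_heard) - 1)
    predicted = downbeats_heard[-1] + period
    offset = tap_time - predicted
    if abs(offset) <= tolerance / 2:
        grade = "locked in"
    elif abs(offset) <= tolerance:
        grade = "close"
    else:
        grade = "early" if offset < 0 else "late"
    return offset, grade

# Downbeats heard every 2 s; the learner taps 30 ms after the predicted one:
offset, grade = score_tap([0.0, 2.0, 4.0], tap_time=6.03)   # grade: "locked in"
```

The drill exercises exactly the machinery the research describes: the learner must hold a pulse estimate and commit to a prediction before the event occurs.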
Third, the developmental window argument cuts both ways. Hannon & Trehub showed that adult perceptual narrowing makes culturally unfamiliar meters harder. But the underlying machinery is intact, which is exactly why the retraining works at all. Pedagogy should be optimistic about adult rhythmic learning while realistic about the time investment for unfamiliar rhythmic systems.
The newborn-EEG data is one of the most counterintuitive and useful findings in the cognitive science of music. The brain you are practicing with already knows how to find a beat. What practice does is give your conscious mind a way to use what your brain has been doing since before you were born.
Related reading
- Takadimi: rhythm syllables as functional rhythm labels
- Konnakol: the South Indian rhythm pedagogy that’s quietly remaking Western drum education
- Why odd meters feel hard (and the trick that makes them feel easy)
- Speak the rhythm before you play it: the cross-cultural convergence
References
[1] Winkler, I., Háden, G. P., Ladinig, O., Sziller, I., & Honing, H. (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106(7), 2468–2471. https://www.pnas.org/doi/abs/10.1073/pnas.0809035106. The original mismatch-negativity-on-omitted-downbeat finding in 2–3-day-old newborns.

[2] Cirelli, L. K., Spinelli, C., Nozaradan, S., & Trainor, L. J. (2016). Measuring neural entrainment to beat and meter in infants: Effects of music background. Frontiers in Neuroscience, 10. https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2016.00229/full.

[3] Edalati, M., Mahmoudzadeh, M., Ghostine, G., Kongolo, G., Safaie, J., Wallois, F., & Moghimi, S. (2023). Rhythm in the premature neonate brain: Very early processing of auditory beat and meter. Journal of Neuroscience, 43(15), 2794–2802. https://www.jneurosci.org/content/43/15/2794.

[4] Háden, G. P., Honing, H., & Winkler, I. (2023). Beat processing in newborn infants cannot be explained by statistical learning based on transition probabilities. Cognition. https://www.sciencedirect.com/science/article/pii/S0010027723003049. Critical follow-up ruling out the simplest alternative explanation.

[5] Hannon, E. E., & Trehub, S. E. (2005). Tuning in to musical rhythms: Infants learn more readily than adults. PNAS, 102(35). https://www.pnas.org/content/102/35/12639. See also Why odd meters feel hard.

[6] Datseris, G., et al. (2022). Downbeat delays are a key component of swing in jazz. Communications Physics. https://www.nature.com/articles/s42005-022-00995-z. Connects predictive entrainment directly to perceived groove.

[7] Phillips-Silver, J., et al. (2011). Born to dance but beat deaf: A new form of congenital amusia. Neuropsychologia, 49(5). A case study documenting beat deafness as a distinct, rhythm-specific form of congenital amusia.