This interdisciplinary project involves the development of an algorithm that discriminates babble from the other sounds infants make, in real time, based on input from an iPad's inbuilt microphone. The current app, BabblePlay, rewards the infant with moving images when they produce voiced sounds.
Infant babble (sequences of consonants and vowels, e.g. /bababa/) is thought to underpin the development of accurate consonant production. The age at which babble begins, and its extent, can reliably predict later progress in speech development. BabblePlay responds to an infant's voiced utterances but not to other sounds (squeals, bangs, unvoiced vocalizations), visually reinforcing naturally occurring babble.
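The source does not describe the detection algorithm itself, but a common way to separate voiced utterances from noise, bangs, and unvoiced sounds is to combine a short-time energy gate with an autocorrelation periodicity test (voiced speech is quasi-periodic at the pitch frequency). The sketch below is a hypothetical illustration of that general technique, not the method used in BabblePlay; the function name, thresholds, and pitch range are assumptions.

```python
import numpy as np

def is_voiced(frame, sample_rate=44100, energy_thresh=0.01, periodicity_thresh=0.5):
    """Illustrative voiced/unvoiced classifier (NOT the BabblePlay algorithm).

    A frame is called "voiced" if it is loud enough AND shows a strong
    autocorrelation peak in a plausible pitch range, which rejects
    silence, white-noise-like unvoiced sounds, and aperiodic bangs.
    Thresholds here are assumed values for illustration only.
    """
    frame = np.asarray(frame, dtype=float)
    frame = frame - np.mean(frame)          # remove DC offset
    energy = np.mean(frame ** 2)
    if energy < energy_thresh:
        return False                        # too quiet: treat as silence

    # Normalised autocorrelation: periodic signals show a peak at the
    # lag corresponding to the pitch period.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]

    # Search lags covering roughly 100-600 Hz (assumed pitch range).
    lo = int(sample_rate / 600)
    hi = int(sample_rate / 100)
    peak = np.max(ac[lo:hi])
    return bool(peak > periodicity_thresh)

# Toy check: a 300 Hz sine (vowel-like, periodic) vs white noise (aperiodic)
sr = 44100
t = np.arange(int(0.03 * sr)) / sr
vowel_like = 0.5 * np.sin(2 * np.pi * 300 * t)
noise = 0.5 * np.random.default_rng(0).standard_normal(len(t))
print(is_voiced(vowel_like, sr))  # periodic tone passes both tests
print(is_voiced(noise, sr))       # noise fails the periodicity test
```

In a real-time app, frames of this kind would be classified continuously as audio arrives from the microphone, with the visual reward triggered only while the voiced decision holds.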
In the future, we plan to develop this as a clinical device for infant populations whose babble and first words are delayed, particularly deaf infants, who receive no auditory feedback. Deaf infants whose babble is visually rewarded may produce a wider range of speech sounds and may start producing words earlier.
Members
- Helena Daffern
- Tamar Keren-Portnoy
- Rory DePaolis
- Kenneth Brown
Funding
Parts of this project have so far been funded by:
- C2D2 (Wellcome)
- EPSRC Impact Accelerator Funds and Commercialisation Funds