Computational modeling of infant vocal development

We are creating computational models to understand the mechanisms underlying human infant vocal motor development. In most cases, the models consist of neural networks that control the muscles of an articulatory synthesizer, which itself models the mechanics of the human vocal tract. The networks learn in a self-organized way: they generate spontaneous behavior and adapt through Hebbian processes, with the learning rate increased or decreased depending on whether the models have received reinforcement for the sounds they have produced. As a result of learning, the models demonstrate new vocal skills, such as producing more speech-like sounds and imitating sounds they hear. Please see these publications for more details; a minimal sketch of the learning scheme follows the references:

Warlaumont, A. S., & Finnegan, M. F. (2016). Learning to produce syllabic speech sounds via reward-modulated neural plasticity. PLOS ONE, 11(1), e0145096.

Warlaumont, A. S., Westermann, G., Buder, E. H., & Oller, D. K. (2013). Prespeech motor learning in a neural network using reinforcement. Neural Networks, 38, 64–95.
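To make the learning scheme concrete, here is a minimal Python sketch of reinforcement-modulated Hebbian learning: spontaneous motor exploration followed by a Hebbian weight update whose learning rate is higher when the produced sound was reinforced. The layer sizes, learning rates, and the stand-in reinforcement check are all illustrative assumptions, not values or functions from the papers; in the actual models the motor command drives an articulatory synthesizer, and reinforcement depends on properties of the resulting sound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: sensory/context inputs and muscle (motor) outputs.
n_inputs, n_motor = 20, 10
W = rng.normal(0.0, 0.1, size=(n_motor, n_inputs))  # input-to-motor weights

ETA_REINFORCED = 0.02   # higher learning rate after reinforcement
ETA_BASELINE = 0.002    # lower learning rate otherwise


def was_reinforced(motor_command):
    """Stand-in for evaluating the synthesizer's output; in the actual
    models, reinforcement depends on the produced sound (e.g., how
    speech-like it is). This placeholder is purely illustrative."""
    return motor_command.mean() > 0.0


def learning_step(x):
    """Generate spontaneous behavior, then apply a Hebbian update whose
    learning rate depends on whether the sound was reinforced."""
    global W
    noise = rng.normal(0.0, 0.5, size=n_motor)  # spontaneous exploration
    y = np.tanh(W @ x + noise)                  # motor command to synthesizer
    eta = ETA_REINFORCED if was_reinforced(y) else ETA_BASELINE
    W += eta * np.outer(y, x)   # Hebbian: co-active units strengthen their weights
    W *= 0.999                  # mild decay keeps weights bounded
    return y


for _ in range(1000):
    learning_step(rng.normal(size=n_inputs))
```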

Human vocal development in naturalistic settings

Understanding early vocal development also requires studying how real humans behave. Lately, we work primarily with day-long audio recordings of children, collected using the LENA system. Day-long, longitudinal samples of children's vocalizations have the advantage of capturing the full range of activities and contexts infants experience, as well as the full range of sounds they produce. They also allow us to examine how small local effects might accumulate into real differences in overall behavior. For example, one question we have asked is how social responses that are contingent on infant behavior can influence children's speech development. Other research focuses on how infant and adult vocalization events are distributed over the course of a day. Working with large, naturalistic datasets poses many technical challenges, so a large part of our effort in this area goes toward identifying appropriate automated analysis methods. Here are a few representative papers from this line of research; a sketch of one simple timing analysis follows the list:

Pagliarini, S., Schneider, S., Kello, C. T., & Warlaumont, A. S. (2022). Low-dimensional representation of infant and adult vocalization acoustics. arXiv:2204.12279

Warlaumont, A. S., Sobowale, K., & Fausey, C. M. (2022). Daylong mobile audio recordings reveal multitimescale dynamics in infants’ vocal productions and auditory experiences. Current Directions in Psychological Science, 31(1), 12–19.

Ritwika, V. P. S., Pretzer, G. M., Mendoza, S., Shedd, C., Kello, C. T., Gopinathan, A., & Warlaumont, A. S. (2020). Exploratory dynamics of vocal foraging during infant-caregiver communication. Scientific Reports, 10, 10469. doi: 10.1038/s41598-020-66778-0

Warlaumont, A. S., Richards, J. A., Gilkerson, J., & Oller, D. K. (2014). A social feedback loop for speech development and its reduction in autism. Psychological Science.
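To give a flavor of the kind of automated timing analysis this work involves, below is a small Python sketch that computes inter-vocalization intervals and summarizes their distribution across timescales. The onset times here are synthetic stand-ins for automatically labeled vocalization segments (e.g., from LENA output), and the bin choices are illustrative; this is not the analysis pipeline from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vocalization onset times, in seconds from the start of a
# 12-hour recording; in practice these would come from automated segment
# labels on a day-long audio recording.
onsets = np.sort(rng.uniform(0, 12 * 3600, size=500))

# Inter-vocalization intervals: waiting times between successive events.
ivis = np.diff(onsets)

# Logarithmically spaced bins expose structure across timescales (seconds
# to hours); clustered, bursty vocalizing produces heavier tails than the
# exponential distribution expected from a constant-rate process.
bins = np.logspace(np.log10(ivis.min()), np.log10(ivis.max()), num=25)
counts, edges = np.histogram(ivis, bins=bins)

for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:9.1f} - {hi:9.1f} s : {n}")
```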

Evolution of vocal signals

We are also strongly interested in the evolution of human vocalization, especially from a developmental perspective. For example:

Oller, D. K., Griebel, U., & Warlaumont, A. S. (2016). Vocal development as a guide to modeling the evolution of language. Topics in Cognitive Science, 8, 383–392. doi: 10.1111/tops.12198

Warlaumont, A. S., & Olney, A. M. (2015). Evolution of reflexive signals using a realistic vocal tract model. Adaptive Behavior, 23(4), 183–205.