A Neural Network Model of Cortical and Cerebellar Involvement in Speech Motor Control
Over the past decade, our research group has developed, tested, and refined a neural network model of the acquisition and control of speech movements called the DIVA model. The model is defined mathematically and implemented in computer simulations that control movements of an articulatory synthesizer. Model components correspond to regions of the cerebral cortex and cerebellum that become active during speech production tasks. A babbling cycle is used to train neural mappings between syllabic/phonemic, articulatory, auditory, and somatosensory representations. These learned mappings encode speaker-specific information regarding the relationships between the different reference frames. After learning, the model is capable of producing arbitrary combinations of the sounds it has learned by commanding appropriate movements of the speech articulators in the articulatory synthesizer.

Computer simulations verify the model's ability to account for a wide variety of experimental results concerning speech movements, including data on acquisition of speaking skills, coarticulation, articulatory variability, speaking rate effects, motor equivalence, and perception-production interactions. The model also generates quantitative predictions that can be tested with functional brain imaging techniques, and it provides a basis for interpreting the functional effects of neurological damage.
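The babbling-based learning described above can be illustrated with a minimal sketch. This is not the DIVA model's implementation: it substitutes a toy linear "synthesizer" with three articulator parameters and two formant-like auditory outputs, and it learns the articulatory-to-auditory mapping by least squares rather than with a neural network. The point is only to show how random babbling movements paired with their sensory consequences suffice to learn a speaker-specific mapping between reference frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_synthesizer(articulators):
    """Hypothetical stand-in for the articulatory synthesizer:
    maps articulator configurations to auditory (formant-like) values."""
    W_true = np.array([[1.0, -0.5,  0.2],
                       [0.3,  0.8, -0.4]])
    return articulators @ W_true.T

# Babbling cycle: sample random articulator configurations and observe
# the auditory consequences produced by the synthesizer.
babbles = rng.uniform(-1.0, 1.0, size=(500, 3))
sounds = toy_synthesizer(babbles)

# Learn the forward mapping from the babbling data; a linear
# least-squares fit stands in for the model's learned neural mapping.
W_learned, *_ = np.linalg.lstsq(babbles, sounds, rcond=None)

# After learning, the mapping predicts the auditory result of a
# novel articulator movement without consulting the synthesizer.
test_move = np.array([[0.5, -0.2, 0.1]])
predicted = test_move @ W_learned
actual = toy_synthesizer(test_move)
print(np.allclose(predicted, actual, atol=1e-6))  # prints True
```

In the full model the mappings run in both directions (auditory targets drive articulator commands as well), and the representations are neural rather than a single linear map; this sketch covers only the forward, babbling-trained direction.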