WO2017031350A1 - Assessing disorders through speech and a computational model - Google Patents
- Publication number: WO2017031350A1
- Application: PCT/US2016/047609
- Authority: WIPO (PCT)
Classifications
- G10L25/66 — Speech or voice analysis techniques specially adapted for comparison or discrimination, for extracting parameters related to health condition
- A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
- A61B5/4803 — Speech analysis specially adapted for diagnostic purposes
- A61B5/7275 — Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- G10L17/26 — Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices
- G16H50/20 — ICT specially adapted for computer-aided medical diagnosis, e.g. based on medical expert systems
- G16H50/50 — ICT specially adapted for simulation or modelling of medical disorders
- G16H50/70 — ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
- G16H70/60 — ICT specially adapted for the handling or processing of medical references relating to pathologies
- G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique, using neural networks
Definitions
- GMM: Gaussian Mixture Model; SVM: Support Vector Machine; DNN: Deep Neural Network. Pattern-matching models such as these may be used with the library at 211.
- To assess a test individual, data is collected at 203 using the same protocol 201, and the target data extracted at 205 is applied to the neural computational model 207.
- The data derived from that model is similarly converted at 209 if the library at 211 is based on such converted data.
- The data of the test individual is then compared to the stored data for individuals of known disorders, according to the appropriate model at 211, to obtain a prediction of a disorder for the individual at 213.
- The output from 213 may include a probability that the individual suffers from the disorder and/or a prediction of the severity of the disorder.
- The method for estimation of neurological disorders from vocal biomarkers begins with collection of speech data from subjects. Denote the extracted speech-related features by X, the latent neurophysiological model parameters by Θ, and the disorder (or its severity) by Z.
- The ultimate goal of any estimation method is to determine Z using X.
- A key step in our approach is to infer and leverage the intermediate values Θ to estimate Z.
- Because Θ lies in a neurophysiologically constrained space, it carries additional information about the subject beyond what can be obtained from the speech-related features alone.
- A machine learning algorithm M maps the inferred features to the disorder. M may be a Gaussian mixture model, a deep neural network, a support vector machine [33], or a random forest, to name a few examples.
- In training, M learns a relationship between per-subject feature tuples (Θ_k, Γ_k) and the known disorders z_k. The process of learning the relationship between features and disorders is called training.
- In testing, M takes the tuple (Θ_i, Γ_i) for a subject i with unknown neurophysiology and predicts the disorder. We distinguish the true disorder z from its estimate ẑ. The process of predicting an unknown disorder from features is called testing.
- For a new subject, the algorithm estimates the latent parameters Θ_i and performs further processing to create additional patterns Γ_i. The tuple (Θ_i, Γ_i) is then processed by the machine learning algorithm M to return an estimate ẑ of the neurological disorder or disorder severity.
- The feature tuple (Θ, Γ) may contain either Θ or Γ or both. A sketch of this training and testing step follows.
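The following is a minimal sketch of training and testing M, assuming scikit-learn as the toolkit; the arrays standing in for the inferred parameters Θ, derived patterns Γ, and labels z are synthetic placeholders, and the SVM is chosen arbitrarily from the candidates named above.

```python
# Sketch of training/testing the machine learning algorithm M on the
# feature tuple (theta, gamma); all data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 100
theta = rng.normal(size=(n_subjects, 13))  # inferred model parameters, per subject
gamma = rng.normal(size=(n_subjects, 6))   # additional derived patterns
z = rng.integers(0, 2, size=n_subjects)    # known disorder labels

# The feature tuple may contain theta, gamma, or both (here, both).
X = np.hstack([theta, gamma])
X_train, X_test, z_train, z_test = train_test_split(X, z, random_state=0)

M = SVC(probability=True)   # could equally be a GMM, DNN, or random forest
M.fit(X_train, z_train)     # training: learn the feature-disorder relationship

z_hat = M.predict(X_test)              # testing: estimated disorders
p_hat = M.predict_proba(X_test)[:, 1]  # probability the disorder is present
```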
- The internal variables will reflect the type and/or severity of the disorder. More generally, sensorimotor disorders will be reflected in changes to neural organizational principles of this type [10][11]. Using these internal "brain state" variables may offer greater specificity than non-neurocomputationally based features that do not attempt to model the underlying neuropathophysiology.
- The mathematical framework of one embodiment is shown in Figure 3.
- A neurophysiological model is parameterized by a set of control variables 301, represented as Θ. These parameters are input to the neurophysiological computational model 303, which generates a predicted time series 305. The predicted time series is compared against measured values 307 from the subject, and the difference 309 is used in an inverse model 311 to update the model's estimates of the internal parameters 313.
- Analyses in the model parameter space may include correlation analysis, coherence, and multivariate regression.
- In training, the system processes the measured data 307 for subjects of known disorder to collect data for a prediction library. Then, like data would be extracted for an individual being assessed, and that data would be compared to the library.
- Specifically, the extracted data is applied through a loop including the inverse model 311 and forward model 303 to develop model control parameter data over multiple iterations of the loop. It is those control parameters, or data converted from those control parameters, that are then stored in a library.
- To assess an individual, the individual's speech-related data is cycled through this process to develop parameters for that individual that can be compared to the stored data in the library.
- Flowcharts for processing of the data in the system of Fig. 3 are presented in Fig. 4 and, for the case of additional statistical analysis on the control parameters, in Fig. 5.
- The neurophysiological control model 303 is first initialized by target control parameters Θ 401. Speech is recorded, for example at a clinical, home or mobile source, at 403, and the speech features used in the testing are extracted at 405. The recorded speech may be from subjects of known disorder, in order to generate data to be stored in the library, or it may be from a test subject to be analyzed based on the stored data.
- The extracted speech features from the individual are compared to those generated by the neurophysiological control model at 407 to generate the error signal 309.
- The error is then inverted in the neurophysiological model 311 to update the control parameters at 313. If the system has not yet cycled through a required number of iterations, or met a minimum error criterion, it returns to apply the updated control parameters to the neurophysiological control model through the loop 409. Once the control parameters have stabilized to the appropriate degree at 411, the system proceeds to 413. From here, during a training mode, the updated control parameters are stored with the known disorder in the control parameter library. During a test sequence for an individual, the updated control parameters for the individual are compared to those in the library 211 at 415 using the appropriate pattern matching model. A minimal numerical sketch of this loop follows.
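This sketch assumes a generic differentiable forward model whose inverse (block 311) is approximated by the pseudoinverse of a numerical Jacobian; the toy forward model, gains, and tolerances are illustrative stand-ins, not the patent's implementation.

```python
# Sketch of the iterative parameter-update loop of Figs. 3-4.
import numpy as np

def forward_model(theta):
    # Stand-in mapping from control parameters to predicted speech features.
    A = np.array([[1.0, 0.5],
                  [0.2, 1.5],
                  [0.7, 0.3]])
    return np.tanh(A @ theta)

def jacobian(f, theta, eps=1e-6):
    # Numerical Jacobian of the forward model at theta.
    y0 = f(theta)
    J = np.zeros((y0.size, theta.size))
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (f(theta + d) - y0) / eps
    return J

def invert(measured, theta0, gain=0.5, max_iter=100, tol=1e-6):
    theta = theta0.copy()
    for _ in range(max_iter):
        error = measured - forward_model(theta)   # error signal (309)
        if np.linalg.norm(error) < tol:           # stabilization criterion (411)
            break
        J_pinv = np.linalg.pinv(jacobian(forward_model, theta))
        theta = theta + gain * (J_pinv @ error)   # inverse-model update (311 -> 313)
    return theta

measured = np.array([0.4, 0.8, 0.3])              # extracted speech features (405)
theta_hat = invert(measured, theta0=np.zeros(2))  # updated control parameters
```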
- With additional statistical analysis, the process is as illustrated in Fig. 5.
- The process is substantially the same as that in Fig. 4, except that an additional analysis, for example to extract correlation structure from the updated control parameters, is provided at 501.
- The library 211B then stores that correlation structure data, and likely also the original control parameters, along with the disorder.
- The correlation structure features are derived from the DIVA model's 13 time-varying (feedforward) position states, which are sampled at 200 Hz.
- The input in this embodiment is 3 formant frequencies and the fundamental frequency.
- The formant tracks were extracted using a Kalman-based algorithm; a simpler stand-in is sketched below.
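The Kalman-based tracker is not reproduced here; as a simpler, openly substituted stand-in, single-frame formant estimates can be obtained by classic LPC root-finding (autocorrelation method).

```python
# Stand-in formant estimation for one frame via LPC root-finding.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, fs, order=12, n_formants=3):
    frame = frame * np.hamming(len(frame))                 # window the frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    poly = np.concatenate(([1.0], -a))                     # LPC polynomial A(z)
    roots = np.roots(poly)
    roots = roots[np.imag(roots) > 0]                      # one of each conjugate pair
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    return freqs[freqs > 90][:n_formants]                  # drop near-DC roots

fs = 8000
t = np.arange(int(0.03 * fs)) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
print(lpc_formants(frame, fs))   # expect estimates near 500 and 1500 Hz
```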
- The model parameters that get updated are initialized to "neutral" configurations with respect to the model.
- The vocal tract parameters (dimensions 1-10 of the 13-dimensional space) are set to a neutral/open configuration of zero.
- Dimensions 11, 12, and 13 are the fundamental frequency, source air pressure, and voicing indicator. Fundamental frequency is initialized to 0.0, source air pressure is set to -0.5, and voicing is initialized to 0.0. This initialization is transcribed below.
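A direct transcription of this neutral initialization; only the representation as a NumPy array is assumed.

```python
# Neutral initialization of the 13-dimensional motor state:
# dimensions 1-10 are vocal tract parameters, 11-13 are fundamental
# frequency, source air pressure, and voicing indicator.
import numpy as np

motor_state = np.zeros(13)
motor_state[0:10] = 0.0    # vocal tract: neutral/open configuration
motor_state[10] = 0.0      # fundamental frequency (dimension 11)
motor_state[11] = -0.5     # source air pressure (dimension 12)
motor_state[12] = 0.0      # voicing indicator (dimension 13)
```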
- Software for the DIVA component was downloaded from [25] and modified to fit the framework of Fig. 4.
- The DIVA auditory targets are zones centered around the true target.
- The error signal is modulated depending on whether, and how far, the sensed production falls outside the auditory target zone.
- The zone minima and maxima for the fundamental frequency and formants are set to 90% and 110% of the participant-specific extracted values.
- The fundamental frequency was extracted with Praat; parameters included a 1 ms time step, the autocorrelation method of fundamental frequency determination, and a minimum and maximum range of 75 and 600 Hz. If a fundamental frequency was not detected, the minimum and maximum ranges were set to 1 Hz and 1000 Hz, respectively. A sketch of this extraction follows.
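A sketch of this extraction; the parselmouth Python binding to Praat is an assumed tooling choice, since the text specifies only the Praat settings themselves.

```python
# Fundamental frequency extraction with the stated Praat settings,
# via the parselmouth binding (an assumed tooling choice).
import numpy as np
import parselmouth

def extract_f0(wav_path):
    snd = parselmouth.Sound(wav_path)
    # 1 ms time step, autocorrelation method, 75-600 Hz search range.
    pitch = snd.to_pitch_ac(time_step=0.001, pitch_floor=75.0,
                            pitch_ceiling=600.0)
    f0 = pitch.selected_array["frequency"]
    if not np.any(f0 > 0):
        # Widened 1-1000 Hz range when nothing was detected.
        pitch = snd.to_pitch_ac(time_step=0.001, pitch_floor=1.0,
                                pitch_ceiling=1000.0)
        f0 = pitch.selected_array["frequency"]
    f0[f0 == 0] = np.nan     # mark unvoiced frames
    return f0
```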
- The somatosensory targets are also specified in terms of zones.
- The minimum and maximum placement of articulators for six of the eight somatosensory signals are initialized to -1 and -0.25.
- The somatosensory target minimum and maximum for pressure and voicing are initialized to 0.75 and 1.0. The zone construction and its modulated error are sketched below.
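A sketch of the zone-based targets and the modulated error; the participant-specific target values are invented for illustration.

```python
# Zone-based targets: the error is zero inside the target zone and
# grows with distance outside it.
import numpy as np

def zone_error(produced, zone_min, zone_max):
    # Signed distance to the nearest zone edge; 0 inside the zone.
    below = np.minimum(produced - zone_min, 0.0)
    above = np.maximum(produced - zone_max, 0.0)
    return below + above

# Auditory target zones: 90%-110% of participant-specific values.
targets = np.array([120.0, 700.0, 1200.0, 2600.0])   # f0, F1-F3 (Hz, illustrative)
aud_min, aud_max = 0.9 * targets, 1.1 * targets

# Somatosensory zones: six articulator signals at [-1, -0.25],
# pressure and voicing at [0.75, 1.0].
som_min = np.array([-1.0] * 6 + [0.75, 0.75])
som_max = np.array([-0.25] * 6 + [1.0, 1.0])

produced = np.array([115.0, 650.0, 1100.0, 2900.0])
print(zone_error(produced, aud_min, aud_max))   # nonzero only where outside
```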
- This process represents an expansion from the low-dimensional auditory feature space (the extracted formants and fundamental frequency) to the 13-dimensional higher-level neural space.
- Articulatory states comprise positions and velocities of articulators such as the tongue, jaw, lips, and larynx.
- These features were further processed using an advanced form of intra- and inter-feature correlation analysis, but they can be processed by any desired machine learning framework to extract "features of features" [16]. One such computation is sketched below.
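The exact correlation analysis is cited rather than specified here; one plausible sketch, assuming a channel-delay variant of the cited approach, correlates the inferred time series within and across channels at several delays and summarizes the resulting matrix by its eigenvalue spectrum.

```python
# Assumed coordination-style "features of features" computation:
# intra- and inter-channel correlations at several time delays,
# summarized by the eigenvalues of the channel-delay matrix.
import numpy as np

def correlation_structure(X, delays=(0, 7, 15)):
    """X: (n_channels, n_samples) array of inferred time series."""
    embedded = []
    for ch in X:
        for d in delays:
            embedded.append(np.roll(ch, -d))   # circular shift, for simplicity
    R = np.corrcoef(np.array(embedded))        # channel-delay correlation matrix
    eigvals = np.linalg.eigvalsh(R)            # real eigenvalues, ascending
    return eigvals[::-1]                       # largest first

rng = np.random.default_rng(1)
states = rng.normal(size=(13, 1000))           # e.g., 13 DIVA position states
features = correlation_structure(states)
```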
- The sensorimotor transformation step uses a neurobiophysical component.
- The vocal source controls a person's fundamental frequency, perceived as pitch, through the interaction of several muscles.
- Two key muscles in the modulation process are the cricothyroid and thyroarytenoid muscles. These two muscles are controlled by the brain through branches of the tenth cranial nerve, the vagus nerve.
- Here, aCT denotes the neural control signal of the cricothyroid muscle and aTH denotes the neural control signal of the thyroarytenoid muscle.
- The model in Fig. 6 is used in the diagrams of Figs. 3-5.
- The forward model covers control of the vocal source exclusively and in high detail, as opposed to the emphasis on the vocal tract in Fig. 1.
- The mapping function of the forward model 601 can be seen in the numerical visualization of Fig. 7.
- The visualization shows the true neurobiophysical mapping function (circles) and its quadratic approximation (X's).
- Truth values were obtained from fundamental frequency estimates of the glottal waveform, and approximation values were generated from the second-order polynomial fit to the true data with ordinary least squares.
- The glottal waveform was simulated offline for a canonical speaker in order to create the truth data for the mapping function.
- The forward model 601 drives the auditory system; its control parameters are the muscle control signals aTH and aCT.
- In this embodiment, somatosensory feedback was not considered necessary, but it could be added in future realizations.
- A key, novel component of this embodiment is a biophysically inspired model of the two major muscles in the vocal source and the activations associated with them.
- We develop this relationship by building upon a prior biophysical model of the vocal source [18-20].
- Specifically, we quantify the relationship between the two muscle activations and the vocal source by fitting a second-order, two-dimensional polynomial with cross terms to data generated from a model of the vocal source.
- The model of the vocal source is an ordinary differential equation of a canonical larynx. Two input parameters to the model are the muscle activations.
- The output of the model is a glottal waveform (the puffs of air that exit the vocal folds before being shaped into speech by the vocal tract).
- The frequency of the puffs of air is the fundamental frequency of the ultimately produced speech waveform. The polynomial fit is sketched below.
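A sketch of the ordinary-least-squares fit of the second-order, two-dimensional polynomial with cross terms; the "truth" fundamental frequency values below are synthetic stand-ins for the offline glottal-waveform simulations.

```python
# Fit the quadratic mapping (aCT, aTH) -> f0 with ordinary least squares.
import numpy as np

def design_matrix(a_ct, a_th):
    # Columns: 1, aCT, aTH, aCT^2, aTH^2, aCT*aTH (cross term).
    return np.column_stack([np.ones_like(a_ct), a_ct, a_th,
                            a_ct**2, a_th**2, a_ct * a_th])

rng = np.random.default_rng(2)
a_ct = rng.uniform(0.0, 1.0, 200)
a_th = rng.uniform(0.0, 1.0, 200)
# Hypothetical "truth" f0 values standing in for the larynx simulation.
f0_true = 100 + 120 * a_ct - 30 * a_th + 15 * a_ct * a_th

coeffs, *_ = np.linalg.lstsq(design_matrix(a_ct, a_th), f0_true, rcond=None)

def forward_f0(a_ct, a_th):
    # Quadratic approximation of the neurobiophysical mapping (Fig. 7).
    return design_matrix(np.atleast_1d(a_ct), np.atleast_1d(a_th)) @ coeffs
```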
- We normalize each person's fundamental frequency trajectory by adding or subtracting an offset, if necessary.
- This normalization step changes the absolute value of the person's fundamental frequency trajectory, and it is only applied under certain conditions.
- The step places the person's fundamental frequency within the fundamental frequency bounds that would be appropriate for our derived functional relationship between muscle activation and fundamental frequency, as sketched below.
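A sketch of the offset normalization; the bounds of the derived muscle-to-fundamental-frequency relationship, and the choice to center on the bounds' midpoint, are assumptions.

```python
# Offset normalization: shift the f0 trajectory into the bounds
# covered by the derived muscle-to-f0 mapping (bounds illustrative).
import numpy as np

def normalize_f0(f0, lo=80.0, hi=260.0):
    f0 = np.asarray(f0, dtype=float)
    center = np.nanmedian(f0)
    if lo <= center <= hi:                  # only applied when necessary
        return f0
    offset = 0.5 * (lo + hi) - center       # move the median into the bounds
    return f0 + offset                      # add or subtract a constant offset
```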
- The multiplication of the pseudoinverse by the error signal results in an update command.
- The motor feedback command is delayed by a suitable portion of time to represent the neurophysiological auditory delay in humans. Additionally, the motor update command is scaled by a feedback gain factor before being added to the current feedforward motor plan.
- The delay, n_d, is introduced by computing an error update at time n using the produced signal y[n − n_d], as opposed to y[n].
- The gains are always both positive, sum to one, and are chosen empirically for system performance.
- The gains and delays are kept constant for all subjects but, like the aCT and aTH values, could also be learned in other embodiments. This delayed, gain-scaled update is sketched below.
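A sketch of the delayed, gain-scaled feedback update; the delay, gains, step size, toy production function, and sensorimotor pseudoinverse are all illustrative, since the text states only that the gains are positive, sum to one, and are chosen empirically.

```python
# Per-sample feedback update with auditory delay n_d and
# feedforward/feedback gains that sum to one (values illustrative).
import numpy as np

def feedback_pass(u_ff, target_f0, produce, J_pinv,
                  n_d=5, g_ff=0.7, g_fb=0.3, g_update=0.1):
    """u_ff: (n_samples, 2) feedforward [aCT, aTH] motor plan."""
    n = len(target_f0)
    u_fb = np.zeros_like(u_ff)
    y = np.zeros(n)
    for t in range(n):
        u = g_ff * u_ff[t] + g_fb * u_fb[t]          # composite motor plan
        y[t] = produce(u)                             # new f0 value
        if t + 1 < n:
            u_fb[t + 1] = u_fb[t]
            if t >= n_d:
                err = target_f0[t] - y[t - n_d]       # delayed auditory error
                # Pseudoinverse times error, scaled by an assumed step size.
                u_fb[t + 1] += g_update * (J_pinv @ np.atleast_1d(err))
    return y, u_fb

produce = lambda u: 100 + 120 * u[0] - 30 * u[1]      # toy f0 production
J_pinv = np.linalg.pinv(np.array([[120.0, -30.0]]))   # sensorimotor transform
target = np.full(200, 140.0)
u_ff = np.tile([0.3, 0.3], (200, 1))
y, u_fb = feedback_pass(u_ff, target, produce, J_pinv)
```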
- The new, composite motor plan is then used to generate a new fundamental frequency value.
- The new fundamental frequency value is compared to the auditory target for the timestep, an error signal is generated, the sensorimotor transformation takes place, and a new motor feedback update is generated.
- This process continues for each time sample in the fundamental frequency target derived from a person's speech.
- All of the motor updates from the error signals are used to modify the feedforward neural plan in the model's brain.
- The updated feedforward neural plan is then used, and the whole process is repeated.
- The auditory signal generated using the iteratively updated neural plan will converge, or the iterative process will be stopped after a fixed number of iterations.
- The estimated neural aCT and aTH values are then taken from the model and used in the next phase of the process.
- The two inferred time series, the aCT and aTH neural activations, are used in a further estimation step to extract additional patterns for the subject under evaluation.
- Each of these time series individually may contain information that is representative of a neurological disorder.
- Another novel aspect of this invention is the recognition that the coordination between the time series is crucial for estimating informative features. No prior work has used the coordination between neural signals as a defining contribution to its methodology. Though we used coordination in this specific embodiment, other relations within and between the time series may also be informative and provide complementary information.
- Pattern estimation per person is one step of the process.
- The full system requires estimating these patterns for many subjects who are known to have a disorder, to create a library of patterns. Then, a new subject with an unknown disorder is analyzed to create their pattern. The new subject's pattern is compared against the library of patterns using the machine learning technique Extremely Randomized Trees.
- Other machine learning techniques, such as support vector machines or Gaussian mixture models [33], are alternative or even complementary algorithms to the Extremely Randomized Trees algorithm [21].
- The general space covers vocal, biomechanical, kinematic, and dynamic models of all aspects of speech at multiple time scales, from phones to sentences.
- Our forward model (for example, DIVA) employs the physical constraints governing joint articulator movements (and others, such as laryngeal and respiratory muscles), which provides benefit when the inverse mapping from measurement space (for example, audio features) back into the model state space is performed.
- The general concept for this application is that modeling the control parameters of a physical system, rather than just its output, will provide greater benefit (i.e., lower error).
- Individualization occurs because we are reproducing a given speaker's formant trajectories and other speech features for an utterance, rather than one generic set of formant tracks for the utterance.
- Other forms of individualization could incorporate additional information about the subject's medical history or demographics (age, gender, height, weight), structural information derived from MRI scans of the brain, articulators, and larynx, current or past medication consumption, and smoking history.
- Estimation of parameters in the neurophysiological model takes place in a neurobiologically plausible manner. Consequently, the estimated patterns of activity may be more accurate and less prone to artifacts than methods that take a purely statistical approach to parameter estimation.
- The neurophysiological model provides a direct avenue for assessment of neurobiological structures that are known to be directly involved in different disorders. For example, Parkinson's disease is characterized by degradation of the substantia nigra pars compacta, a specific region of neural tissue deep in the brain. In a biophysical model that is limited to considering only the movement of the laryngeal muscles or the speech articulators, patterns that are estimated from these components of the speech system may be noisy or non-specific.
- By estimating the level of activity in key brain areas through the effect of these areas on the core speech network, and in turn on the laryngeal muscles and articulators, a neurophysiological model can be both more sensitive to the presence of a disorder and more discriminating between different disorders, as different disorders may have differential effects on neuronal regions.
- Example conditions include Major Depressive Disorder (MDD), Parkinson's disease, and speech, language, and articulation disorders.
- Other applications in which neuro-motor and neuro-physiological coordination can break down include early-onset detection of traumatic brain injury (TBI), amyotrophic lateral sclerosis (ALS), often referred to as Lou Gehrig's Disease, and dementia.
Abstract
In a system and method for assessing the condition of a subject, control parameters are derived from a neurophysiological computational model that operates on features extracted from a speech signal. The control parameters are used as biomarkers (indicators) of the subject's condition. Speech-related features are compared with model-predicted speech features, and the error signal is used to update control parameters within the neurophysiological computational model. The updated control parameters are processed in a comparison with parameters associated with the disorder in a library.
Description
ASSESSING DISORDERS THROUGH SPEECH AND A COMPUTATIONAL MODEL
RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application
62/214,755 filed September 4, 2015 and U.S. Provisional Application 62/207,259 filed on August 19, 2015.
[0002] The entire teachings of the above applications are incorporated herein by reference.
BACKGROUND
[0003] Motivations for this invention are the many neurological, trauma, and cognitive stress conditions that can alter motor control and cognitive state and therefore alter speech production and other sensorimotor activities by influencing the characteristics of the vocal source, vocal tract, and prosody, as well as other motor components. Early, accurate detection of such conditions aids in possible intervention and rehabilitation. Thus, simple noninvasive biomarkers are desired for determining severity. Recently, there have been significant efforts in the use of vocal biomarkers [3]-[9]. Features may be extracted from speech and compared to a library of such features for various disorders to diagnose or predict severity of the disorder.
SUMMARY OF THE INVENTION
[0004] In an automated system for assessing a condition of a subject, speech features are extracted from data from a subject. The extracted speech features are compared to predicted speech features from a neurophysiological computational model to obtain an error signal. The error signal is inverted through an inverse of the neurophysiological computational model to update internal parameters of the neurophysiological computational model.
[0005] The updated internal parameters may be applied to the model in multiple iterations of the steps of comparing, inverting and applying to obtain
updated control parameters to be processed. The process may additionally include further feature extraction from the updated control parameters. For example, a correlation structure for the subject may be extracted from the updated control parameters.
[0006] The extracted speech features and the predicted speech features may include representations of vocal source, representations of vocal tract and time series representations.
[0007] The subject may be assessed, for example, for a neurological disorder such as depression, Parkinson's disease, traumatic brain injury, or the effects of cognitive stress. The neurophysiological computational model may be a muscular control model and the internal parameters may be control parameters.
[0008] Otherwise stated, in an automated system for assessing a condition of a subject, a speech-related signal of the subject is received and speech features are extracted from the speech-related signal. The extracted speech features are compared to predicted speech features from a control model to obtain an error signal. The error signal is inverted through an inverse of the model to update control parameters of the model. The updated control parameters are processed in a comparison with parameters associated with a disorder in a library, and the condition of the subject is assessed based on that comparison.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 illustrates a prior art DIVA neural computational control model which may be used in embodiments of the present invention.
[0010] Figure 2 is a block diagram of an embodiment of the invention.
[0011] Figure 3 is an illustration of the inverse neural computational modeling and library of Fig. 2 in one embodiment of the invention.
[0012] Figure 4 is a flowchart illustrating operation of one embodiment of the invention using the system of Figs. 2 and 3.
[0013] Fig. 5 is an alternative flowchart similar to Fig. 4 additionally including a statistical analysis of the updated control parameters.
[0014] Fig. 6 is an illustration of an alternative to the DIVA model for use in an alternative embodiment of the invention.
[0015] Figure 7 is a numerical visualization of the mapping function of the forward model of Fig. 6.
DETAILED DESCRIPTION OF THE INVENTION
[0016] A description of example embodiments of the invention follows.
[0017] A computational model is a set of parameterized equations that describe the working of a system (speech production, general motor control, knowledge acquisition, decision making, memory recall, performance). The parameters from a set of such equations are the new feature space for traditional data analysis tasks such as diagnosing or predicting severity of a disorder.
[0018] A distinguishing characteristic of the invention is the use of a neurophysiological computational model. A neurophysiological computational model is a specific instance of a computational model whose equations are anchored in the known or hypothesized neurophysiology of the subject. In particular, the computational model may describe control of muscles, such as used in speech, by the brain. For example, the mathematical components of a neurophysiological computational model may have hypothesized locations in the brain, and associated speech production-related, speech perception-related system regions, and the mathematical relationships between these components reflect biological
relationships and interactions between the components. Consequently, the parameters in such a neurophysiological computational model have a concrete basis in the brain and components controlled by the brain. Because the model parameters have a definite neurobiological basis, we recognize that these parameters may be useful for assessing neurophysiological disorders. This invention's contribution is to recognize the utility offered by these parameters, which cannot be directly measured, and to use these parameters within a system for assessing
neurophysiological health. The neurophysiological computational model in this invention engages the speech production, speech perception control loop, and demonstrates how speech can be used to estimate the unobserved model parameters.
[0019] Model-based feature extraction serves several purposes: (1) It can provide greater insight into the system under investigation. Superior insight can enable targeted treatments, or approaches to task performance (e.g., methods of effectively training an individual, treating different aspects of a disorder, monitoring the effectiveness of an intervention). (2) It can support simulation of interventions and performance/risk reduction. Multiple courses of action (drug delivery, behavioral intervention, material presentation) can be tested in silico and the optimal one selected for an individual. (3) It can provide an improved understanding of the system itself. This can lead to better models and the benefits of better models. (4) It can support individualization/personalization. A broad set of equations can be fit to a single subject in such a way that knowledge from many individuals informs the structure of the equations and knowledge of the individual tailors the equations to be person-specific. Instead of one-size-fits-all approaches to a person, interventions will be personalized.
[0020] Traditional approaches to feature extraction operate only on input data
(e.g., speech, accelerometer, EEG signals, voxel-based activations in brain activity). A model-based approach fuses prior knowledge of the system with the input data. Fusion of a model with input data can constrain the space of predictions to realizable possibilities. Consequently, performance with model-based features may lead to better estimations and predictions than when only the raw data is used. Therefore, model-based features have a better probability of generalizing to different circumstances. Critically, the estimation of model parameters provides a window into the system under investigation without costly, invasive, or difficult-to-perform procedures. Model-based approaches uniquely describe the inner workings of a subject in order for appropriate, personalized, optimal actions to be taken.
[0021] In this invention, we introduce multi-scale vocal features derived from neurophysiological computational models of speech production. One such model is the Directions into Velocities of Articulators (DIVA) model [13]. DIVA is a neural network model connecting acoustics, physiology, and neurophysiology. Given a behavioral (acoustic) measurement of speech, through an iterative learning process, the model computes parameters that correspond to different aspects of the speech
production process including articulatory commands and auditory and
somatosensory feedback errors. The DIVA system has been used to study speech generation, including speech disorders, as DIVA relates to the structure and function of the brain. The DIVA model is a magnetic resonance imaging (MRI)-validated neurocomputational model of speech production developed out of Boston University by Frank Guenther and colleagues [26]. The primary aim of the DIVA model was to provide a neurobiologically plausible, quantitative, system level description of low level speech motor control. The neurobiological plausibility follows from the validation of different aspects of the model correlating with different sensorimotor speech production and perception experiments conducted with functional MRI (fMRI). DIVA is a quantitative model as it consists of a series of equations that describe the inputs, outputs, and processing of each functional block. DIVA models the brain at the systems level. It abstracts away biophysical details such as neurotransmitter binding kinetics or neuronal spike timing dynamics. Over DIVA's decades of development, DIVA has been used almost exclusively for explaining principles of low level speech motor control. DIVA hypothesizes that the brain has auditory and somatosensory targets when producing speech, and that there are feedforward and feedback mechanisms, embodied by DIVA's equations, that govern the movement of the speech articulators associated with the vocal tract (e.g., lips, tongue, and jaw). At no point prior to this work has DIVA been used as a tool for quantitative assessment of neurophysiological health for individual subjects.
[0022] Figure 1 illustrates the DIVA model. A forward model represents the transformation from muscle to acoustic and somatosensory space during speech based on the collection of experimental data from human subjects performing the speech task. The DIVA system monitors the performance of speech
characteristics, such as formants, resulting from operation of the forward model. The target formants resulting from actual speech of the subject are compared to the formants predicted by the model to produce an error 105 that is applied back through an inverse model 107. The DIVA model also incorporates somatosensation as an output that can be compared to target somatosensation to produce an error 109 that is returned through an inverse model 111. The processed errors are combined to
provide vocal tract feedback that modifies the motor plan, that is control parameters to the forward model. The control parameters are modified in a loop to reduce the error from the target signals. After multiple iterations, the updated control parameters define the control to the forward model that generates the target speech, allowing for study of speech generation.
[0023] Here, we recognize that the control parameters of DIVA represent a mapping from a low-dimensional feature space to a high-dimensional space that captures the underlying neural process of speech production, and thus constitute a first-of-its-kind speech feature representation to be used in assessing speech disorders. This novel feature representation, with the possibility of additional features extracted from it (e.g., the coordination characteristics of these features), provides the basis for a unique set of classifiers of condition state and predictors of condition severity.
[0024] Throughout the following description, parameters of the model may be referred to as control parameters. However, "control parameter" is an umbrella term for all numerical values (single values or time series) that are input to the system, derived through intermediate processing steps, or output from the system. For example, control parameters of the speech control model include, but are not limited to, articulatory trajectories, neural muscle activations, acoustic signals, neural synaptic weights, model delays, model gains, and neural firing rates throughout the speech production-speech perception control loop. Furthermore, we use the term "speech-related" signal as an umbrella term for all biological signals related to speech. Speech-related signals include, but are not limited to, the acoustic speech waveform; non-acoustic signals such as accelerometer measures of the vocal source; facial expressions during speech production; neural activity during speech production such as that measured by magnetic resonance imaging, electroencephalography, magnetoencephalography, or positron emission tomography; articulator positions such as those measured by electroarticulography; electromyography of the body; and respiratory measurements.
[0025] To date, we have applied this approach using features derived from (neural) feedforward articulatory commands and auxiliary source variables, but it is also applicable to neural features derived from auditory and somatosensory feedback errors. Also, to date the approach has been applied to prediction of the severity of Parkinson's disease and major depressive disorder, but it is applicable to any condition that can be modeled with neural computational structures. Examples include traumatic brain injury (TBI), amyotrophic lateral sclerosis (ALS), often referred to as Lou Gehrig's disease, multiple sclerosis (MS), and dementia, as well as effects due to physical and cognitive stress and sleep disorders, all of which show voice aberrations. Tele-monitoring of treatment for such conditions and detection of early stages of disease are also important application areas. In addition to the broad sensitivity of neurocomputational model-based features, these features may offer greater specificity and consequently allow discrimination between different neurological disorders using an inferred brain basis.
[0026] One embodiment of the present invention is illustrated in Fig. 2. To collect speech-related data, a protocol is established at 201. The protocol is tuned to cognitive and motor articulatory and laryngeal declines of interest. It may be based on free speech or any one of many known protocols, including reading passages such as the Rainbow passage [29], the Caterpillar passage [28], the Grandfather passage [30, 31], or the North Wind passage [32]. Using the protocol, data is collected at 203 from many subjects, including those with disorders of interest. Example disorders include Parkinson's disease, depression, ALS, TBI, Alzheimer's disease, autism, and schizophrenia. From the collected data, features of the speech are extracted and used as auditory targets. For example, to use the standard DIVA model, three formant frequencies and the fundamental frequency are extracted. In an alternative, using a vocal source model, only the fundamental frequency is extracted. Many other speech features, such as phoneme-based durations, vocal source aspiration, voiced/unvoiced states, and respiratory modulation, can also be used as DIVA targets. The extracted data is applied to a neurophysiological computational model that performs an inverse operation to go from speech features to internal model parameters 207. The model may, for example, be the DIVA model. Data developed in the model 207 results in a higher-dimensional representation than the data directly extracted from the speech. That data may be converted to yet higher dimensions through a relational features process 209, which may, for example, be based on correlation, coherence, or other statistical or deterministic transformations. Either the data directly from the neurophysiological computational model or the relational features output, or both together, are stored in a library 211 using any of various pattern matching models such as a Gaussian mixture model (GMM), support vector machine (SVM), or deep neural network (DNN) [33].
[0027] In subsequent use of the system to analyze a specific individual, data is collected at 203 using the same protocol 201, and the target data is extracted at 205 to be applied to the neural computational model 207. The data derived from that model is similarly converted at 209 if the library at 211 is based on such converted data. The data of the test individual is then compared to the stored data for individuals of known disorders, according to the appropriate model at 211, to obtain a prediction of a disorder for the individual at 213. The output from 213 may include a probability that the individual suffers from the disorder and/or a prediction of the severity of the disorder.
[0028] To add technical detail to Figure 2, we can elaborate on the full process and the individual functional steps. The method for estimation of neurological disorders from vocal biomarkers begins with collection of speech data from subjects. We can describe the latent neurological state by a categorical or real-valued random variable Z. If Z is categorical, then Z represents one of many possible disorders and would be used for classification (e.g., Parkinson's disease vs. ALS vs. depression). If Z is real-valued, it may represent the severity of a disorder. In depression, Z could correspond to the depression severity from a self-report form or a clinician assessment. The neurophysiology of the individual can abstractly be represented by a function, Θ = f(Z), that transforms the neurological disorder of the person into a series of unobserved neurophysiological states that encompass all of the model parameters of the person, Θ. The neurophysiological states give rise to observable speech-related signals, X, which include vocal biomarkers that can be measured through microphones in a clinical, home, or mobile application setting. This transformation is represented as X = G(Θ). The ultimate goal of any
estimation method is to determine Z using X. A key step in our approach is to infer and leverage the intermediate values Θ to estimate Z.
[0029] The nonlinearity of the control model, and critically the neurophysiologically constrained space, result in more information about the subject in Θ than can be obtained from the speech-related features alone. Specifically, we are using known neurobiophysical mechanisms to constrain the composite mapping function X = G(f(Z)), and we explicitly leverage the latent parameters Θ. The latent parameters are estimated through an inversion process. Further detail on the inverse neurocomputational model box is deferred to the following section, so we continue with the system diagram.
[0030] For every subject, we estimate the latent parameters from their observed vocal waveform. We then collect these subject-specific parameters in a library of patterns, Θ_k ∈ L(Z), where Θ_k is a subject-specific parameter set added to the library L(Z). We also create additional patterns, γ_k, where γ_k = H(Θ_k) and H represents additional processing functions 209 such as correlation analysis, or summary statistics such as minimum, maximum, mean, median, mode, standard deviation, and range. As part of building the library 211, we know subjects are associated with a particular disorder Z or have a severity measure. The full set of patterns (Θ_k, γ_k, z) for all k subjects and z disorders is processed in a final step by a machine learning algorithm or ensemble of algorithms, M. M may be a Gaussian mixture model, a deep neural network, a support vector machine [33], or a random forest, to name a few examples. Using the training data, which consists of tuples (Θ_k, γ_k, z), M learns a relationship between the features (Θ_k, γ_k) and the disorder. The process of learning the relationship between features and disorders is called training. Using the learned relationship from the training data, M takes a tuple (Θ_j, γ_j) for subject j, with unknown neurophysiology, and predicts the disorder, ẑ. We distinguish the two by noting z as the true disorder and ẑ as the estimate. The process of predicting an unknown disorder from features is called testing.
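For illustration, the following is a minimal sketch of the additional processing function H as summary statistics over a subject-specific parameter set, and of assembling a toy library of (Θ_k, γ_k, z) tuples. The function name, statistics chosen, and synthetic data are illustrative assumptions, not part of the disclosed embodiment.

```python
import numpy as np

def summary_patterns(theta: np.ndarray) -> np.ndarray:
    """Sketch of H: map a parameter time series theta
    (n_params x n_samples) to summary-statistic patterns gamma_k."""
    stats = [
        theta.min(axis=1), theta.max(axis=1), theta.mean(axis=1),
        np.median(theta, axis=1), theta.std(axis=1),
        theta.max(axis=1) - theta.min(axis=1),  # range
    ]
    return np.concatenate(stats)

# Build a toy library L(Z): one (theta_k, gamma_k, z) tuple per subject.
rng = np.random.default_rng(0)
library = []
for z in (0, 1):                                  # two example disorder labels
    for _ in range(5):                            # five subjects per label
        theta_k = rng.standard_normal((13, 200))  # stand-in parameter set
        library.append((theta_k, summary_patterns(theta_k), z))
```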
[0031] When a new subject with unknown pathology is presented to the system, the algorithm estimates the latent parameters, Θ_j, and performs processing to create additional patterns, γ_j. As stated previously, the tuple (Θ_j, γ_j) is then processed by the machine learning algorithm M to return an estimate of the neurological disorder or disorder severity, ẑ. In both the training and testing phases, the feature tuple (Θ, γ) may contain either Θ or γ or both values.
[0032] In our invention, we take advantage of a neurologically plausible, fMRI-validated computational model of speech production, the Directions into Velocities of Articulators (DIVA) model [13], though our approach is compatible with sensorimotor neurological modeling more generally. The DIVA model takes as inputs the speech formants (vocal tract resonances) [1][2] and the fundamental frequency (pitch) of a speech utterance. Then, through an iterative learning process, the model computes a set of parameters that correspond to different aspects of the speech production process, including articulatory commands and auditory and somatosensory feedback errors. We hypothesize that with a neurological, traumatic, or cognitive-stress condition, speech changes arise from impairments along the speech production (or modulating) pathway. Therefore, when the model is trained on speech from a condition, the internal variables will reflect the type and/or severity of the disorder. More generally, sensorimotor disorders will be reflected in changes to neural organizational principles of this type [10][11]. Using these internal "brain state" variables may offer greater specificity than non-neurocomputationally based features that do not attempt to model the underlying neuropathophysiology.
[0033] The mathematical framework of one embodiment is shown in Figure 3. In our framework, a neurophysiological model is parameterized by a set of control variables 301 represented as Θ. These parameters are input to the neurophysiological computational model 303, which generates a predicted time series 305. The predicted time series is compared against measured values 307 from the subject, and the difference 309 is used in an inverse model 311 to update the model's estimates of the internal parameters 313.
The algorithm is as follows:
1. Initialization: Initialize model parameters Θ
2. Model Loop:
2a. Create a predicted multidimensional time series, X_k, indexed by time sample or iteration k, given model parameters Θ_k
2b. Compute the error between the predicted time series and the measured time series, e[n] = F(X_k, X)
2c. Map the error into a model parameter correction, ΔΘ_k = G(e[n]; Θ_k)
2d. Update the model parameters, Θ_k+1 = Θ_k + ΔΘ_k
2e. Exit the loop after N iterations or when an error criterion is reached
3. Optionally perform additional feature extraction in the model parameter space (correlation analysis, coherence, multivariate regression).
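For concreteness, the following is a minimal numerical sketch of this loop for a generic forward model. The toy forward map, the finite-difference Jacobian used for the correction G, and the gain value are illustrative assumptions; they are not the DIVA equations or the patent's parameter update.

```python
import numpy as np

def forward(theta):
    """Stand-in forward model: parameters -> predicted time series X_k."""
    t = np.linspace(0.0, 1.0, 100)
    return theta[0] * np.sin(2 * np.pi * t) + theta[1] * t

def invert(x_measured, theta0, n_iter=10, gain=0.5, eps=1e-6):
    """Steps 1-2e: iterate predict -> error -> correction -> update."""
    theta = np.asarray(theta0, dtype=float)          # 1. initialization
    for _ in range(n_iter):                          # 2. model loop
        x_pred = forward(theta)                      # 2a. predict X_k
        err = x_measured - x_pred                    # 2b. e[n] = F(X_k, X)
        # 2c. map error to a correction via a finite-difference Jacobian
        J = np.column_stack([(forward(theta + dv) - x_pred) / eps
                             for dv in np.eye(theta.size) * eps])
        delta = gain * (np.linalg.pinv(J) @ err)     # candidate correction
        theta = theta + delta                        # 2d. update parameters
    return theta                                     # 2e. exit after N iters

# Toy check: recover parameters from a synthetic measured series.
x_meas = forward(np.array([1.5, -0.7]))
print(invert(x_meas, theta0=[0.0, 0.0]))             # approaches [1.5, -0.7]
```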
[0034] In a conventional system, speech features would be extracted directly from the measured data 307 for subjects of known disorder to build a prediction library. Then, the same features would be extracted for an individual being assessed and compared to the library. By contrast, here the extracted data is applied through a loop including the inverse model 311 and forward model 303 to develop model control parameter data over multiple iterations of the loop. It is those control parameters, or data converted from those control parameters, that are then stored in a library. When an individual is tested for a disorder, the individual's speech-related data is cycled through this process to develop parameters for that individual that can be compared to the stored data in the library.
[0035] Flowcharts for processing of the data in the system of Fig. 3 are presented in Fig. 4 and, for the case of additional statistical analysis on the control parameters, in Fig. 5. The neurophysiological control model 303 is first initialized by target control parameters θ 401. Speech is recorded, for example at a clinical, home, or mobile source, at 403, and the speech features used in the testing are extracted at 405. The recorded speech may be from subjects of known disorder, in order to generate data to be stored in the library, or it may be from a test subject to be analyzed against the stored data.
[0036] At 407, the extracted speech features from the individual are compared to those generated by the neurophysiological control model to generate the error signal 309. The error is then inverted in the neurophysiological model 311 to update the control parameters at 313. If the system has not yet cycled through a required number of iterations, or met a minimum error criterion, it returns to apply the updated control parameters to the neurophysiological control model through the loop 409. Once the control parameters have stabilized to the appropriate degree at 411, the system proceeds to 413. From here, during a training mode, the updated control parameters are stored with the known disorder in the control parameter library. On the other hand, during a test sequence for an individual, the updated control parameters for the individual are compared to those in the library 211 at 415 using the appropriate pattern matching model.
[0037] In the case where additional analysis is to be applied to the updated control parameters for feature extraction prior to library collection or comparison, the process is as illustrated in Fig. 5. The process is substantially the same as that in Fig. 4 except that an additional analysis, for example to extract correlation structure from the updated control parameters, is provided at 501. The library 211B then stores that correlation structure data, and likely also the original control parameters, along with the disorder.
[0038] In one embodiment, the correlation structure features are derived from the DIVA model's 13 time-varying (feedforward) position states, which are sampled at 200 Hz. The input in this embodiment is 3 formant frequencies and the fundamental frequency. The formant tracks were extracted using Kalman-based autoregressive moving average software [2] and the fundamental frequency with Praat [17].
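As an illustrative assumption only, the sketch below extracts a fundamental-frequency track through the praat-parselmouth Python wrapper around the Praat engine (the work described here used Praat itself), with the fundamental-frequency settings described in paragraph [0040] below. The function name and voiced-frame filtering are ours.

```python
import numpy as np
import parselmouth  # pip install praat-parselmouth; wraps the Praat engine

def extract_f0(wav_path: str) -> np.ndarray:
    """Fundamental-frequency track via Praat's autocorrelation method."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(time_step=0.001,      # 1 ms time step
                         pitch_floor=75.0,     # minimum f0 (Hz)
                         pitch_ceiling=600.0)  # maximum f0 (Hz)
    f0 = pitch.selected_array['frequency']     # 0.0 where unvoiced
    return f0[f0 > 0]                          # keep voiced frames only
```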
[0039] In both training and testing, the model parameters that get updated are initialized to "neutral" configurations with respect to the model. For DIVA, the vocal tract parameters (dimensions 1-10 of the 13-dimensional space) are set to a neutral/open configuration of zero. Dimensions 11, 12, and 13 are the fundamental frequency, source air pressure, and voicing indicator. Fundamental frequency is initialized to 0.0, source air pressure is set to -0.5, and voicing is initialized to 0.0. Software for the DIVA component was downloaded from [25] and modified to fit into the system of Fig. 4.
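As a compact restatement of these initial conditions, a sketch of the 13-dimensional parameter vector follows; the array layout is assumed for illustration.

```python
import numpy as np

# Neutral initialization of the 13-dimensional DIVA parameter vector,
# following paragraph [0039] (vector layout assumed for illustration).
theta0 = np.zeros(13)
theta0[0:10] = 0.0   # vocal tract parameters: neutral/open configuration
theta0[10] = 0.0     # fundamental frequency
theta0[11] = -0.5    # source air pressure
theta0[12] = 0.0     # voicing indicator
```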
[0040] The DIVA auditory targets are zones centered around the true target. The error signal is modulated depending on whether, and how far, the sensed production falls outside the auditory target zone. The minima and maxima of the fundamental frequency and formant zones are set to 90% and 110% of the participant-specific extracted values. Praat parameters included a 1 ms time step, the autocorrelation method of fundamental frequency determination, and a minimum and maximum range of 75 and 600 Hz. If a fundamental frequency was not detected, the minimum and maximum ranges were set to 1 Hz and 1000 Hz, respectively.
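A minimal sketch of a zone-modulated error of this kind appears below. Treating the error as exactly zero inside the zone is our simplifying assumption; the disclosure only states that the error is modulated by whether and how far production falls outside the zone.

```python
import numpy as np

def zone_error(produced: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Zone-modulated auditory error: zero inside the target zone
    (90%-110% of the extracted value), distance to the nearest zone
    edge outside it."""
    lo, hi = 0.9 * target, 1.1 * target
    return np.where(produced < lo, produced - lo,
                    np.where(produced > hi, produced - hi, 0.0))
```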
[0041] For DIVA, the somatosensory targets are also specified in terms of zones. The minimum and maximum placement of articulators for six of the eight somatosensory signals are initialized to -1 and -0.25. The somatosensory targets for pressure and voicing minimum and maximum are initialized to 0.75 and 1.0.
[0042] This process represents a 3-to-13 expansion to a higher-level neural space. Articulatory states comprise positions and velocities of articulators such as the tongue, jaw, lips, and larynx. We performed 10 model iterations and output the 13 articulatory features at the conclusion of the 10 iterations. These features were further processed using an advanced form of intra- and inter-feature correlation analysis, but they can be processed by any desired machine learning framework to extract "features of features" [16].
[0043] In another embodiment of the invention [27], we created an entirely new software instantiation of the process in Figure 2, using the model of Figs. 6 and 7, in order to have a neural model of the vocal source. The overall system diagram is similar to [22] in that both systems model feedforward and feedback control of the vocal source, but critically different in that our system performs a sensorimotor transformation step and uses a neurobiophysical component as part of the sensorimotor transformation.
[0044] This embodiment of the invention follows the block and flow diagrams in Figures 2-5, but we provide specific engineering details. Our procedure for estimating the neural signals that control the vocal source, followed by further processing in order to assess neurological disorder severity, follows the essential principles of the invention. Those skilled in the art will recognize that the specific embodiment we discuss need not limit the invention to the vocal source.
[0045] Specifically, the vocal source controls a person's fundamental frequency, perceived as pitch, through the interaction of several muscles. Two key muscles in the modulation process are the cricothyroid and thyroarytenoid muscles. These two muscles are controlled by the brain through branches of the tenth cranial nerve, the vagus nerve. For the sake of brevity, we will refer to the neural control signal of the thyroarytenoid muscle as aTH and the neural control signal of the cricothyroid muscle as aCT. The objective of the full neurophysical computational model is to estimate these unobserved neural control signals using a speech sample.
[0046] In this alternative embodiment, the model in Fig. 6 is used in the diagrams of Figs. 3-5. Here, the forward model is of control of the vocal source exclusively, and in high detail, as opposed to the emphasis on the vocal tract in Fig. 1. The mapping function of the forward model 601 can be seen in the numerical visualization of Fig. 7. The visualization shows the true neurobiophysical mapping function (circles) and its quadratic approximation (X's). Truth values were obtained from fundamental frequency estimates of the glottal waveform, and approximation values were generated from a second-order polynomial fit to the true data with ordinary least squares. The glottal waveform was simulated offline for a canonical speaker in order to create the truth data for the mapping function.
[0047] In this embodiment, the forward model 601 drives the auditory system (implemented as an identity function) 603 to provide a fundamental frequency, which is compared to the subject's fundamental frequency to generate an error 605. The error 605 is applied through an inverse model 607 to create the feedback-based update to the control parameters. The feedforward and feedback gains are omitted for clarity. In this embodiment, the control parameters are the muscle control signals aTH and aCT. Unlike in the conventional DIVA model, somatosensory feedback was not considered necessary, but it could be added in future realizations.
[0048] A key, novel component of this embodiment is a biophysically inspired model of the two major muscles in the vocal source and the activations associated with them. We need to establish a mathematical relationship between the neural activation signals and the fundamental frequency. We develop this relationship by building upon a prior biophysical model of the vocal source [18-20]. In particular, we quantify the relationship between the two muscle activations and the vocal source by fitting a second-order, two-dimensional polynomial with cross terms to data generated from a model of the vocal source.
[0049] The model of the vocal source is an ordinary differential equation model of a canonical larynx. Two input parameters to the model are the muscle activations. However, the output of the model is a glottal waveform (the puffs of air that exit the vocal folds before being shaped into speech by the vocal tract). The frequency of the puffs of air is the fundamental frequency of the ultimately produced speech waveform. We create a table of neural input strengths for each of the two muscle activations, from least activated to most activated, and compute the glottal waveform for each muscle activation tuple. A muscle activation tuple is the joint specification of aCT and aTH, written as (aCT, aTH). For each glottal waveform, we use a threshold-based peak detection algorithm to identify the periodic puffs of air. We compute the difference in time between each air puff in order to estimate the fundamental frequency for the tuple. We take the median estimated fundamental frequency to be robust to errors in peak detection. This procedure is repeated for each tuple in our list of muscle activations.
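A minimal sketch of this per-tuple estimate follows, using scipy's peak finder as a stand-in for the threshold-based peak detector; the function name and threshold handling are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def f0_from_glottal(waveform: np.ndarray, fs: float,
                    threshold: float) -> float:
    """Estimate f0 from a simulated glottal waveform: detect the
    periodic air puffs with a threshold-based peak picker, convert
    inter-peak intervals to frequencies, and take the median for
    robustness to spurious or missed peaks."""
    peaks, _ = find_peaks(waveform, height=threshold)
    intervals = np.diff(peaks) / fs        # seconds between air puffs
    return float(np.median(1.0 / intervals))  # assumes >= 2 peaks found
```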
[0050] With a complete list of muscle activations and corresponding fundamental frequencies, we use ordinary least squares to fit a paraboloid of the form z = Ax² + Bxy + Cy² + Dx + Ey + F, where A, B, C, D, E, F are constant coefficients estimated from our list of tuples. The aCT and aTH values are x and y, and z is the fundamental frequency. Thus we have established a functional relationship between aCT, aTH, and fundamental frequency.
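This fit reduces to an ordinary linear least-squares problem in the six coefficients, since the model is linear in A through F; a sketch with hypothetical variable names:

```python
import numpy as np

def fit_paraboloid(aCT: np.ndarray, aTH: np.ndarray,
                   f0: np.ndarray) -> np.ndarray:
    """OLS fit of z = A*x^2 + B*x*y + C*y^2 + D*x + E*y + F,
    with x = aCT, y = aTH, z = f0.  Returns (A, B, C, D, E, F)."""
    X = np.column_stack([aCT**2, aCT*aTH, aTH**2,
                         aCT, aTH, np.ones_like(aCT)])
    coeffs, *_ = np.linalg.lstsq(X, f0, rcond=None)
    return coeffs

# Usage: predicted fundamental frequency for the listed tuples is X @ coeffs.
```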
[0051] To estimate the aCT and aTH values, which are unknown for a person's spoken speech, we need to estimate the fundamental frequency of the speech, and we must iterate through the process diagrammed in Figures 3-5. The extraction of fundamental frequency is accomplished using the freely available Praat software, but those skilled in the art will recognize that any fundamental frequency estimation algorithm could substitute equally well.
[0052] As an additional step, we normalize each person's fundamental frequency trajectory by adding or subtracting an offset, if necessary. This normalization step changes the absolute value of the person's fundamental frequency trajectory, and it is only applied under certain conditions. The step places the person's fundamental frequency within the fundamental frequency bounds that would be appropriate for our derived functional relationship between muscle activation and fundamental frequency. We also excise non-voiced regions of speech and concatenate the remaining voiced pieces (i.e., we only analyze the portions of the speech waveform that have a non-zero fundamental frequency). Unvoiced speech is not analyzed but can be included in expanded speech feature and auditory and somatosensory target sets.
[0053] To advance through the model described in Figures 3-5, we begin by initializing the model's two muscle activations to a medium level of activation. Through iterative refinement, we then improve our estimate of the unobserved muscle activations as follows.
[0054] First, we initialize the auditory target of the brain to the observed, normalized fundamental frequency we have extracted from the speech waveform. Then, we use the current level of muscle activations and our functional relationship to generate the corresponding fundamental frequency, f0, at the first instant of time in the speech waveform. After that, we compare the generated f0 value to the auditory target f0 value for the corresponding instant of time. The comparison is done with a difference operation, but more sophisticated error generation mechanisms could be used, such as a nonlinear transformation applied to the difference operation. The error signal, in conjunction with an estimate of the muscle activations that were used to generate the f0 value, is used to create a muscle update command. We call the creation of the muscle update command from the error signal the sensorimotor inversion process.
[0055] The mathematical operation that implements the sensorimotor inversion makes use of the pseudoinverse of the Jacobian of the functional relationship between the muscle activations and the fundamental frequency. We establish the Jacobian by taking the matrix of partial derivatives associated with the functional relationship. Because the matrix is not square, no standard inverse exists. Therefore, we use the Moore-Penrose pseudoinverse of the Jacobian. Multiplying the pseudoinverse by the error signal yields an update command.
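A sketch of this inversion for the paraboloid of paragraph [0050], whose 1x2 Jacobian can be written analytically; the function name and the absence of gain scaling (applied in paragraph [0056]) are illustrative choices.

```python
import numpy as np

def muscle_update(aCT: float, aTH: float, error: float,
                  c: np.ndarray) -> np.ndarray:
    """Sensorimotor inversion for the fitted map
    f0 = A*aCT^2 + B*aCT*aTH + C*aTH^2 + D*aCT + E*aTH + F.
    The 1x2 Jacobian is not square, so the Moore-Penrose
    pseudoinverse maps the scalar f0 error back to a
    (delta_aCT, delta_aTH) update command."""
    A, B, C, D, E, F = c                      # F does not enter the Jacobian
    J = np.array([[2*A*aCT + B*aTH + D,       # df0/d_aCT
                   B*aCT + 2*C*aTH + E]])     # df0/d_aTH
    return (np.linalg.pinv(J) @ np.array([error])).ravel()
```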
[0056] The motor feedback command is delayed by a suitable amount of time to represent the neurophysiological auditory delay in humans. Additionally, the motor update command is scaled by a feedback gain factor before being added to the current feedforward motor command. The delay, n_d, is introduced by computing the error update at time n using the produced signal y[n − n_d] as opposed to y[n]. The feedforward gain α_ff and feedback gain α_fb weight the feedforward motor command x_ff[n] and the feedback motor command x_fb[n] to create the composite command x[n]: x[n] = α_ff · x_ff[n] + α_fb · x_fb[n]. The gains are both positive, sum to one, and are chosen empirically for system performance. The gains and delays are kept constant for all subjects but, like the aCT and aTH values, could also be learned in other embodiments. The new, composite motor plan is then used to generate a new fundamental frequency value. The new fundamental frequency value is compared to the auditory target for the timestep, an error signal is generated, the sensorimotor transformation takes place, and a new motor feedback update is generated. This process continues for each time sample in the fundamental frequency target derived from a person's speech. When the last sample is generated, all of the motor updates from the error signals are used to modify the feedforward neural plan in the model's brain. The updated feedforward neural plan is then used, and the whole process is repeated. Thus, by iterating both within and across time, the auditory signal generated using the iteratively updated neural plan will gradually converge, or the iterative process will be stopped after a fixed number of iterations. Upon conclusion, the estimated neural aCT and aTH values are taken from the model and used in the next phase of the process.
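A minimal sketch of the composite command follows. The gain and delay values shown are illustrative placeholders, not the empirically chosen values of the embodiment, and the delay is applied here to the feedback command stream as a simplification of the y[n − n_d] error-computation delay described above.

```python
import numpy as np

def composite_command(x_ff: np.ndarray, x_fb: np.ndarray,
                      a_ff: float = 0.85, a_fb: float = 0.15,
                      n_d: int = 5) -> np.ndarray:
    """Composite motor command x[n] = a_ff*x_ff[n] + a_fb*x_fb[n],
    with the feedback contribution delayed by n_d samples to model
    the auditory feedback delay.  Gains are positive and sum to one;
    assumes 1 <= n_d < len(x_fb)."""
    x_fb_delayed = np.concatenate([np.zeros(n_d), x_fb[:-n_d]])
    return a_ff * x_ff + a_fb * x_fb_delayed
```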
[0057] In the latent space discovered by the model, the two inferred time series, the aCT and aTH neural activations, are used in a further estimation step to extract additional patterns for the subject under evaluation. Each of these time series individually may contain information that is representative of a neurological disorder. However, we take an additional step in our algorithm by analyzing the interaction between the time series. Thus, another novel aspect of this invention is the recognition that the coordination between the time series is crucial for estimating informative features. No prior work has used the coordination between neural signals as a defining contribution to its methodology. Though we used coordination in this specific embodiment, other relations within and between the time series may also be informative and provide complementary information.
[0058] Specifically, we process the two time series using the cross-correlation procedure first proposed by Williamson et al. [16]. Briefly, we compute the autocorrelation of each time series and their cross-correlation to create four new sequences of data. Each correlation sequence is subsampled at different delay scales in order to fill a block matrix. The eigenvalues of the matrix are computed and used as features. The application of the Williamson et al. [16] correlation code to the muscle activation time series is a unique step in the process, and it completes the procedure for estimating patterns that may be indicative of a neurological disorder.
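A simplified sketch in the spirit of the correlation-structure features of [16] follows. The delay scales, embedding dimension, and the use of a joint correlation matrix with sorted eigenvalues are illustrative simplifications of the published procedure, not a reimplementation of it.

```python
import numpy as np

def xcorr_eig_features(a: np.ndarray, b: np.ndarray,
                       delays=(1, 3, 7), dim: int = 4) -> np.ndarray:
    """Embed two time series at several delay scales, form the joint
    channel-delay correlation (block) matrix containing their auto-
    and cross-correlations, and use its sorted eigenvalues as features."""
    feats = []
    for d in delays:
        # delay-embed both series: rows are time, columns are lags 0..dim-1
        n = min(len(a), len(b)) - d * (dim - 1)
        emb = np.column_stack(
            [a[i * d: i * d + n] for i in range(dim)] +
            [b[i * d: i * d + n] for i in range(dim)])
        corr = np.corrcoef(emb, rowvar=False)   # (2*dim) x (2*dim) block
        feats.append(np.sort(np.linalg.eigvalsh(corr))[::-1])
    return np.concatenate(feats)
```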
[0059] Pattern estimation per person is one step of the process. The full system requires estimating these patterns for many subjects who are known to have a disorder to create a library of patterns. Then, a new subject with an unknown disorder is analyzed to create their pattern. The new subject's pattern is compared against the library of patterns using the machine learning technique Extremely Randomized Trees [21]. Other machine learning techniques such as support vector machines or Gaussian mixture models [33] are alternative or even complementary algorithms to the Extremely Randomized Trees algorithm.
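A minimal sketch of this comparison step using scikit-learn's ExtraTreesClassifier on synthetic patterns; the feature dimensions, labels, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Toy library: one pattern vector per subject with a known disorder label.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((40, 24))    # 40 subjects, 24 pattern features
labels = rng.integers(0, 2, size=40)        # e.g., 0 = control, 1 = disorder

model = ExtraTreesClassifier(n_estimators=200, random_state=0)
model.fit(patterns, labels)                 # learn from the pattern library

new_subject = rng.standard_normal((1, 24))  # pattern of an unknown subject
print(model.predict(new_subject),           # predicted disorder label
      model.predict_proba(new_subject))     # class probabilities
```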
[0060] Although our description above is specific to speech and to Parkinson's disease and major depressive disorder (the two application areas thus far addressed), the key contribution is that modeling the system parameters (or "state space") provides a novel way of defining features. The forward computational model is a model not only of the neurological underpinnings of motor behavior but also of the interactions of the neural substrate with the physical articulators. Due to physical constraints, the speech articulators and (other) components of the speech system can jointly occupy only a small fraction of the articulator state space, and the effect of neural damage on articulator trajectories, or of damage to the articulators themselves, could be captured in a model and therefore could provide useful features. The general space covers vocal, biomechanical, kinematic, and dynamic models of all aspects of speech at multiple time scales, from phones to sentences. We are explicitly including the idea that our forward model (for example, DIVA) embodies the physical constraints governing joint articulator movements (and others, such as laryngeal and respiratory muscles), so this provides benefit when the inverse mapping from measurement space (for example, audio features) back into the model state space is performed. The general concept for this application is that modeling the control parameters of a physical system, rather than just its output, will provide greater benefit (i.e., lower error).
Advantages over existing methods
[0061] Although there has been significant effort in using potential vocal (and other sensorimotor) biomarkers for neurological disorder, stress, and emotion classification, there has been no exploitation of biomarkers that result from inverse engineering neural models of the condition. Many systems focus on data-centric, acoustic features such as jitter, shimmer, and mel-frequency cepstral coefficients. These features are then combined in a regression or classification framework such as an artificial neural network or Gaussian mixture model for affect estimation or disorder severity prediction [3]. By contrast, a branch of computational psychiatry specifically seeks to use observed patient features to fit a neurobiologically inspired representation of the disorder that encapsulates an explanation of the observed symptoms [12]. Our model is related in a direct fashion to known neurobiological functions and relationships that are localized in the brain, larynx, and vocal tract. This neurobiological focus on the speech system distinguishes our approach from [12], which emphasizes psychological process models that describe, for example, response inhibition or decision making. Consequently, our parameter estimates for severity may point more directly to causes of dysfunction and possible points of treatment than traditional approaches. Prior work [14, 15] has demonstrated that inferring articulatory parameters from acoustic data improved speech recognition error rates. However, in [15] this inference was not based on a neurophysical computational model but on known mappings between the biomechanics and acoustics; it used an implicit model of articulator position to acoustic output. The known mappings are coarse approximations such as vowel height and tongue front-back position. By contrast, the DIVA mappings (and other speech-related sensorimotor mappings) may have significant specificity, and allow building a model to match a specific speaker instead of a generic speaker, while also allowing specificity across disorders. In our case, individualization occurs because we are reproducing a given speaker's formant trajectories and other speech features for an utterance rather than one generic set of formant tracks for the utterance. Other forms of individualization could incorporate additional information about the speaker's medical history or demographics (age, gender, height, weight), structural information derived from MRI scans of the brain, articulators, and larynx, current or past medication consumption, and smoking history.
[0062] The advantages of the neurophysical computational model over a purely biomechanical model such as [23, 24] are threefold. First, the neurophysical model provides a holistic description of the speech motor system and its dynamics. Consequently, it affords multiple points of insight into the underlying neurophysiology. Second, the inversion process that is part of the neurophysiological model takes place in a neurobiologically plausible manner. Consequently, the estimated patterns of activity may be more accurate and less prone to artifacts than methods that take a purely statistical approach to parameter estimation. Third, the neurophysiological model provides a direct avenue for assessment of neurobiological structures that are known to be directly involved in different disorders. For example, Parkinson's disease is characterized by degradation of the substantia nigra pars compacta, a specific region of neural tissue deep in the brain. In a biophysical model that is limited to considering only the movement of the laryngeal muscles or the speech articulators, patterns that are estimated from these components of the speech system may be noisy or non-specific. By estimating the level of activity in key brain areas through the effect of those areas on the core speech network, and in turn on the laryngeal muscles and articulators, a neurophysiological model can be both more sensitive to the presence of a disorder and more discriminating between different disorders, as different disorders may have differential effects on neuronal regions.
Commercial Applications
[0063] There is growing interest in the use of vocal biomarkers for detecting changes in mental condition and emotional state, which reflect underlying changes in neurophysiology. Example conditions include major depressive disorder (MDD), Parkinson's disease, and speech, language, and articulation disorders. Other applications in which neuro-motor and neuro-physiological coordination can break down include early onset detection of traumatic brain injury (TBI), amyotrophic lateral sclerosis (ALS), often referred to as Lou Gehrig's disease, dementia (including Alzheimer's disease), and multiple sclerosis (MS), as well as effects due to physical and cognitive stress and sleep disorders, all of which show voice (and other sensorimotor) aberrations in early stages. Tele-monitoring of treatment for all such conditions is also an important application area.
References
[1] D. Rudoy, D. N. Spendley, and P. Wolfe. Conditionally linear Gaussian models for estimating vocal tract resonances. Proc. Interspeech, 526-529, 2007.
[2] D. Mehta, D. Rudoy, and P. Wolfe. Kalman-based autoregressive moving average modeling and inference for formant and antiformant tracking. The Journal of the Acoustical Society of America, 132(3), 1732-1746, 2012.
[3] J. R. Williamson, T. F. Quatieri, B. S. Helfer, R. Horwitz, B. Yu, and D. D. Mehta. Vocal biomarkers of depression based on motor incoordination. AVEC, 2013.
[4] J. Mundt, P. Snyder, M. S. Cannizzaro, K. Chappie, and D. S. Geralts. Voice acoustic measures of depression severity and treatment response collected via interactive voice response (IVR) technology. J. Neurolinguistics, 20(1): 50-64, 2007.
[5] T. F. Quatieri and N. Malyska. Vocal-Source Biomarkers for Depression: A Link to Psychomotor Activity. Interspeech, 2012.
[6] A. Trevino, T. F. Quatieri, and N. Malyska. Phonologically-Based Biomarkers for Major Depressive Disorder. EURASIP Journal on Advances in Signal Processing: Special Issue on Emotion and Mental State Recognition from Speech, 2011(1), 1-18, 2011.
[7] D. Sturim, P. Torres-Carrasquillo, T. F. Quatieri, N. Malyska, and A. McCree. Automatic Detection of Depression in
Speech using Gaussian Mixture Modeling with Factor Analysis. Interspeech, 2011.
[8] B. S. Helfer, T. F. Quatieri, J. R. Williamson, D. D. Mehta, R. Horwitz, and B. Yu. Classification of depression state based on articulatory precision. Interspeech, 2013.
[9] Asgari, Meysam, Izhak Shafran, and Lisa B. Sheeber. "Inferring clinical depression from speech and spoken utterances."
Machine Learning for Signal Processing (MLSP), 2014 IEEE International Workshop on. IEEE, 2014.
[10] E. Tognoli and J. A. Scott Kelso. Brain coordination dynamics: True and false faces of phase synchrony and metastability. Prog Neurobiol. 87(1): 31-40, 2009.
[11] S. L. Bressler and E. Tognoli. Operational principles of neurocognitive networks. International Journal of Psychophysiology, 60: 139-148, 2006.
[12] Wiecki, Thomas V., Jeffrey Poland, and Michael J. Frank. "Model-Based Cognitive Neuroscience Approaches to
Computational Psychiatry Clustering and Classification." Clinical Psychological Science 3.3 (2015): 378-399.
[13] Tourville, Jason A, and Frank H Guenther. "The DIVA model: A neural theory of speech acquisition and production."
Language and Cognitive Processes 26.7 (2011): 952-981.
[14] King, Simon, et al. "Speech production knowledge in automatic speech recognition." The Journal of the
Acoustical Society of America 121.2 (2007): 723-742.
[15] Livescu, Karen. Feature-based pronunciation modeling for automatic speech recognition. Diss.
Massachusetts Institute of Technology, 2005.
[16] Williamson, J.R., Bliss, D., Browne, D.W., and Narayanan, J.T., Seizure prediction using EEG spatiotemporal correlation structure. Epilepsy Behav., vol. 25, no. 2, pp. 230-238, 2012.
[17] Boersma, Paul & Weenink, David (2015). Praat: doing phonetics by computer [Computer program].
http://www.praat.org/
[18] I. R. Titze and B. H. Story, "Rules for controlling low-dimensional vocal fold models with muscle activation," The
Journal of the Acoustical Society of America, vol. 112, no. 3, pp. 1064-1076, 2002.
[19] B. H. Story and I. R. Titze, "Voice simulation with a body-cover model of the vocal folds," The Journal of the
Acoustical Society of America, vol. 97, no. 2, pp. 1249-1260, 1995.
[20] M. Zañartu Salas. "Influence of acoustic loading on the flow induced oscillations of single mass models of the human larynx," Ph.D. dissertation, Purdue University, West Lafayette, 2006.
[21] P. Geurts, D. Ernst, and L. Wehenkel, "Extremely randomized trees," Machine learning, vol. 63, no. 1, pp. 3-42, 2006.
[22] Larson, Charles R., et al. "Interactions between auditory and somatosensory feedback for voice F 0 control." Experimental Brain Research 187.4 (2008): 613-621.
[23] Gomez-Vilda, Pedro, et al. "Characterizing neurological disease from voice quality biomechanical analysis." Cognitive
Computation 5.4 (2013): 399-425.
[24] Gomez-Vilda, Pedro, et al. "Glottal source biometrical signature for voice pathology detection." Speech
Communication 51.9 (2009): 759-781.
[25] "DIVA Source Code" DIVAsimulink7-6. Boston University. ttp: /www.bii.6dii'FJpeechlab software/diva-sotu\x-cod6/ [26] Guenther, Frank H., Satrajit S. Ghosh, and Jason A. Tourville. "Neural modeling and imaging of the cortical interactions underlying syllable production." Brain and language 96.3 (2006): 280-301.
[27] Ciccarelli, Gregory, Thomas Quatieri, and Satrajit Ghosh. "Neurophysiological Vocal Source Modeling for Biomarkers of Disease." Interspeech, 2016, to appear.
[28] Patel, Rupal, et al. ""The Caterpillar": A Novel Reading Passage for Assessment of Motor Speech Disorders." American
Journal of Speech-Language Pathology 22.1 (2013): 1-9.
[29] Fairbanks, Grant, ed. Voice and articulation: Drillbook. Harper & Brothers, 1940.
[30] Darley, F. L., Aronson, A. E., and Brown, J. R. (1975). Motor Speech Disorders. 3rd ed. Philadelphia, PA: W.B. Saunders Company.
[31] Van Riper, C. (1963). Speech Correction. 4th ed. Englewood Cliffs, NJ: Prentice Hall.
[32] Aesop (2016). The North Wind and the Sun. http://en.wikipedia.org/wiki/The_North_Wind_and_the_Sun
[33] Bishop, Christopher M. Pattern Recognition and Machine Learning. Springer, 2006.
Claims
1. A method of assessing a condition of a subject, the method comprising, in an automated system:
extracting speech features from data from a subject;
comparing the extracted speech features to predicted speech features from a neurophysiological computational model to obtain an error signal;
inverting the error signal through an inverse of the neurophysiological computational model to update internal parameters of the neurophysiological computational model; and
processing the updated internal parameters as biomarkers in assessing the subject.
2. The method of Claim 1 further comprising applying updated internal
parameters to the neurophysiological computational model in multiple iterations of the steps of comparing, inverting and applying to obtain updated internal parameters to be processed.
3. The method of any preceding claim further comprising performing an
analysis of the updated internal parameters to extract features for assessing the subject.
4. The method of any preceding claim wherein a correlation structure for the subject is extracted from the updated internal parameters in the step of processing.
5. The method of any preceding claim wherein the extracted speech features and the predicted speech features include representations of vocal source.
6. The method of any preceding claim wherein the extracted speech features and the predicted speech features include representations of vocal tract.
7. The method of any preceding claim wherein the extracted speech features and the predicted speech features include time series representations.
8. The method of any preceding claim wherein the subject is assessed for a neurological
disorder.
9. The method of any of claims 1-7 wherein the subject is assessed for depression.
10. The method of any of claims 1-7 wherein the subject is assessed for
Parkinson's disease.
11. The method of any of claims 1-7 wherein the subject is assessed for
traumatic brain injury.
12. The method of any of claims 1-7 wherein the subject is assessed for the
effects of cognitive stress.
13. The method of any preceding claim wherein the neurophysiological
computational model is a muscular control model and the internal parameters are control parameters.
14. An automated system for assessing a condition of a subject comprising:
a forward neurophysical computational model that responds to internal parameters to predict speech features;
an error generator that generates an error signal between the predicted speech features and speech features extracted from data of a subject;
an inverse neurophysical computational model that responds to the error signal to update the internal parameters to the forward neurophysical computational model; and
a disorder predictor that predicts a disorder for a subject based on the updated internal parameters for the subject.
15. The system of claim 14 further comprising a statistical analyzer that extracts features from the updated internal parameters to be processed by the disorder predictor.
16. The system of claim 14 or 15 further comprising a correlation structure
extractor that extracts a correlation structure from the updated control parameters to be stored in the library.
17. A system of claim 14, 15, or 16 wherein the forward neurophysiological computational model predicts representations of vocal source.
18. The system of any of claims 14-17 wherein the forward neurophysiological computational model predicts representations of vocal tract.
19. A system of any of claims 14-18 wherein the forward neurophysiological computational model predicts speech features that include time series representations.
20. A system of any of claims 14-19 wherein the neurophysiological
computational model is a muscular control model and the internal parameters are control parameters.
21. A method of assessing a condition of a subject, the method comprising, in an automated system:
receiving a speech related signal of the subject;
extracting speech features from the speech related signal;
comparing the extracted speech features to predicted speech features from a control model to obtain an error signal;
inverting the error signal through an inverse of the control model to update control parameters of the control model;
processing the updated control parameters in a comparison with parameters in a library associated with a disorder; and
assessing the subject based on the comparing step.
22. An automated system for assessing a condition of a subject comprising:
a forward control model that responds to control parameters to predict speech features;
an error generator that generates an error signal between the predicted speech features and speech features extracted from a speech related signal of a subject;
an inverse control model that responds to the error signal to update the control parameters to the forward control model;
a library storing data derived from updated control parameters in association with the disorder; and
a disorder predictor that predicts a disorder for a subject based on updated control parameters for the subject and the data stored in the library.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562207259P | 2015-08-19 | 2015-08-19 | |
US62/207,259 | 2015-08-19 | ||
US201562214755P | 2015-09-04 | 2015-09-04 | |
US62/214,755 | 2015-09-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017031350A1 true WO2017031350A1 (en) | 2017-02-23 |
Family
ID=56889204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2016/047609 WO2017031350A1 (en) | 2015-08-19 | 2016-08-18 | Assessing disorders through speech and a computational model |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017031350A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109285551A (en) * | 2018-09-18 | 2019-01-29 | 上海海事大学 | Disturbances in patients with Parkinson disease method for recognizing sound-groove based on WMFCC and DNN |
US10748644B2 (en) | 2018-06-19 | 2020-08-18 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11120895B2 (en) | 2018-06-19 | 2021-09-14 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
EP3821815A4 (en) * | 2018-07-13 | 2021-12-29 | Life Science Institute, Inc. | Mental/nervous system disorder estimation system, estimation program, and estimation method |
-
2016
- 2016-08-18 WO PCT/US2016/047609 patent/WO2017031350A1/en active Application Filing
Non-Patent Citations (37)
Title |
---|
"DIVA Source Code", DIVASIMULINK-7-6, Retrieved from the Internet <URL:http://www.bu.edu/speechlab/software/diva-source-code> |
"Voice and articulation: Drillbook", 1940, HARPER & BROTHERS |
A. TREVINO; T. F. QUATIERI; N. MALYSKA: "Phonologically-Based Biomarkers for Major Depressive Disorder", EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING: SPECIAL ISSUE ON EMOTION AND MENTAL STATE RECOGNITION FROM SPEECH, 2011, pages 1 - 18 |
AESOP, THE NORTH WIND AND THE SUN, 2016, Retrieved from the Internet <URL:http://en.wikipedia.org/wiki/The North Wind and the Sun> |
ASGARI, MEYSAM; IZHAK SHAFRAN; LISA B. SHEEBER: "Inferring clinical depression from speech and spoken utterances", MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2014 IEEE INTERNATIONAL WORKSHOP ON, 2014 |
B. H. STORY; I. R. TITZE: "Voice simulation with a body-cover model of the vocal folds", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 97, no. 2, 1995, pages 1249 - 1260 |
B. S. HELFER; T. F. QUATIERI; J. R. WILLIAMSON; D. D. MEHTA; R. HORWITZ; B. YU: "Classification of depression state based on articulatory precision", INTERSPEECH, 2013 |
BISHOP, CHRISTOPHER M: "Pattern recognition", MACHINE LEARNING, 2006, pages 128 |
BOERSMA, PAUL; WEENINK, DAVID, PRAAT: DOING PHONETICS BY COMPUTER [COMPUTER PROGRAM, 2015, Retrieved from the Internet <URL:http://www.praat.org> |
CICCARELLI, GREGORY; THOMAS QUATIERI; SATRAJIT GHOSH: "Neurophysiological Vocal Source Modeling for biomarkers of Disease", INTERSPEECH, 2016 |
CUMMINS NICHOLAS ET AL: "A review of depression and suicide risk assessment using speech analysis", SPEECH COMMUNICATION, vol. 71, 6 April 2015 (2015-04-06), pages 10 - 49, XP029235996, ISSN: 0167-6393, DOI: 10.1016/J.SPECOM.2015.03.004 * |
D. MEHTA; D. RUDOY; P. WOLFE: "Kalman-based autoregressive moving average modeling and inference for formant and antiformant tracking", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 132, no. 3, 2012, pages 1732 - 1746, XP012163271, DOI: doi:10.1121/1.4739462 |
D. RUDOY; D. N. SPENDLEY; P. WOLFE: "Conditionally linear Gaussian models for estimating vocal tract resonances", PROC. INTERSPEECH, 2007, pages 526 - 529 |
D. STURIM; P. TORRES-CARRASQUILLO; T. F. QUATIERI; N. MALYSKA; A. MCCREE: "Automatic Detection of Depression in Speech using Gaussian Mixture Modeling with Factor Analysis", INTERSPEECH, 2011 |
DARLEY, F. L.; ARONSON, A. E.; BROWN, J. R.: "Motor speech disorders", 1975, PA W.B. SAUNDERS COMPANY |
E. TOGNOLI; J. A. SCOTT KELSO: "Brain coordination dynamics: True and false faces of phase synchrony and metastability", PROG NEUROBIOL., vol. 87, no. 1, 2009, pages 31 - 40, XP025771850, DOI: doi:10.1016/j.pneurobio.2008.09.014 |
GOMEZ-VILDA, PEDRO ET AL.: "Characterizing neurological disease from voice quality biomechanical analysis", COGNITIVE COMPUTATION, vol. 5.4, 2013, pages 399 - 425 |
GOMEZ-VILDA, PEDRO ET AL.: "Glottal source biometrical signature for voice pathology detection", SPEECH COMMUNICATION, vol. 51.9, 2009, pages 759 - 781, XP026223596, DOI: doi:10.1016/j.specom.2008.09.005 |
GUENTHER, FRANK H.; SATRAJIT S. GHOSH; JASON A. TOURVILLE: "Neural modeling and imaging of the cortical interactions underlying syllable production", BRAIN AND LANGUAGE, vol. 96.3, 2006, pages 280 - 301 |
I. R. TITZE; B. H. STORY: "Rules for controlling low-dimensional vocal fold models with muscle activation", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 112, no. 3, 2002, pages 1064 - 1076, XP012003023, DOI: doi:10.1121/1.1496080 |
J. MUNDT; P. SNYDER; M. S. CANNIZARO; K. CHAPPIE; D. S. GERALTS: "Voice acoustic measures of depression severity and treatment response collected via interactive voice response (IVR) technology", J. NEUROLINGUISTICS, vol. 20, no. 1, 2007, pages 50 - 64, XP005780157, DOI: doi:10.1016/j.jneuroling.2006.04.001 |
J. R. WILLIAMSON; T. F. QUATIERI; B. S. HELFER; R. HORWITZ; B. YU; D. D. MEHTA: "Vocal biomarkers of depression based on motor incoordination", AVEC, 2013 |
JAMES R. WILLIAMSON ET AL: "Vocal biomarkers of depression based on motor incoordination", PROCEEDINGS OF THE 3RD ACM INTERNATIONAL WORKSHOP ON AUDIO/VISUAL EMOTION CHALLENGE, AVEC '13, 1 January 2013 (2013-01-01), New York, New York, USA, pages 41 - 48, XP055194796, ISBN: 978-1-45-032395-6, DOI: 10.1145/2512530.2512531 * |
KING, SIMON ET AL.: "Speech production knowledge in automatic speech recognition", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 121.2, 2007, pages 723 - 742, XP012096409, DOI: doi:10.1121/1.2404622 |
LARSON, CHARLES R. ET AL.: "Interactions between auditory and somatosensory feedback for voice F 0 control.", EXPERIMENTAL BRAIN RESEARCH, vol. 187.4, 2008, pages 613 - 621, XP019622034 |
LIVESCU, KAREN: "Feature-based pronunciation modeling for automatic speech recognition", DISS. MASSACHUSETTS INSTITUTE OF TECHNOLOGY, 2005 |
M. ZAÑARTU SALAS: "Influence of acoustic loading on the flow induced oscillations of single mass models of the human larynx", PH.D. DISSERTATION, 2006 |
P. GEURTS; D. ERNST; L. WEHENKEL: "Extremely randomized trees", MACHINE LEARNING, vol. 63, no. 1, 2006, pages 3 - 42, XP019213509, DOI: doi:10.1007/s10994-006-6226-1 |
PATEL, RUPAL ET AL.: "The Caterpillar'': A Novel Reading Passage for Assessment of Motor Speech Disorders", AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY, vol. 22.1, 2013, pages 1 - 9 |
S. L. BRESSLER; E. TOGNOLI: "Operational principles of neurocognitive networks", INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY, vol. 60, 2006, pages 139 - 148, XP024954082, DOI: doi:10.1016/j.ijpsycho.2005.12.008 |
T. F. QUATIERI; N. MALYSKA: "Vocal-Source Biomarkers for Depression: A Link to Psychomotor Activity", INTERSPEECH, 2012 |
THOMAS F QUATIERI ET AL: "Vocal-Source Biomarkers for Depression: A Link to Psychomotor Activity", 1 January 2013 (2013-01-01), pages 2172 - 2176, XP055194819, Retrieved from the Internet <URL:https://www.ll.mit.edu/mission/cybersec/publications/publication-files/full_papers/2012_09_09_MalyskaN_Interspeech_FP.pdf> [retrieved on 20150610] * |
TOURVILLE J A ET AL: "The DIVA model: A neural theory of speech acquisition and production", 1 January 2011 (2011-01-01), pages 1 - 27, XP002762913, Retrieved from the Internet <URL:https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3650855/pdf/nihms461273.pdf> [retrieved on 20161020], DOI: 10.1080/01690960903498424 * |
TOURVILLE, JASON A; FRANK H GUENTHER: "The DIVA model: A neural theory of speech acquisition and production", LANGUAGE AND COGNITIVE PROCESSES, vol. 26.7, 2011, pages 952 - 981 |
VAN RIPER, C.: "Speech correction", 1963, NJ PRENTICE HALL |
WIECKI, THOMAS V.; JEFFREY POLAND; MICHAEL J. FRANK: "Model-Based Cognitive Neuroscience Approaches to Computational Psychiatry Clustering and Classification", CLINICAL PSYCHOLOGICAL SCIENCE, vol. 3.3, 2015, pages 378 - 399 |
WILLIAMSON, J.R.; BLISS, D.; BROWNE, D.W.; NARAYANAN, J.T.: "Seizure prediction using EEG spatiotemporal correlation structure", EPILEPSY BELIAV., vol. 25, no. 2, 2012, pages 230 - 238, XP002740764, DOI: doi:10.1016/j.yebeh.2012.07.007 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748644B2 (en) | 2018-06-19 | 2020-08-18 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11120895B2 (en) | 2018-06-19 | 2021-09-14 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11942194B2 (en) | 2018-06-19 | 2024-03-26 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
EP3821815A4 (en) * | 2018-07-13 | 2021-12-29 | Life Science Institute, Inc. | Mental/nervous system disorder estimation system, estimation program, and estimation method |
CN109285551A (en) * | 2018-09-18 | 2019-01-29 | 上海海事大学 | Disturbances in patients with Parkinson disease method for recognizing sound-groove based on WMFCC and DNN |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10127929B2 (en) | Assessing disorders through speech and a computational model | |
CA2928005C (en) | Using correlation structure of speech dynamics to detect neurological changes | |
Tăuţan et al. | Artificial intelligence in neurodegenerative diseases: A review of available tools with a focus on machine learning techniques | |
US10438603B2 (en) | Methods of decoding speech from the brain and systems for practicing the same | |
Williamson et al. | Vocal and facial biomarkers of depression based on motor incoordination and timing | |
US20200260977A1 (en) | Method for diagnosing cognitive disorder, and computer program | |
Rigas et al. | Biometric identification based on the eye movements and graph matching techniques | |
Williamson et al. | Vocal biomarkers of depression based on motor incoordination | |
Belo et al. | Biosignals learning and synthesis using deep neural networks | |
Povinelli et al. | Statistical models of reconstructed phase spaces for signal classification | |
Mishra et al. | Characterization of $ S_1 $ and $ S_2 $ heart sounds using stacked autoencoder and convolutional neural network | |
WO2017031350A1 (en) | Assessing disorders through speech and a computational model | |
Tsai et al. | Embedding stacked bottleneck vocal features in a LSTM architecture for automatic pain level classification during emergency triage | |
Quatieri et al. | Multimodal biomarkers to discriminate cognitive state | |
EP3617964A1 (en) | Performance measurement device, performance measurement method and performance measurement program | |
Meghraoui et al. | Parkinson’s disease recognition by speech acoustic parameters classification | |
Unger et al. | A generalized procedure for analyzing sustained and dynamic vocal fold vibrations from laryngeal high-speed videos using phonovibrograms | |
Pereira et al. | Physiotherapy Exercises Evaluation using a Combined Approach based on sEMG and Wearable Inertial Sensors. | |
Nasrolahzadeh et al. | Analysis of mean square error surface and its corresponding contour plots of spontaneous speech signals in Alzheimer's disease with adaptive wiener filter | |
Mekyska et al. | Perceptual features as markers of Parkinson’s Disease: the issue of clinical interpretability | |
Elden et al. | A computer aided diagnosis system for the early detection of neurodegenerative diseases using linear and non-linear analysis | |
Selvakumari et al. | A voice activity detector using SVM and Naïve Bayes classification algorithm | |
Tena et al. | Voiceprint and machine learning models for early detection of bulbar dysfunction in ALS | |
Khorrami | How deep learning can help emotion recognition | |
Schleusing et al. | Monitoring physiological and behavioral signals to detect mood changes of bipolar patients |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16763128 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16763128 Country of ref document: EP Kind code of ref document: A1 |