WO2006109268A1 - Procede et dispositif de detection automatique de troubles du langage - Google Patents

Procede et dispositif de detection automatique de troubles du langage (Method and device for automatic detection of speech disorders)

Info

Publication number
WO2006109268A1
Authority
WO
WIPO (PCT)
Prior art keywords
speech
person
language analysis
analysis
oral response
Prior art date
Application number
PCT/IB2006/051144
Other languages
English (en)
Inventor
Andreas Brauers
Andreas Kellner
Gerd Lanfermann
Jurgen Te Vrugt
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V., Philips Intellectual Property & Standards Gmbh filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006109268A1 (fr)

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4803Speech analysis specially adapted for diagnostic purposes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/10Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025Phonemes, fenemes or fenones being the recognition units

Definitions

  • the invention relates to the field of medical detection systems, especially to the field of automated medical systems. More particularly, the invention comprises a method and an apparatus for automated detection of a change in speech of a patient based on an evaluation of his/her speech performance. This change in speech may form part of an automated stroke detection system.
  • since a patient can suffer from a stroke without being aware of it, it is important that people suspected of having a stroke, such as patients with a prior stroke, undergo regular observation with respect to their health state.
  • Tests for motor and speech disorders are often used by professionals to detect stroke.
  • the most prominent stroke symptoms are speech disorders such as Aphasia (acquired language disorder caused by a stroke or trauma), Apraxia (acquired articulation disorder caused by a stroke or trauma) or Dysarthria (motor speech disorder).
  • Patent application WO 02/39423 Al describes an automated computer based speech disorder therapy system based on a dialog with the patient.
  • the system is especially suited to train patients with a known speech disorder, such as stuttering.
  • the speech received from the patient is fed back to the patient in order to enhance his speech performance.
  • Patent application US 2004/0044273 Al proposes a system with a bilateral interface, which examines the user for motor deficits by a variety of tests.
  • the system may also analyze visual recognition of objects and the analysis of speech is also mentioned.
  • the method and apparatus can also be used in evaluating a progress and relapse for patients under rehabilitation.
  • the invention provides a method of identifying a change in speech of a person, the method comprising the steps of requesting a predefined speech input from the person, receiving an oral response from the person, performing a language analysis on the received oral response, comparing a result of the language analysis with a result of a corresponding previously obtained language analysis for the person, and detecting a change in speech from the person based on the step of comparing.
  • by comparing the language analysis result with previous language analysis results it is possible to provide a simple measure of a possible disorder, and it is possible to compare with language analysis results obtained for persons or groups of persons that suffer from specific speech disorders.
  • the previous language analysis results that are used for the comparison may comprise one or more of the following: a) language analysis results obtained for the person under test, such as the latest performed language analysis test results, which enable precise tracking of a development for the person rather than just detecting the presence or absence of a speech disorder.
  • the method can also function in a rehabilitation situation where it is not only the task to detect a speech disorder but rather to detect if the person with a known disorder has made a progress or has relapsed.
  • d) language analysis results obtained for a group of persons with a known disorder, such as a library of non-personalized data related to groups of persons with specific speech disorders, and e) static threshold values for certain parameters related to results of the language analysis, these threshold values being determined by a professional skilled in speech disorders.
  • any type of analysis performed in order to characterize specific aspects of the person's speech may be used. It is not necessary to include the step of performing speech recognition, since a predefined speech input is requested, and thus the words expected to be received are known in advance. Rather, the analysis refers to the process of identifying specific parts of the received oral response that are essential in relation to detecting speech disorders. Speech recognition may be used, for example, in order to test the received oral input for compliance with the expected, predefined speech.
  • the detected change in speech may be a simple yes/no indication of changed speech.
  • the result may also be a graduated classification of the expected disorder, and it may be a classification pointing out an expected specific type of disorder, e.g. Aphasia, Apraxia or Dysarthria, but also Parkinson episodes and other motor-neurological disorders may be derived.
  • the detected change in speech may also comprise a scalar value indicating the severity of the detected speech disorder that can be used to evaluate a possible progress or relapse in a known speech disorder.
  • the speech change may comprise an indicator of whether the change has become more severe or if the person has improved his speech performance, such as may be used in a rehabilitation situation.
  • the step of requesting may comprise an acoustic request, such as a voice presented via a loudspeaker asking the patient to pronounce a predefined sentence. It may also comprise a visual request, such as a written sentence presented on a display. Such visual request may also be accompanied by drawings, photos and/or symbols.
  • the step of requesting may comprise a combination of both an acoustic and a visual request.
  • a subsequent stroke analysis may then be based on an overall evaluation of the different types of expected disorders, or the stroke analysis may be based on only one of these disorders.
  • the step of performing comprises aligning the received oral response to a pattern of the predefined speech, i.e. the expected words, the aligning process comprising e.g. aligning phonemes.
  • the aligning process comprises extracting statistics related to phonemes in the received oral response.
  • it comprises extracting a speaking rate for the received oral response.
  • it comprises all of the three aforementioned steps.
  • the step of performing comprises an initial step of checking if the received oral response provides a suitable match with the predefined speech, i.e. if the speech output from the person comprises the expected words in the predefined speech. If a poor match is detected, the steps of requesting and receiving may be repeated if there are indications of patient non-compliance due to non-attention, e.g. if the patient has responded to similar requests in the same session earlier. However, if a poor match is detected and if it can be derived from the oral response that the patient complied with the request, it can be assumed that the person possibly suffers from Aphasia.
  • the initial step may comprise extracting a length of the received oral response and comparing the length to an expected predefined maximal length.
  • a maximal length can be defined corresponding to an expected maximum length of the oral response.
  • From an extracted length of the oral response it is possible to determine if the oral response does not comply with the requested input, i.e. if the extracted length of the oral response exceeds the predefined maximal length.
  • patient compliance may be measured by comparing the expected duration of an acoustic response to the duration of the actual response. If the patient starts to articulate upon request but continues past a certain time interval, which depends on the request, Aphasia may be assumed. Other characteristics which indicate Aphasia are long pauses in the oral articulation. Such evaluation does not require the recognition of words.
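The duration and pause checks described above do not require word recognition and can be sketched in a few lines. The following is a minimal illustration only, assuming a voice-activity segmentation of the response is already available; the function name, return fields and the pause threshold are hypothetical, not taken from the patent.

```python
def check_compliance(segments, expected_max_s, max_pause_s=1.5):
    """Hypothetical compliance check on an oral response.

    segments: list of (start_s, end_s) speech intervals detected in the
    response (the voice-activity segmentation itself is assumed given).
    expected_max_s: predefined expected maximal length of the response.
    """
    if not segments:
        return {"responded": False, "over_length": False, "long_pause": False}
    total_s = segments[-1][1] - segments[0][0]
    # gaps between consecutive speech intervals
    pauses = [nxt[0] - cur[1] for cur, nxt in zip(segments, segments[1:])]
    return {
        "responded": True,
        # response continues past the expected duration
        "over_length": total_s > expected_max_s,
        # long pauses in the articulation may indicate Aphasia
        "long_pause": any(p > max_pause_s for p in pauses),
    }
```

If `over_length` or `long_pause` is set while the person appears to have complied with the request, Aphasia may be assumed per the text above; if non-attention is suspected, the request may simply be repeated.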
  • the step of performing comprises aligning the received oral response to a pattern of the expected predefined speech, extracting statistics related to phonemes in the received oral response, extracting a speaking rate for the received oral response together with an initial step of checking if the received oral response provides a suitable match with the predefined speech.
  • the detected change in speech may be communicated to the person by acoustic means, such as a voice presenting the change, or the result may be presented by visual means, such as in a written sentence and/or using one or more symbols and/or using a visual scale or meter indicating the severity of changed speech.
  • the method can be implemented as an automated dialog with a person using a computer device with known input means such as a microphone and with known output means such as a display and/or a loudspeaker.
  • the method can be implemented with existing low-cost equipment, e.g. a Personal Computer, a Laptop Computer, or a Personal Digital Assistant (PDA) with appropriate software.
  • the method may also be included in diagnosis equipment that can also be implemented with low-cost hardware.
  • the method may be started by pressing a button, or it may be voice activated.
  • the invention provides an apparatus comprising: requesting means adapted to request a predefined speech input from a person, acoustic receiving means adapted to receive an oral response from the person, signal processing means adapted to perform a language analysis on the received oral response, to compare a result of the language analysis with a result of a corresponding previously obtained language analysis for the person, and to detect a change in speech based on said comparison.
  • the apparatus may further comprise communication means adapted to communicate the detected change in speech via a communication channel selected from the group consisting of: telephone net, Internet, mobile telephone net, Local Area Network, and near field communication techniques (e.g. Bluetooth).
  • the apparatus comprises stroke analysis means adapted to perform a stroke analysis based on the detected change in speech.
  • the stroke analysis may be based on additional parameters, such as inputs regarding the person's motor performance, e.g. vision data from a camera monitoring the person.
  • the apparatus may be a dedicated speech analysis device.
  • the apparatus may also comprise further analysis means.
  • Such further analysis means may comprise a camera so as to be able to film the person performing a motor task.
  • the apparatus is preferably adapted to perform a stroke analysis based on the detected change in speech.
  • a stroke analysis may be based on an overall evaluation of the changed speech and at least one of the further results.
  • the apparatus comprises storing means for storing the previously obtained language analysis results, i.e. data according to the above description, a)-d).
  • These historical analysis results may be stored in a memory or on a hard disk, or the apparatus may be adapted to retrieve such data from an external storing device, such as via a communication link, such as a telephone line, an Internet connection etc.
  • the apparatus may comprise means for automatically performing a test in a dialog with the person under test.
  • the apparatus comprises automatic adaptation means that automatically adapt the test to individual abilities or disabilities of the person.
  • the person may not be able to read, and the apparatus thus adapts to this and communicates with the person by speech messages via a loudspeaker instead of written messages on a display.
  • the knowledge about the disabilities or impairments of a user may either be supplied by a medical professional by means of database access or the like, or the knowledge may be built up during a session. In this case, the disability may be caused by stroke. Nevertheless, the sequence of test methods and the input/output modalities used are changed accordingly to ensure that the patient is able to perceive information.
  • the apparatus may comprise means for adapting the test sequence to previous test results, i.e. intelligent adaptation, e.g. the test sequence may be adapted to the results of the last test performed.
  • the invention provides a computer system adapted to perform the method according to the first aspect.
  • the computer system may comprise a Personal Computer, e.g. a Laptop computer, e.g. a Laptop computer with built-in microphone, display and loudspeaker.
  • An alternative implementation is a set-top box connected to a TV set, where the person can respond using a remote control.
  • the invention provides a computer executable program code adapted to perform the method according to the first aspect.
  • the invention provides a computer readable storage medium comprising a computer executable program code according to the fourth aspect.
  • the storage medium may be a hard disk, a floppy disk, a CD, a DVD, an SD card, a memory stick, a memory chip etc.
  • Fig. 1 shows a sketch of the principle of an automated dialog with a person.
  • Fig. 2 shows a block diagram of essential parts of a preferred apparatus according to the invention.
  • Fig. 3 shows a block diagram of steps in a preferred embodiment of the method
  • Fig. 4 shows a block diagram of steps in another preferred embodiment of the method.
  • Fig. 1 shows a block diagram illustrating the basic principle of a preferred apparatus DA according to the invention, based on an automated dialog with a person P.
  • Person P is to undergo a speech disorder test, the purpose of which is to obtain a speech disorder result SDR.
  • a test may be initiated by pressing a button, or the person P may orally request a test.
  • the test may also be initiated by remote control, such as controlled by medical personnel via the telephone net (PSTN), the Internet or the like, in case the person P and the apparatus DA are located e.g. in the person's home and thus far away from medical personnel.
  • the apparatus DA may be programmed to start a test at a predefined time, such as at regular intervals in the course of a day, and the person P may then be informed that it is time for a test by an alarm signal.
  • the core of the speech disorder test is that the apparatus DA communicates a request 1 to the person P to produce a predefined speech, and the oral response 2 from the person P is then received and processed by the apparatus DA.
  • the request 1 may be presented visually on a display and/or acoustically via a loudspeaker.
  • a test comprises a number of requests 1 and oral responses 2, and preferably at least some of the oral responses 2 comprise sentences of several words spoken by the person P in order to contain an appropriate amount of test samples to allow a reliable speech disorder result SDR.
  • examples of possible requests 1 are:
  • Fig. 2 shows in block diagram form a preferred embodiment of the apparatus DA of Fig. 1, an embodiment that can be implemented with low-cost components and thus serve as home-use equipment.
  • a processing unit PU is connected with input and output means: a microphone 10 that can receive oral responses from the person P, a display device 11 and a loudspeaker 12 which can both serve to present messages to the person, e.g. requests for oral responses and the final diagnosis.
  • Further input and output means connected to the processing unit PU may be means for performing external communication via a telephone line (PSTN), via a mobile telephone net such as GSM, or via a connection to the Internet.
  • Such external communication line may be used to deliver speech disorder results and/or stroke analysis to a remotely located receiver.
  • the processing unit PU may also be controlled via such a communication line, i.e. it may be programmed; start/stop of a test may be controlled.
  • the communication line may also be used for a dialog between the person P and e.g. medical personnel.
  • the person P may use a remote control, game-pad device, a keyboard, a joystick, a Bluetooth-connected mobile phone, etc. connected to the processing unit PU to communicate with the apparatus DA.
  • a camera may also be connected to the processing unit PU so as to allow the apparatus DA to receive a visual representation of the person P.
  • the apparatus DA can then supplement the speech disorder test with a test of other motor abilities of the person P, such as by requesting the person P to perform movements of arms and legs etc. A more precise stroke analysis may be obtained with such enhancement.
  • the processing unit PU controls the dialog with the person P using the input and output means 10, 11, 12, and the processing unit PU processes the speech received from the person P via the microphone 10. Based on the processing of the speech input, the processing unit PU produces a speech disorder result SDR and optionally a stroke analysis based thereon. One of or both of these are then presented either on the display 11 or presented in spoken language via the loudspeaker 12.
  • the processing unit PU comprises computer means, i.e. comprising means for performing signal processing on the speech input.
  • the processing unit PU may be formed as a dedicated speech disorder/stroke device, e.g. with the necessary software permanently stored in a chip, or it may be formed by a general-purpose computer.
  • the apparatus DA is implemented as an automated dialog system using a Personal Computer (PC) with microphone 10, display 11 and loudspeaker 12 connected thereto. It may be a Laptop PC in which microphone 10, display 11 and loudspeaker 12 are integrated.
  • the dialog control and speech signal processing method are implemented in software. The speech signal processing method is described in the following.
  • the apparatus DA may as well be embedded in a dedicated stroke test device, which also performs other tests on the person P, e.g. motor tests using a camera.
  • the method may be run and controlled from a remotely based server connected via the Internet, via a telephone line or via a mobile telephone line, to a device, such as a PC, that merely handles data to and from the input/output means 10, 11, 12.
  • an automatic call for medical help is provided.
  • Such a call may be achieved by the apparatus DA performing a predefined alarm call to medical personnel via a telephone net or sending a message via the Internet.
  • Fig. 3 shows a block diagram with steps of a preferred speech signal processing method according to the invention.
  • a test as described in the foregoing is intended to produce a speech input SI of known content, i.e. a known sequence of words.
  • this speech input SI can be aligned to the expected speech pattern to identify words and phonemes.
  • the compliance of the speech input by the user with the given known content might be quantified.
  • statistics on phoneme length can be used for the evaluation of speech disorders when compared to previous results. This can be particularly helpful when the acquired statistics can be compared to data from the same patient in an earlier (healthy) state.
  • This scenario makes the device useful for use during rehabilitation, as well as in a home setting for regular testing, e.g. of patients after rehabilitation.
  • the speech input SI is subject to a language analysis LA, i.e. including speech recognition, that results in a speech disorder classification CL based on previous analysis results PAR, and this classification CL can be used to produce a speech disorder result SDR, and thus optionally a stroke analysis. It is in the classification CL process that the previous language analysis results are taken into account.
  • the language analysis LA process will be described in detail in the following.
  • the speech input SI referred to is a representation of the oral response received from the person P, i.e. a data representation of an acoustic signal.
  • the language analysis LA preferably comprises a forced alignment FA performed on the speech input SI. Since the expected input is known in advance, since the person has been requested to pronounce specific words, a rating for the "closeness" of the speech input SI from the person P and the expected input is relevant for the analysis process based on the expected input. Two preferred variants of the forced alignment FA step will be described: a first one performed during analysis, and a second one performed after analysis.
  • the speech input SI signal is processed sequentially in temporal order.
  • the internal model of the speech analysis is utilized to compute if a certain phoneme (one might also think of other basic units) is detected, i.e. "recognized".
  • state-of-the-art speech analysis or recognition systems use stochastic models; therefore not only one variant is considered, but a wide range of candidate phonemes is obtained in parallel, generally all carrying a rating of the quality (e.g. some probability).
  • the next unit from the speech input SI is evaluated. This process builds up a large tree-like structure containing a variety of alternative recognition results for the speech input SI.
  • a (probability) model is used to rate all these different paths from the root to the leaves of the tree.
  • the known text input can now be used to restrict the search space (the "tree"). Instead of allowing all possible phonemes to occur at every time, only those sequences of phonemes which comply with the expected word sequence are allowed - even if the real speech input SI does not match the expected word sequence, it will be mapped onto phoneme sequences according to the expected input. Again the underlying (probability) models are used to compute a rating for the acoustic input given the expected input. A high rating indicates a "near-perfect" match; a low rating indicates a mismatch between the speech input SI and the expected input.
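The restriction of the search space to the expected phoneme sequence can be illustrated with a toy dynamic program. The sketch below is not the patent's implementation: it assumes per-frame phoneme log probabilities are available from some acoustic model, and it scores only monotone paths through the expected sequence, returning the rating of the best such path.

```python
def forced_alignment_score(frame_logprobs, expected_phonemes):
    """Toy forced-alignment rating.

    frame_logprobs: list over time frames; frame_logprobs[t][ph] is the
    log probability of phoneme ph at frame t (assumed given by some
    acoustic model). Only alignments that walk monotonically through
    expected_phonemes are allowed, i.e. the search space is restricted
    to the expected input. Returns the best total log score: high means
    a near-perfect match, low means a mismatch.
    """
    NEG = float("-inf")
    n = len(expected_phonemes)
    # dp[i] = best score with the current frame assigned to phoneme i
    dp = [NEG] * n
    dp[0] = frame_logprobs[0][expected_phonemes[0]]
    for frame in frame_logprobs[1:]:
        new = [NEG] * n
        for i, ph in enumerate(expected_phonemes):
            stay = dp[i]                          # remain in phoneme i
            advance = dp[i - 1] if i > 0 else NEG  # move on to phoneme i
            best = max(stay, advance)
            if best > NEG:
                new[i] = best + frame[ph]
        dp = new
    return dp[-1]  # path must end in the last expected phoneme
```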
  • each candidate is a sequence of words.
  • Each candidate can now be compared to the expected input, using e.g. an equal number of words, the number of equal words, the number of different words, additionally inserted words, missing words, etc. From this comparison (together with the rating from the recognition, if available) the "optimal" candidate with respect to the expected input can be chosen from the set of candidates, or all candidates might be rejected. From the candidates computed in the forced alignment step FA, the best candidate according to an underlying rating is selected.
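The word-level comparison between a candidate and the expected input (equal, missing, inserted and substituted words) can be sketched with Python's standard `difflib`; the function name and metric labels are illustrative, not part of the patent.

```python
import difflib

def compare_candidate(candidate_words, expected_words):
    """Word-level comparison of one recognition candidate against the
    expected word sequence."""
    sm = difflib.SequenceMatcher(a=expected_words, b=candidate_words)
    stats = {"equal": 0, "missing": 0, "inserted": 0, "substituted": 0}
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            stats["equal"] += i2 - i1          # words matching the expected input
        elif tag == "delete":
            stats["missing"] += i2 - i1        # expected words not spoken
        elif tag == "insert":
            stats["inserted"] += j2 - j1       # additionally inserted words
        else:  # "replace"
            stats["substituted"] += max(i2 - i1, j2 - j1)
    stats["match_ratio"] = sm.ratio()          # overall similarity in [0, 1]
    return stats
```

The candidate with the highest `match_ratio` (possibly combined with the recognizer's own rating) would be chosen as the "optimal" one, or all candidates rejected if every ratio is low.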
  • a format convert FC may be used to prepare the selected candidate for further processing by the next steps to obtain "features" from the selected recognition result.
  • the result after format conversion FC is applied to a speaking rate step SRA and a phoneme statistics step PS.
  • additional information from the analysis process can be extracted from the speech input SI for the candidate under consideration.
  • This might include information on the phoneme sequence used to build up the candidate and a relation to the time-slots of the speech input SI that have been used to recognize a certain phoneme.
  • the temporal information can then be used to determine the speaking rate SRA of the patient, i.e. the speed of his/her speech (which might be normalized for each phoneme).
  • the result is preferably used for comparison with a speech rate of the patient obtained in previous utterances, such as in a healthy state of the patient, or in another known state of the patient.
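As a sketch of the speaking-rate computation: given the temporal information from the alignment, the rate is simply the number of phonemes per unit time, and the comparison with a previously obtained (e.g. healthy-state) rate reduces to a relative change. Function names are illustrative assumptions, not the patent's own.

```python
def speaking_rate(aligned):
    """aligned: list of (phoneme, start_s, end_s) tuples from the
    forced alignment, in temporal order. Returns phonemes per second."""
    total_s = aligned[-1][2] - aligned[0][1]
    return len(aligned) / total_s

def rate_change(current_rate, previous_rate):
    """Relative change versus a rate obtained in previous utterances,
    e.g. in a healthy or other known state of the patient."""
    return (current_rate - previous_rate) / previous_rate
```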
  • an adaptation (personalization) of the system towards the patient over time can be obtained.
  • Statistics on properties of the phonemes might be used to find indications of non-regular speech, e.g. the mean length of certain phonemes, or the distribution of which phonemes are used how often by the patient. Again these analyses can be compared to results from a non-related comparison group or to results collected from the patient; in the latter case the adaptation can be continued over time. Phonemes extracted from the input might be compared to phonemes obtained in previous interactions with the user to detect changes over time.
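A minimal sketch of such a phoneme-length comparison, assuming per-phoneme length observations have been collected for the current session and for an earlier (e.g. healthy) state; the function name and data layout are hypothetical.

```python
from statistics import mean

def phoneme_length_change(current, previous):
    """current, previous: dict mapping phoneme -> list of observed
    lengths in seconds. Returns the relative change in mean length for
    each phoneme seen in both data sets (positive = prolonged)."""
    changes = {}
    for ph in current.keys() & previous.keys():
        prev_mean = mean(previous[ph])
        changes[ph] = (mean(current[ph]) - prev_mean) / prev_mean
    return changes
```

The `previous` dictionary could equally hold non-personalized data for a comparison group, per options d) and e) above.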
  • a classification CL is carried out, and a speech disorder result SDR is produced.
  • the above-mentioned parameters are compared to previous analysis results PAR, and from this comparison a result is derived.
  • if a person's previously recorded phoneme lengths are much shorter than present ones, this indicates that the person's speech has changed significantly, and if it is known that the person has a high risk of stroke, such significantly prolonged phoneme lengths may be a strong indicator of a stroke.
  • a fundamental classification CL would be to distinguish between distorted and non-distorted speech. In the case of distorted speech, further classifications might lead to a more detailed reason for the distortion. The comparison may be based on similarity measures.
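The fundamental distorted/non-distorted decision could be sketched as a simple threshold on relative per-phoneme changes versus the previous analysis results; both the function and the threshold value are purely illustrative, not taken from the patent.

```python
def classify(phoneme_changes, threshold=0.3):
    """phoneme_changes: dict mapping phoneme -> relative change in mean
    length versus previous analysis results. Flags the speech as
    distorted when any phoneme deviates by more than the (illustrative)
    threshold; the flagged phonemes can feed further classification."""
    flagged = {ph: c for ph, c in phoneme_changes.items() if abs(c) > threshold}
    return ("distorted", flagged) if flagged else ("non-distorted", {})
```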
  • Fig. 4 shows the processing steps LA, CL, and SDR as explained in relation to Fig. 3. In order to enhance the precision of the classification mechanism, however, in Fig. 4 additional steps are included before step LA in order to exclude speech input SI which does not comply with the predefined requested speech.
  • An initial speech recognition step SR has been included to test the speech input SI.
  • This speech recognition step SR may comprise a forced alignment such as described in connection with step FA of Fig. 3, together with an appropriate classifier.
  • the result of the speech recognition SR may be a single text sequence or a set of alternative candidates (e.g. represented in an n-best list or graph). These candidates might again contain further information which was extracted during the recognition, e.g. ratings or temporal information on phonemes with respect to the speech input SI.
  • in step EI it is considered whether the SR outcome is in compliance with an expected input, i.e. whether further processing is reasonable and required because irregularities in speech are detected, or whether the input cannot be processed further.
  • the reason for stopping the processing can be the lack of dysfunctions, or indications that the acoustic input is not reliable; in the latter case the patient might be prompted to re-read the given sequence.
  • a confidence check CC can be performed to evaluate if the input was given with clear speech (but wrong words spoken) or with highly perturbed pronunciation. This can be used to decide whether re-input of the utterance is required etc.
  • the forced alignment FA step of the language analysis LA of Fig. 3 may be omitted, since such forced alignment might already have been performed in the SR step. If not, a forced alignment FA might be performed to support further analysis.
  • cases A, B and C can to some extent be handled by adding an additional dialog, possibly using another type of interfacing.
  • the stored speech disorder result data for the person may be adapted to new speech disorder result data obtained, such as if the test has been performed and no stroke has been detected.

Abstract

The invention relates to a method for detecting a speech disorder, which can be used as an automated dialog with a person possibly suffering from a change in speech, e.g. Aphasia, Dysarthria or Apraxia. A stroke analysis is preferably performed based on the change in speech. The method of the invention comprises a language analysis based on predefined speech spoken by the person under test. The language analysis preferably comprises statistics on phoneme length and on the speaking rate. The results of this language analysis are compared with results of a language analysis previously obtained for the person under test, e.g. from when the person was healthy or in another known health state. The previous analysis results may also comprise non-individual data, such as data obtained from a group of persons. If personal historical data are used, the development of changes in the person's speech can be tracked, e.g. during rehabilitation. Changes in speech are then based on the comparison of the present analysis results with the previous results. Stored data may be adapted to the new data if no stroke has been detected. The method is suited for implementation on low-cost equipment, and thus for home use, e.g. for persons undergoing rehabilitation. In a preferred embodiment, the apparatus may run on a stand-alone PC such as a laptop computer, on a set-top box connected to a TV set, on a dedicated stand-alone device, or on a PC controlled by a server via an Internet connection.
Such an apparatus carries out a dialog by means of a microphone, a loudspeaker and a display, and it can adapt the test sequence to the person's response.
PCT/IB2006/051144 2005-04-13 2006-04-13 Procede et dispositif de detection automatique de troubles du langage WO2006109268A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05102909.8 2005-04-13
EP05102909 2005-04-13

Publications (1)

Publication Number Publication Date
WO2006109268A1 true WO2006109268A1 (fr) 2006-10-19

Family

ID=36616823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/051144 WO2006109268A1 (fr) 2005-04-13 2006-04-13 Method and device for the automatic detection of speech disorders

Country Status (1)

Country Link
WO (1) WO2006109268A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1089246A2 (fr) * 1999-10-01 2001-04-04 Siemens Aktiengesellschaft Method and apparatus for speech therapy
WO2002059856A2 (fr) * 2001-01-25 2002-08-01 The Psychological Corporation System and method for speech therapy, speech transcription and speech analysis
US20040044273A1 (en) * 2002-08-31 2004-03-04 Keith Peter Trexler Stroke symptom recognition devices and methods
WO2004034355A2 (fr) * 2002-10-07 2004-04-22 Carnegie Mellon University System and method for comparing items
US20040230430A1 (en) * 2003-05-14 2004-11-18 Gupta Sunil K. Automatic assessment of phonological processes


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100298649A1 (en) * 2007-11-02 2010-11-25 Siegbert Warkentin System and methods for assessment of the aging brain and its brain disease induced brain dysfunctions by speech analysis
WO2014042878A1 (fr) * 2012-09-12 2014-03-20 Lingraphicare America Incorporated Method, system, and apparatus for treating a communication disorder
US10010288B2 (en) 2012-10-16 2018-07-03 Board Of Trustees Of Michigan State University Screening for neurological disease using speech articulation characteristics
WO2014062441A1 (fr) * 2012-10-16 2014-04-24 University Of Florida Research Foundation, Inc. Screening for neurological disease using speech articulation characteristics
US9579056B2 (en) 2012-10-16 2017-02-28 University Of Florida Research Foundation, Incorporated Screening for neurological disease using speech articulation characteristics
WO2014188408A1 (fr) * 2013-05-20 2014-11-27 Beyond Verbal Communication Ltd Method and system for determining a multisystem failure state by means of time-integrated voice analysis
CN107111672A (zh) * 2014-11-17 2017-08-29 Elwha LLC Monitoring treatment compliance using speech patterns passively captured from a patient environment
US10430557B2 (en) 2014-11-17 2019-10-01 Elwha Llc Monitoring treatment compliance using patient activity patterns
EP3221839A4 (fr) * 2014-11-17 2018-05-16 Elwha LLC Monitoring treatment compliance using speech patterns passively captured from a patient environment
FR3051280A1 (fr) * 2016-05-12 2017-11-17 Paris Sciences Et Lettres - Quartier Latin Device for rating acquired language disorders and method for implementing said device
CN107456208A (zh) * 2016-06-02 2017-12-12 Shenzhen Institutes of Advanced Technology System and method for assessing speech and language dysfunction through multimodal interaction
US10796715B1 (en) 2016-09-01 2020-10-06 Arizona Board Of Regents On Behalf Of Arizona State University Speech analysis algorithmic system and method for objective evaluation and/or disease detection
WO2018102579A1 (fr) * 2016-12-02 2018-06-07 Cardiac Pacemakers, Inc. Multi-sensor stroke detection
US11139079B2 (en) 2017-03-06 2021-10-05 International Business Machines Corporation Cognitive stroke detection and notification
CN110720124A (zh) * 2017-05-31 2020-01-21 International Business Machines Corporation Monitoring the use of patient language to identify potential speech and related neurological disorders
CN110720124B (zh) * 2017-05-31 2023-08-11 International Business Machines Corporation Monitoring the use of patient language to identify potential speech and related neurological disorders
US20190221317A1 (en) * 2018-01-12 2019-07-18 Koninklijke Philips N.V. System and method for providing model-based treatment recommendation via individual-specific machine learning models
US10896763B2 (en) * 2018-01-12 2021-01-19 Koninklijke Philips N.V. System and method for providing model-based treatment recommendation via individual-specific machine learning models
CN110415783A (zh) * 2018-04-26 2019-11-05 Beijing Xinhaiying Technology Co., Ltd. Occupational-therapy rehabilitation method based on motion sensing
CN111276130A (zh) * 2020-01-21 2020-06-12 Henan Youde Medical Equipment Co., Ltd. MFCC cepstral coefficient calculation method for a computer language recognition education system

Similar Documents

Publication Publication Date Title
WO2006109268A1 (fr) Method and device for the automatic detection of speech disorders
US10010288B2 (en) Screening for neurological disease using speech articulation characteristics
US10478111B2 (en) Systems for speech-based assessment of a patient's state-of-mind
JP4002401B2 (ja) Subject ability measurement system and subject ability measurement method
US8200494B2 (en) Speaker intent analysis system
US9508268B2 (en) System and method of training a dysarthric speaker
McKechnie et al. Automated speech analysis tools for children’s speech production: A systematic literature review
Bunnell et al. STAR: articulation training for young children.
TWI665657B (zh) Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and recording medium
EP3899938B1 (fr) Automatic detection of neurocognitive impairment based on a speech sample
US10789966B2 (en) Method for evaluating a quality of voice onset of a speaker
Bone et al. Classifying language-related developmental disorders from speech cues: the promise and the potential confounds.
TW201913648A (zh) Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program
KR20220048381A (ko) Apparatus, method, and program for evaluating speech disorders
JP4631464B2 (ja) Physical condition determination device and program therefor
JP7022921B2 (ja) Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program
Gong et al. Towards an Automated Screening Tool for Developmental Speech and Language Impairments.
WO2002071390A1 (fr) System for evaluating the intelligibility of a spoken language
JP7307507B2 (ja) Pathological condition analysis system, pathological condition analysis device, pathological condition analysis method, and pathological condition analysis program
McKechnie Exploring the use of technology for assessment and intensive treatment of childhood apraxia of speech
Middag et al. DIA: a tool for objective intelligibility assessment of pathological speech.
JP7479013B2 (ja) Cognitive function determination method, program, and cognitive function determination system
CN116705070B (zh) Method and system for correcting speech pronunciation and nasality after cleft lip and palate surgery
CN116189668B (zh) Speech classification and cognitive impairment detection method, apparatus, device, and medium
Pompili et al. Speech and language technologies for the automatic monitoring and training of cognitive functions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Country of ref document: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06727912

Country of ref document: EP

Kind code of ref document: A1