DE202022103064U1 - Machine learning implemented device to detect the disease through speech-based language processing - Google Patents
- Publication number
- DE202022103064U1 DE202022103064U1 DE202022103064.2U DE202022103064U DE202022103064U1 DE 202022103064 U1 DE202022103064 U1 DE 202022103064U1 DE 202022103064 U DE202022103064 U DE 202022103064U DE 202022103064 U1 DE202022103064 U1 DE 202022103064U1
- Authority
- DE
- Germany
- Prior art keywords
- speech
- disease
- semantics
- unit
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 201000010099 disease Diseases 0.000 title claims abstract description 10
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 title claims abstract description 10
- 238000010801 machine learning Methods 0.000 title claims abstract description 9
- 208000024827 Alzheimer disease Diseases 0.000 claims abstract description 3
- 239000000284 extract Substances 0.000 claims abstract description 3
- 238000000034 method Methods 0.000 description 11
- 206010038583 Repetitive speech Diseases 0.000 description 4
- 206010003805 Autism Diseases 0.000 description 2
- 208000020706 Autistic disease Diseases 0.000 description 2
- 208000024714 major depressive disease Diseases 0.000 description 2
- 230000001755 vocal effect Effects 0.000 description 2
- BUHVIAUBTBOHAG-FOYDDCNASA-N (2r,3r,4s,5r)-2-[6-[[2-(3,5-dimethoxyphenyl)-2-(2-methylphenyl)ethyl]amino]purin-9-yl]-5-(hydroxymethyl)oxolane-3,4-diol Chemical compound COC1=CC(OC)=CC(C(CNC=2C=3N=CN(C=3N=CN=2)[C@H]2[C@@H]([C@H](O)[C@@H](CO)O2)O)C=2C(=CC=CC=2)C)=C1 BUHVIAUBTBOHAG-FOYDDCNASA-N 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4076—Diagnosing or monitoring particular conditions of the nervous system
- A61B5/4088—Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
- A61B5/682—Mouth, e.g., oral cavity; tongue; Lips; Teeth
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0204—Acoustic sensors
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
Abstract
A machine-learning implemented device for detecting disease using speech-based language processing, comprising
a control unit consisting of a storage unit, an NLP unit, and a semantic identification unit for recognizing the disease,
wherein speech data collected from patients via a microphone is stored in the storage unit; the NLP unit then extracts the semantics of the entire speech data, and the time intervals T1, T2, ..., Tn of the speech semantics are compared; if the semantics deviate across the time intervals T1, T2, ..., Tn, the MLDASP indicates that the patient may have Alzheimer's disease, together with a confidence percentage.
Description
Technical field of the invention:
The present invention relates generally to computer technology and, more particularly, to a machine-learning implemented device for detecting disease using speech-based language processing.
Background of the invention:
The following background information relates to the present disclosure but is not necessarily prior art.
Subject of the invention:
An aim of the present invention is to provide a machine-learning implemented device that detects disease using speech-based language processing.
These and other objects and features of the present invention will become apparent from the further disclosure made in the detailed description given below.
Summary of the invention:
Accordingly, the present invention provides a machine-learning implemented device to detect disease using speech-based language processing.
Detailed description of the invention:
The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The present invention relates to a machine-learning implemented device for detecting disease using speech-based language processing.
The proposed invention helps the physician or patient predict whether the patient has, or will develop, a disease by automatically using machine-learning based IoT devices.
An IoT device is attached near the patient's mouth. This device, a type of wireless microphone, periodically collects the patient's speech data and transmits it to the control unit.
Herein, the control unit consists of a storage unit, an NLP unit, and a semantic identification unit for recognizing a disease through speech-based language processing of the patient. Speech data collected from patients via a microphone is stored in the storage unit. The NLP unit then extracts the semantics of the entire speech data, and the time intervals T1, T2, ..., Tn of the speech semantics are compared. If the semantics deviate across the time intervals T1, T2, ..., Tn, the MLDASP (the proposed device) indicates that the patient may have Alzheimer's disease, together with a confidence percentage.
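The disclosure does not specify how the semantics of the time intervals T1, T2, ..., Tn are compared or how the percentage is computed. The following Python sketch illustrates one plausible reading, using a crude bag-of-words cosine similarity as a stand-in for the NLP unit's semantic extraction; all function names, the similarity measure, and the 0.5 deviation threshold are illustrative assumptions, not part of the disclosure.

```python
from collections import Counter
from math import sqrt

def semantic_vector(text):
    # Crude bag-of-words "semantics" of one transcript window;
    # a real NLP unit would produce richer semantic representations.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def deviation_percentage(windows, threshold=0.5):
    # Compare the semantics of consecutive time windows T1..Tn and
    # return the percentage of transitions whose similarity falls
    # below the (assumed) threshold.
    if len(windows) < 2:
        return 0.0
    vectors = [semantic_vector(w) for w in windows]
    deviated = sum(
        1 for v1, v2 in zip(vectors, vectors[1:])
        if cosine_similarity(v1, v2) < threshold
    )
    return 100.0 * deviated / (len(vectors) - 1)

windows = [
    "I walked to the market this morning",       # T1
    "the market was busy this morning",          # T2: consistent
    "purple elephant telephone yesterday maybe", # T3: semantic drift
]
print(deviation_percentage(windows))  # one of two transitions deviates -> 50.0
```

In this sketch the reported percentage is simply the fraction of deviating interval transitions; the disclosure leaves open whether the actual device derives its percentage this way or from a trained classifier.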
By using the present invention, the physician or patient can predict whether the patient has Alzheimer's disease, or may develop it in the future, by automatically using machine-learning based IoT devices.
CITATIONS CONTAINED IN THE DESCRIPTION
This list of the documents cited by the applicant was generated automatically and is included solely for the reader's information. The list is not part of the German patent or utility model application. The DPMA assumes no liability for any errors or omissions.
Patent Literature Cited
- US 8938390 B2 [0003]
- US 20180018985 A1 [0004]
- US 20150112232 A1 [0005]
- US 9576593 B2 [0006]
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE202022103064.2U DE202022103064U1 (en) | 2022-05-31 | 2022-05-31 | Machine learning implemented device to detect the disease through speech-based language processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE202022103064.2U DE202022103064U1 (en) | 2022-05-31 | 2022-05-31 | Machine learning implemented device to detect the disease through speech-based language processing |
Publications (1)
Publication Number | Publication Date |
---|---|
DE202022103064U1 true DE202022103064U1 (en) | 2022-06-23 |
Family
ID=82402578
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
DE202022103064.2U Active DE202022103064U1 (en) | 2022-05-31 | 2022-05-31 | Machine learning implemented device to detect the disease through speech-based language processing |
Country Status (1)
Country | Link |
---|---|
DE (1) | DE202022103064U1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8938390B2 (en) | 2007-01-23 | 2015-01-20 | Lena Foundation | System and method for expressive language and developmental disorder assessment |
US20150112232A1 (en) | 2013-10-20 | 2015-04-23 | Massachusetts Institute Of Technology | Using correlation structure of speech dynamics to detect neurological changes |
US9576593B2 (en) | 2012-03-15 | 2017-02-21 | Regents Of The University Of Minnesota | Automated verbal fluency assessment |
US20180018985A1 (en) | 2016-07-16 | 2018-01-18 | Ron Zass | System and method for detecting repetitive speech |
- 2022
- 2022-05-31: DE application DE202022103064.2U, publication DE202022103064U1 (en), status: Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8938390B2 (en) | 2007-01-23 | 2015-01-20 | Lena Foundation | System and method for expressive language and developmental disorder assessment |
US9576593B2 (en) | 2012-03-15 | 2017-02-21 | Regents Of The University Of Minnesota | Automated verbal fluency assessment |
US20150112232A1 (en) | 2013-10-20 | 2015-04-23 | Massachusetts Institute Of Technology | Using correlation structure of speech dynamics to detect neurological changes |
US20180018985A1 (en) | 2016-07-16 | 2018-01-18 | Ron Zass | System and method for detecting repetitive speech |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106725532B (en) | Depression automatic evaluation system and method based on phonetic feature and machine learning | |
Shao et al. | Predicting naming latencies for action pictures: Dutch norms | |
CN111145903A (en) | Method and device for acquiring vertigo inquiry text, electronic equipment and inquiry system | |
Yin et al. | Towards automatic cognitive load measurement from speech analysis | |
CN108646914A (en) | A kind of multi-modal affection data collection method and device | |
Km et al. | Comparison of multidimensional MFCC feature vectors for objective assessment of stuttered disfluencies | |
Warule et al. | Significance of voiced and unvoiced speech segments for the detection of common cold | |
Balogh et al. | The role of silence in verbal fluency tasks–a new approach for the detection of mild cognitive impairment | |
Selvakumari et al. | A voice activity detector using SVM and Naïve Bayes classification algorithm | |
DE202022103064U1 (en) | Machine learning implemented device to detect the disease through speech-based language processing | |
Gong et al. | Towards an Automated Screening Tool for Developmental Speech and Language Impairments. | |
Shinkawa et al. | Multimodal Behavior Analysis Towards Detecting Mild Cognitive Impairment: Preliminary Results on Gait and Speech. | |
McTear et al. | Affective conversational interfaces | |
Marck et al. | Identification, analysis and characterization of base units of bird vocal communication: The white spectacled bulbul (Pycnonotus xanthopygos) as a case study | |
Rheault et al. | Multimodal techniques for the study of affect in political videos | |
Teixeira et al. | F0, LPC, and MFCC analysis for emotion recognition based on speech | |
Böck et al. | Audio-based pre-classification for semi-automatic facial expression coding | |
Deshpande et al. | Comparing manual and machine annotations of emotions in non-acted speech | |
Rodríguez-Hidalgo et al. | The robustness of echoic log-surprise auditory saliency detection | |
Grill et al. | Classification and Detection of Specific Language Impairments in Children Based on their Speech Skills | |
Xie et al. | Image processing and classification procedure for the analysis of australian frog vocalisations | |
Alimuradov et al. | Increasing detection efficiency of psycho-emotional disorders based on adaptive decomposition and cepstral analysis of speech signals | |
Pandey et al. | Speech Signal Analysis Using Hybrid Feature Extraction Technique for Parkinson’s Disease Prediction | |
Tao et al. | The relationship between speech features changes when you get depressed: Feature correlations for improving speed and performance of depression detection | |
Qu et al. | Depression recognition in university students based on speech features in social learning environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
R207 | Utility model specification | ||
R082 | Change of representative |
Representative's name: LIPPERT STACHOW PATENTANWAELTE RECHTSANWAELTE, DE |