CN111462841B - Intelligent depression diagnosis device and system based on knowledge graph - Google Patents

Intelligent depression diagnosis device and system based on knowledge graph

Info

Publication number
CN111462841B
CN111462841B CN202010170779.7A CN202010170779A CN111462841B CN 111462841 B CN111462841 B CN 111462841B CN 202010170779 A CN202010170779 A CN 202010170779A CN 111462841 B CN111462841 B CN 111462841B
Authority
CN
China
Prior art keywords
entity
data
model
knowledge graph
attribute value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010170779.7A
Other languages
Chinese (zh)
Other versions
CN111462841A (en
Inventor
何卷红
邢晓芬
徐向民
郭锴凌
田翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010170779.7A priority Critical patent/CN111462841B/en
Publication of CN111462841A publication Critical patent/CN111462841A/en
Application granted granted Critical
Publication of CN111462841B publication Critical patent/CN111462841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a knowledge-graph-based intelligent depression diagnosis device and system. The device comprises: a data acquisition module for acquiring human body data of a user, the human body data including video data, audio data, electroencephalogram (EEG) data, and heart rate data; an entity attribute value acquisition module for obtaining entities and corresponding entity attribute values from the human body data with a trained learning model; and a knowledge graph module for connecting the entities and entity attribute values into a knowledge graph, from which a depression diagnosis result is obtained. The invention can intelligently output a depression diagnosis result and assist doctors in diagnosing depression.

Description

Intelligent depression diagnosis device and system based on knowledge graph
Technical Field
The invention relates to the technical field of intelligent diagnosis devices for depression, in particular to an intelligent diagnosis device and system for depression based on a knowledge graph.
Background
Among the various forms of psychological and mental disorders, depression affects the largest population. At present, however, because public understanding of depression is often biased, many people are reluctant to seek treatment and miss the opportunity for treatment. For the patients who do pluck up the courage to seek hospital diagnosis and treatment, the diagnosis of depression depends heavily on the experience of doctors; the level of doctors varies greatly from place to place and qualified doctors are in serious shortage, so some patients cannot be diagnosed and treated in time. Failure to diagnose and treat depression in time can cause enormous losses to society, so using scientific means to assist doctors in diagnosing depression is an important research topic in the health field, and is of great significance for improving people's health and social stability.
A knowledge graph is in essence a semantic network whose nodes represent entities or concepts and whose edges represent the semantic relationships between them. Knowledge graphs provide a better ability to organize and manage information. At present, the main application of knowledge graphs in the medical field is question-answering systems; because of the complexity of diagnosing mental diseases, such systems are not suitable, so a knowledge graph device and system need to be designed that are suited to the diagnosis of depression.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention aims to provide an intelligent diagnosis device and system for depression based on a knowledge graph, which can intelligently output a diagnosis result of depression and assist doctors in diagnosing depression.
In order to achieve the above purpose, the invention is realized by the following technical scheme: an intelligent depression diagnosis device based on a knowledge graph, comprising:
the data acquisition module is used for acquiring human body data of a user; the human body data includes video data, audio data, brain electrical data, and heart rate data;
the entity attribute value acquisition module is used for acquiring an entity and a corresponding entity attribute value from human body data by adopting a learning model;
and the knowledge graph module is used for connecting the entity and the entity attribute value to form a knowledge graph so as to obtain a depression diagnosis result.
Preferably, in the entity attribute value obtaining module, the learning model includes an expression recognition model, an action recognition model, a dressing form recognition model, a speech speed intonation calculation model, a text analysis model, an emotion recognition model and a pressure classification model;
acquiring a picture sequence and pictures from video data; the entity obtained from the picture sequence by adopting the expression recognition model is expression; the entity obtained from the picture sequence by adopting the action recognition model comprises actions and reactions; the entity obtained from the picture by adopting the dressing form recognition model is the dressing form;
the entities obtained from the audio data by using the speech intonation calculation model comprise speech speed and intonation; the entity obtained from the audio data by adopting the text analysis model comprises semantic information; an entity obtained from the electroencephalogram data by adopting an emotion recognition model is emotion information; the entity obtained from the heart rate data by using the pressure classification model is pressure information.
Preferably, the expression recognition model, the action recognition model, the dressing form recognition model, the speech speed intonation calculation model and the text analysis model respectively adopt a convolutional neural network model or a cyclic neural network model;
the emotion recognition model and the pressure classification model adopt a machine learning model.
Preferably, in the entity attribute value obtaining module, an entity and a corresponding entity attribute value are obtained from the medical data through a natural language processing technology to train a learning model; in the knowledge graph module, the entity and the corresponding entity attribute value are obtained from the medical data through a natural language processing technology, and then the knowledge graph is constructed through the relation between the entities.
Preferably, the knowledge graph module constructs the knowledge graph by extracting the association relationships among the entities, performs knowledge reasoning on the constructed knowledge graph to obtain deeper entity relationships and thereby an expanded knowledge graph; the constructed knowledge graph is then stored in a Neo4j graph database.
Preferably, the knowledge graph module also realizes iteration and perfection through a closed loop system.
A system comprising the above knowledge-graph-based intelligent depression diagnosis device, the system comprising:
the client layer, comprising the data acquisition module, a scale generation module for generating answer scales, a scale answering module for inducing user emotion and filling in the scales, a report module for displaying the depression diagnosis result, and a labeling module for constructing the knowledge graph;
the data storage layer is used for storing the data and the knowledge graph transmitted by the client layer and transmitting the data to the data processing layer;
the data processing layer, used for preprocessing, feature extraction and classification of the received data collected by the client, so as to obtain each entity and the corresponding entity attribute values in the knowledge graph, then calculate strength indexes between different nodes, and further construct the knowledge graph to obtain a depression diagnosis result; the entity attribute value acquisition module and the knowledge graph module are located in the data processing layer.
Compared with the prior art, the invention has the following advantages and beneficial effects:
compared with the existing depression recognition device and system, the invention utilizes natural language processing technology to extract entities and entity attribute values in medical data, then establishes a learning model to calculate entity attribute values, obtains the entities and entity attribute values, calculates the relation among the entities to construct a knowledge graph so as to simulate doctor diagnosis process and realize intelligent and comprehensive depression diagnosis.
Drawings
Fig. 1 is a block diagram of a knowledge-graph-based intelligent depression diagnosis device of the present invention;
fig. 2 is a block diagram of the system of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
Example 1
The intelligent depression diagnosis device based on the knowledge graph of the embodiment, as shown in fig. 1, comprises a data acquisition module, an entity attribute value acquisition module and a knowledge graph module.
The data acquisition module is used for acquiring human body data of a user; the human body data includes video data, audio data, electroencephalogram (EEG) data, and heart rate data. A camera is used to collect video data and a microphone to collect audio data; in order to judge stress and emotional state, a multi-channel physiological recorder can be used to collect EEG and heart rate data under stimulation. The video data is split into frames and stored both as picture sequences and as individual pictures; the other data is cleaned and then stored.
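As an illustration of the frame-splitting step, the following is a minimal sketch (not part of the patent text) of how the video data might be split into a picture sequence with OpenCV; the file paths and the sampling interval are placeholder assumptions.

```python
import os
import cv2  # OpenCV; assumed available in the deployment environment

def split_video_to_frames(video_path, out_dir, every_n_frames=5):
    """Save every n-th frame of the video as a picture in out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                        # end of video
            break
        if index % every_n_frames == 0:   # subsample to form the picture sequence
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage: split_video_to_frames("interview.mp4", "frames/")
```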
In the entity attribute value acquisition module, acquiring an entity and a corresponding entity attribute value from medical data through a natural language processing technology to train a learning model; in the knowledge graph module, the entity and the corresponding entity attribute value are obtained from the medical data through a natural language processing technology, and then the knowledge graph is constructed through the relation between the entities.
The entity attribute value acquisition module is used for acquiring an entity and a corresponding entity attribute value from human body data by adopting a learning model;
in the entity attribute value acquisition module, the learning model comprises an expression recognition model, an action recognition model, a dressing form recognition model, a speech speed intonation calculation model, a text analysis model, an emotion recognition model and a pressure classification model.
Acquiring a picture sequence and pictures from video data; the entity obtained from the picture sequence by adopting the expression recognition model is expression; the entity obtained from the picture sequence by adopting the action recognition model comprises actions and reactions; the entity obtained from the picture by adopting the dressing form recognition model is the dressing form;
the entities obtained from the audio data by using the speech intonation calculation model comprise speech speed and intonation; the entity obtained from the audio data by adopting the text analysis model comprises semantic information; an entity obtained from the electroencephalogram data by adopting an emotion recognition model is emotion information; the entity obtained from the heart rate data by using the pressure classification model is pressure information.
The entity and entity attribute values are as follows:
1) Dressing form - sloppy, unusual, normal;
2) Speech speed - fast, slow, normal;
3) Intonation - high, low, normal;
4) Reaction - too fast, too slow, normal;
5) Expression - anger, disgust, fear, happiness, sadness, surprise, neutral, tearful;
6) Action - restlessness, rich body language, frequent fidgeting, normal;
7) Pressure information - neutral, stressed, pleasant;
8) Emotion information - emotion classification category.
The expression recognition model, the action recognition model, the dressing form recognition model, the speech speed intonation calculation model and the text analysis model each adopt a convolutional neural network (CNN) model or a recurrent neural network (RNN) model. Convolutional neural network models include 2D and 3D variants: 2D CNNs process two-dimensional information such as pictures, while 3D CNNs process three-dimensional information such as videos. A CNN generally consists of convolutional layers, pooling layers and fully connected layers; LeNet, AlexNet, VGGNet and ResNet are common CNN architectures. The main function of a recurrent neural network is to process and predict sequence data: it can memorize previous information, so the current output is influenced by earlier outputs over a period of time. RNNs are commonly used for speech processing and can also be used for video classification. Both deep network models, CNN and RNN, can perform feature extraction and classification, so the preprocessed data is fed directly into the deep network, and a classification result is obtained by tuning the network's hyperparameters during training.
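The following is a minimal PyTorch sketch of the kind of network described above (convolutional, pooling and fully connected layers), here applied to classifying face pictures into the eight expression attribute values; the input size, channel counts and layer widths are illustrative assumptions, not values specified by the patent.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Small CNN: convolution + pooling layers followed by fully connected layers."""
    def __init__(self, num_classes=8):            # eight expression attribute values assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                          # x: (batch, 3, 64, 64) face crops
        return self.classifier(self.features(x))

model = ExpressionCNN()
logits = model(torch.randn(4, 3, 64, 64))          # class scores for four pictures
```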
Current mainstream methods for processing signals such as pictures and videos fall into two types: manually extracting features and feeding them into a classifier, or feeding the data directly into a deep network model that performs both feature extraction and classification. Deep neural networks have developed rapidly in recent years and achieve better results than traditional methods in fields such as computer vision and natural language processing, so deep network models are mainly adopted for building the picture and video analysis models.
For video-based facial expression recognition and tear recognition, the rich background contained in each picture would negatively affect the analysis, so a face detection algorithm is first used to detect the face in each picture, and the cropped face picture sequences are saved. In deep learning, CNNs are often used to extract spatial features of images, while RNNs, with their capability for temporal analysis, are often used to extract temporal features; combining the two therefore extracts both spatial and temporal features. When the dataset is small, a CNN trained on large-scale data can be fine-tuned on a similar dataset; the fine-tuned model extracts spatial features, sequences of these spatial features of a certain length are fed into the RNN to extract temporal features, and classification is performed at the end.
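A minimal sketch of the CNN-plus-RNN idea described above: a ResNet backbone extracts one spatial feature vector per cropped face frame, and an LSTM aggregates the sequence for classification. The backbone choice, feature dimension, sequence length and class count are assumptions for illustration; in practice the backbone would be loaded with pre-trained weights and fine-tuned.

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnRnnClassifier(nn.Module):
    def __init__(self, num_classes=8, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # load pre-trained weights and fine-tune in practice
        backbone.fc = nn.Identity()                # keep the 512-d spatial feature
        self.cnn = backbone
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                      # clips: (batch, time, 3, H, W) face frames
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # (batch*time, 512) spatial features
        feats = feats.view(b, t, -1)
        _, (h, _) = self.rnn(feats)                # temporal aggregation over the sequence
        return self.fc(h[-1])                      # class scores per clip

scores = CnnRnnClassifier()(torch.randn(2, 16, 3, 224, 224))
```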
The attribute values related to actions include restlessness, rich body language, frequent fidgeting, and so on; that is, the focus is on the frequency of actions, so a deep network model for video-based action recognition is built and the number of occurrences of each action per unit time is counted. The attribute values of the dressing form include sloppy, unusual and normal; since this is based on image analysis and involves only spatial information, a convolutional neural network can be used for classification.
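As a small sketch of the counting step: given per-frame action labels predicted by the video action recognition model, the occurrences of each action per unit time can be counted as below. The label names and frame rate are placeholder assumptions.

```python
from itertools import groupby

def actions_per_minute(frame_labels, fps=25):
    """Count action occurrences (runs of consecutive identical labels) per minute."""
    minutes = len(frame_labels) / fps / 60.0
    counts = {}
    for label, _ in groupby(frame_labels):        # each run of identical labels = one occurrence
        if label != "none":
            counts[label] = counts.get(label, 0) + 1
    return {action: n / minutes for action, n in counts.items()}

# Hypothetical usage: freq = actions_per_minute(predicted_labels, fps=25)
```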
Intonation is the arrangement and variation of pitch and stress within a sentence; the same words spoken with different intonation can convey different meanings. Intonation is carried in the pitch information, so computing pitch features over an audio segment that carries a complete meaning yields the intonation information.
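The sketch below illustrates one way such pitch features could be computed, using the librosa library's pYIN pitch tracker; the library choice, frequency range and summary statistics are assumptions, since the patent does not name a specific tool.

```python
import numpy as np
import librosa

def pitch_features(wav_path):
    """Return simple pitch statistics as a proxy for intonation (high / low / normal)."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                  # keep voiced frames only (assumes some voiced speech)
    return {"mean_f0": float(np.mean(f0)),
            "std_f0": float(np.std(f0)),    # variation of pitch across the utterance
            "range_f0": float(np.max(f0) - np.min(f0))}
```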
Speech rate is the number of linguistic symbols a speaker produces per unit time. It can be represented by the number of characters contained per unit time from the beginning to the end of a speech segment: below a lower threshold the speech rate is considered slow, and above an upper threshold it is considered fast.
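A minimal sketch of this thresholding; the character count would come from the speech recognition output, and the threshold values here are purely illustrative assumptions.

```python
def classify_speech_rate(num_chars, duration_seconds,
                         slow_threshold=2.0, fast_threshold=5.0):
    """Classify speech rate from characters per second (thresholds are assumed)."""
    rate = num_chars / duration_seconds
    if rate <= slow_threshold:
        return "slow"
    if rate >= fast_threshold:
        return "fast"
    return "normal"

# e.g. classify_speech_rate(num_chars=180, duration_seconds=60) -> "normal" (3 characters/second)
```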
Speech recognition technology converts speech signals into text. Training a speech recognition model requires a large amount of data; when data is insufficient, a publicly available speech recognition interface can be used. Keyword extraction and semantic understanding are then performed on the resulting text using natural language processing techniques.
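As a sketch of the keyword extraction step on the recognized Chinese text, a TF-IDF keyword extractor such as the one in the jieba library could be used; the library choice and the number of keywords are assumptions, not specified by the patent.

```python
import jieba.analyse

def extract_keywords(recognized_text, top_k=10):
    """Return the top-k TF-IDF keywords from the speech-recognition transcript."""
    return jieba.analyse.extract_tags(recognized_text, topK=top_k)

# Hypothetical usage:
# keywords = extract_keywords("最近睡眠不好，对什么都提不起兴趣")
```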
The emotion recognition model and the pressure classification model adopt machine learning models. Because the EEG data and heart rate data contain less information, machine learning models already achieve a good classification effect. The processing can be divided into three steps: first, preprocess the data, removing interference such as power-line noise and other physiological artifacts with conventional methods to obtain relatively clean EEG and heart rate signals; then extract features, using traditional signal processing methods to extract linear and nonlinear features suited to the characteristics of EEG and heart rate signals; finally, feed the features into common classifiers such as SVM, KNN or random forest to obtain the classification results.
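A minimal scikit-learn sketch of this three-step pipeline (preprocess, extract features, classify with SVM/KNN/random forest); the statistics below are simple stand-ins for the linear and nonlinear features mentioned above, and the data, labels and names are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def simple_features(signal):
    """Toy stand-in for the linear/nonlinear EEG or heart-rate features."""
    return [np.mean(signal), np.std(signal), np.ptp(signal)]

# X: one feature vector per recording, y: pressure labels (e.g. neutral / stressed / pleasant)
X = np.array([simple_features(np.random.randn(1000)) for _ in range(40)])
y = np.random.randint(0, 3, size=40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # KNN or RandomForest fit here too
clf.fit(X, y)
pred = clf.predict(X[:5])
```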
The knowledge graph module is used for connecting the entity and the entity attribute value to form a knowledge graph, so that a depression diagnosis result is obtained, and iteration and perfection of the whole system are realized through a closed-loop system.
The knowledge graph module constructs the knowledge graph by extracting the association relationships among the entities, performs knowledge reasoning on the constructed knowledge graph to obtain deeper entity relationships and thereby an expanded knowledge graph; the constructed knowledge graph is then stored in a Neo4j graph database.
A knowledge graph is a knowledge network that contains entities, entity attribute values, and relationships between entities. The expression of the knowledge graph is:
G = (E, R, S)
where E = {e1, e2, e3, ..., en} denotes the set of entities, R = {r1, r2, r3, ..., rn} denotes the set of relations, and S ⊆ E × R × E denotes the set of (entity, relation, entity) triples; the links between the entities form a networked knowledge structure.
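The triple set S can be represented very directly in code; the sketch below (with assumed names and example values) shows entities, relations and (entity, relation, entity) triples as plain Python structures before they are written to the graph database.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    head: str      # entity, e.g. a patient or a symptom node
    relation: str  # semantic relation r in R
    tail: str      # entity or attribute value node

triples = [
    Triple("patient_001", "has_expression", "sadness"),
    Triple("patient_001", "has_speech_speed", "slow"),
    Triple("sadness", "indicates", "depression_risk"),
]
entities = {t.head for t in triples} | {t.tail for t in triples}   # the set E
relations = {t.relation for t in triples}                          # the set R
```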
The modules above establish the calculation of entities and entity attribute values; this module takes the entities and entity attribute values as nodes of the knowledge graph and extracts the relationships between entities with a relation reasoning model to complete the knowledge graph, from which conclusions can be drawn through knowledge retrieval.
The constructed knowledge graph is stored in a Neo4j graph database. Neo4j is an embedded, disk-based Java persistence engine with full transactional properties that stores structured data in a network (mathematically, a graph) rather than in tables; depression diagnosis results can subsequently be derived from it through knowledge retrieval. When new data are generated, the entities and entity attribute values can be expanded according to speech keywords, the attribute calculation models supplemented and further trained on the new data, and the relation reasoning model retrained, so that the knowledge graph is updated and the recognition rate of the depression knowledge graph becomes higher.
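A sketch of writing such triples into Neo4j with the official Python driver (version 5 API assumed); the connection URI, credentials and node/relationship labels are placeholder assumptions.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def write_triple(tx, head, relation, tail):
    # MERGE keeps nodes unique; the relation name is stored as a property
    tx.run(
        "MERGE (h:Entity {name: $head}) "
        "MERGE (t:Entity {name: $tail}) "
        "MERGE (h)-[:RELATED {type: $relation}]->(t)",
        head=head, relation=relation, tail=tail,
    )

with driver.session() as session:                       # driver >= 5; older drivers use write_transaction
    session.execute_write(write_triple, "patient_001", "has_expression", "sadness")
driver.close()
```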
Example two
This embodiment describes a system including the knowledge-graph-based depression intelligent diagnosis apparatus of embodiment one, as shown in fig. 2, including a client layer, a data storage layer, and a data processing layer.
The client layer comprises the data acquisition module, a scale generation module for generating answer scales, a scale answering module for inducing user emotion and filling in the scales, a report module for displaying the depression diagnosis results, and a labeling module for constructing the knowledge graph.
The diagnosis of depression mainly relies on a doctor judging the patient's answers to the questions in a scale, so a scale generation module is established to conveniently create various scales and convert them into XML form, which is then presented to the user in a suitable form by the scale answering module. A scale consists of a set of items and also contains information such as the applicable disease and the scale type; each item may contain a question, candidate answers, corresponding audio and video data, and so on. The scales are not limited to the traditional MINI scale, depression self-rating scales, the SCL-90 scale and the like; they can also be "scales" composed of several different scenes or videos used for stress and emotion induction.
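A sketch of generating a scale in XML form with Python's standard library; the element names and the example item are hypothetical, since the patent does not define the XML schema.

```python
import xml.etree.ElementTree as ET

def build_scale(title, disease, items):
    """items: list of (question, [answers]) pairs; returns an ElementTree."""
    root = ET.Element("scale", attrib={"title": title, "disease": disease})
    for question, answers in items:
        item = ET.SubElement(root, "item")
        ET.SubElement(item, "question").text = question
        for ans in answers:
            ET.SubElement(item, "answer").text = ans
    return ET.ElementTree(root)

tree = build_scale("Self-rating depression scale", "depression",
                   [("I feel down-hearted and blue", ["rarely", "sometimes", "often", "always"])])
tree.write("scale.xml", encoding="utf-8", xml_declaration=True)
```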
The scale answering module is presented to the user through a web browser. In the reporting module, different visualized data are displayed according to the user's role.
The data labeling module is mainly used for labeling the collected signals and displaying the inference results of each model so that professionals can correct erroneous results and the models can be further trained. The data labeling module is built around the entities of the knowledge graph, including personal information, dressing form, speech, behavior and actions, reactions, physiological signals, and so on. In the stage of training the models of the data processing layer, professional doctors label the attribute values of each entity through this module so as to build the calculation model of each entity attribute; after model training is completed, the attribute values of each entity obtained by model analysis are displayed, so that professionals can conveniently correct erroneous results and further train the models.
The reporting module is used for displaying the attribute values of the entities and the final results. Different roles focus on different content and key points, so different content is displayed for different roles, and visualization is used to present the results more clearly and intuitively.
The front-end pages are written with the React framework, the BFF layer is written with the Egg framework, and the background code is written in Java. Data visualization can be implemented with the JavaScript library D3 (Data-Driven Documents), which uses the SVG format, so rendered shapes can be scaled up or down without loss of quality.
The data storage layer is used for storing the data and knowledge graph transmitted by the client layer and transmitting the data to the data processing layer; it connects the client layer and the data processing layer. Basic user information and the like can be saved in the relational database MySQL, the knowledge graph can be saved in the graph database Neo4j, and the audio, video and picture data used in the scales as well as the collected physiological signals and audio/video recordings can be stored in OSS cloud storage.
The data processing layer is used for preprocessing, feature extraction and classification of the received data collected by the client, so as to obtain each entity and the corresponding entity attribute values in the knowledge graph, then calculate strength indexes between different nodes, and further construct the knowledge graph to obtain the depression diagnosis result; the entity attribute value acquisition module and the knowledge graph module are located in the data processing layer. The data processing layer calls each model interface to obtain the information from the OSS storage, computes the conclusions and returns them to the data storage layer, where they are displayed in the labeling module; it retrieves the knowledge graph to obtain the final conclusion, returns the result to the data storage layer, and then displays it in the reporting module.
Compared with existing depression identification methods and systems, the invention uses natural language processing technology to extract entities and entity attribute values from medical data, then establishes algorithm models to calculate the entity attribute values, obtains the entities and entity attribute values, and calculates the relationships among the entities to construct a knowledge graph, thereby realizing intelligent and comprehensive depression diagnosis.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention is an equivalent replacement and falls within the protection scope of the present invention.

Claims (3)

1. An intelligent depression diagnosis device based on a knowledge graph, characterized by comprising:
the data acquisition module is used for acquiring human body data of a user; the human body data includes video data, audio data, brain electrical data, and heart rate data;
the entity attribute value acquisition module is used for acquiring an entity and a corresponding entity attribute value from human body data by adopting a trained learning model;
the knowledge graph module is used for connecting the entity and the entity attribute value to form a knowledge graph so as to obtain a depression diagnosis result;
in the entity attribute value acquisition module, a learning model comprises an expression recognition model, an action recognition model, a dressing form recognition model, a speech speed intonation calculation model, a text analysis model, an emotion recognition model and a pressure classification model;
acquiring a picture sequence and pictures from video data; the entity obtained from the picture sequence by adopting the expression recognition model is expression; the entity obtained from the picture sequence by adopting the action recognition model comprises actions and reactions; the entity obtained from the picture by adopting the dressing form recognition model is the dressing form;
the entities obtained from the audio data by using the speech intonation calculation model comprise speech speed and intonation; the entity obtained from the audio data by adopting the text analysis model comprises semantic information; an entity obtained from the electroencephalogram data by adopting an emotion recognition model is emotion information; the entity obtained from heart rate data by adopting a pressure classification model is pressure information;
the entities and their corresponding entity attribute values are as follows:
dressing form - sloppy, unusual, normal;
speech speed - fast, slow, normal;
intonation - high, low, normal;
reaction - too fast, too slow, normal;
expression - anger, disgust, fear, happiness, sadness, surprise, neutral, tearful;
action - restlessness, rich body language, frequent fidgeting, normal;
pressure information - neutral, stressed, pleasant;
emotion information - emotion classification category;
in the entity attribute value acquisition module, acquiring an entity and a corresponding entity attribute value from medical data through a natural language processing technology to train a learning model; in the knowledge graph module, an entity and a corresponding entity attribute value are obtained from medical data through a natural language processing technology, and then a knowledge graph is constructed through the relation between the entities;
the knowledge graph module constructs the knowledge graph by extracting the association relationships among the entities, performs knowledge reasoning on the constructed knowledge graph to obtain deeper entity relationships and thereby an expanded knowledge graph; the constructed knowledge graph is then stored in a Neo4j graph database.
2. The knowledge-graph-based depression intelligent diagnosis apparatus according to claim 1, wherein: the expression recognition model, the action recognition model, the dressing form recognition model, the speech speed intonation calculation model and the text analysis model respectively adopt a convolutional neural network model or a cyclic neural network model;
the emotion recognition model and the pressure classification model adopt a machine learning model.
3. The knowledge-graph-based depression intelligent diagnosis apparatus according to claim 1, wherein: the knowledge graph module also realizes iteration and perfection through a closed loop system.
CN202010170779.7A 2020-03-12 2020-03-12 Intelligent depression diagnosis device and system based on knowledge graph Active CN111462841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010170779.7A CN111462841B (en) 2020-03-12 2020-03-12 Intelligent depression diagnosis device and system based on knowledge graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010170779.7A CN111462841B (en) 2020-03-12 2020-03-12 Intelligent depression diagnosis device and system based on knowledge graph

Publications (2)

Publication Number Publication Date
CN111462841A CN111462841A (en) 2020-07-28
CN111462841B true CN111462841B (en) 2023-06-20

Family

ID=71684240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010170779.7A Active CN111462841B (en) 2020-03-12 2020-03-12 Intelligent depression diagnosis device and system based on knowledge graph

Country Status (1)

Country Link
CN (1) CN111462841B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111897972B (en) * 2020-08-06 2023-10-17 南方电网科学研究院有限责任公司 Data track visualization method and device
CN112148884B (en) * 2020-08-21 2023-09-22 北京阿叟阿巴科技有限公司 Systems and methods for autism intervention
CN112037911B (en) * 2020-08-28 2024-03-05 北京万灵盘古科技有限公司 Screening system for mental assessment based on machine learning and training method thereof
WO2022102721A1 (en) * 2020-11-11 2022-05-19 Assest株式会社 Depression-state-determining program
CN112925918B (en) * 2021-02-26 2023-03-24 华南理工大学 Question-answer matching system based on disease field knowledge graph
CN115399773A (en) * 2022-09-14 2022-11-29 山东大学 Depression state identification system based on deep learning and pulse signals
CN115630697B (en) * 2022-10-26 2023-04-07 泸州职业技术学院 Knowledge graph construction method and system capable of distinguishing single-phase and double-phase affective disorder
CN117056536B (en) * 2023-10-10 2023-12-26 湖南创星科技股份有限公司 Knowledge graph driving-based virtual doctor system and operation method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3223177A1 (en) * 2016-03-24 2017-09-27 Fujitsu Limited System and method to aid diagnosis of a patient
CN109545373A (en) * 2018-11-08 2019-03-29 新博卓畅技术(北京)有限公司 A kind of automatic abstracting method of human body diseases symptom characteristic, system and equipment
CN110083708A (en) * 2019-04-26 2019-08-02 常州市贝叶斯智能科技有限公司 A kind of medical bodies association analysis method of knowledge based map

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407733A (en) * 2016-12-12 2017-02-15 兰州大学 Depression risk screening system and method based on virtual reality scene electroencephalogram signal
CN106725532B (en) * 2016-12-13 2018-04-24 兰州大学 Depression automatic evaluation system and method based on phonetic feature and machine learning
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system
CN109346169A (en) * 2018-10-17 2019-02-15 长沙瀚云信息科技有限公司 A kind of artificial intelligence assisting in diagnosis and treatment system and its construction method, equipment and storage medium
CN109697233B (en) * 2018-12-03 2023-06-20 中电科大数据研究院有限公司 Knowledge graph system construction method
CN110675951A (en) * 2019-08-26 2020-01-10 北京百度网讯科技有限公司 Intelligent disease diagnosis method and device, computer equipment and readable medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3223177A1 (en) * 2016-03-24 2017-09-27 Fujitsu Limited System and method to aid diagnosis of a patient
CN109545373A (en) * 2018-11-08 2019-03-29 新博卓畅技术(北京)有限公司 A kind of automatic abstracting method of human body diseases symptom characteristic, system and equipment
CN110083708A (en) * 2019-04-26 2019-08-02 常州市贝叶斯智能科技有限公司 A kind of medical bodies association analysis method of knowledge based map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ding Hanqing et al. The academic field of emotion recognition research: a scientific knowledge graph analysis based on CiteSpace. Journalism Research (Xinwen Daxue), 2017(02), pp. 119-152. *

Also Published As

Publication number Publication date
CN111462841A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462841B (en) Intelligent depression diagnosis device and system based on knowledge graph
Chen et al. Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks
WO2019144542A1 (en) Affective interaction systems, devices, and methods based on affective computing user interface
CN112120716A (en) Wearable multi-mode emotional state monitoring device
CN112863630A (en) Personalized accurate medical question-answering system based on data and knowledge
CN111134666A (en) Emotion recognition method of multi-channel electroencephalogram data and electronic device
CN103996155A (en) Intelligent interaction and psychological comfort robot service system
CN105105771B (en) The cognition index analysis method of latent energy value test
CN109288518A (en) Brain cognition neural Function Appraising system and method based on EEG and ERPs
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN112766173A (en) Multi-mode emotion analysis method and system based on AI deep learning
CN111695442A (en) Online learning intelligent auxiliary system based on multi-mode fusion
CN112086169B (en) Interactive psychological dispersion system adopting psychological data labeling modeling
CN115064246A (en) Depression evaluation system and equipment based on multi-mode information fusion
CN108122004A (en) The brain electricity sorting technique of the sparse learning machine that transfinites is differentiated based on Fisher
CN116807476B (en) Multi-mode psychological health assessment system and method based on interface type emotion interaction
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
Huang et al. Electroencephalogram-based motor imagery classification using deep residual convolutional networks
Zhao et al. Research and development of autism diagnosis information system based on deep convolution neural network and facial expression data
CN113128353B (en) Emotion perception method and system oriented to natural man-machine interaction
CN113749656A (en) Emotion identification method and device based on multi-dimensional physiological signals
CN116883608A (en) Multi-mode digital person social attribute control method and related device
Majumder et al. A smart cyber-human system to support mental well-being through social engagement
CN116392148A (en) Electroencephalogram signal classification method, device, equipment and storage medium
CN115810424A (en) Cognitive behavioral assessment and treatment system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant