CN115273176A - Pain multi-algorithm objective assessment method based on vital signs and expressions

Pain multi-algorithm objective assessment method based on vital signs and expressions

Info

Publication number
CN115273176A
CN115273176A
Authority
CN
China
Prior art keywords
pain
features
model
feature
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210746252.3A
Other languages
Chinese (zh)
Inventor
邓幼文
程军
廖胜辉
韩付昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Third Xiangya Hospital of Central South University
Original Assignee
Third Xiangya Hospital of Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Third Xiangya Hospital of Central South University filed Critical Third Xiangya Hospital of Central South University
Priority to CN202210746252.3A priority Critical patent/CN115273176A/en
Publication of CN115273176A publication Critical patent/CN115273176A/en
Pending legal-status Critical Current

Classifications

    • G06V 40/174: Human faces; facial expression recognition
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 40/161: Human faces; detection, localisation, normalisation
    • G06V 40/168: Human faces; feature extraction, face representation
    • G06V 40/172: Human faces; classification, e.g. identification
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems


Abstract

The invention provides a pain multi-algorithm objective evaluation method based on vital signs and expressions, which can accurately and objectively represent the degree of pain of a patient. The method collects the vital signs and expressions of a patient at specified time intervals, extracts expression features by a deep learning method, calculates the importance (Scores value) of each feature to the output by using the minimum redundancy maximum relevance (mRMR) algorithm, screens out the important features, and inputs them into a pain prediction model; after model calculation, the model outputs the grade corresponding to the pain, thereby realizing individualized and accurate assessment of pain.

Description

Pain multi-algorithm objective assessment method based on vital signs and expressions
Technical Field
The invention relates to the technical field of vital sign data analysis, expression recognition and pain assessment, in particular to a pain assessment method based on vital signs and expressions.
Background
Pain is an unpleasant sensory and emotional experience caused by actual or potential tissue damage, and is not only a symptom but also a disease in its own right. The World Health Organization has indicated that pain is the fifth vital sign, after blood pressure, respiration, pulse and body temperature. Pain can adversely affect many aspects of a patient's condition, including the cardiovascular, respiratory, digestive, urinary, skeletal-muscle and neuroendocrine systems, as well as mood and sleep. How to assess a patient's pain accurately and rapidly is therefore an important research topic in the current medical field.
At present, the pain assessment methods commonly used in medicine fall mainly into five types: the Visual Analogue Scale (VAS), the Numerical Rating Scale (NRS), the Wong-Baker faces scale (FPS-R), the Verbal Rating Scale (VRS) and the Verbal Descriptor Scale (VDS). These methods fall into two broad categories: assessment by trained medical personnel familiar with the scales, and self-assessment by patients after brief training. All five rely heavily on personal knowledge and experience, are influenced by subjective factors such as personal emotion, and are therefore highly subjective; they cannot reflect the degree of pain completely and objectively.
Human vital signs include blood pressure, respiration, pulse and body temperature. When the human body is subjected to painful stimulation, these vital signs change, with pulse and blood pressure changing most obviously. Under pain stimulation, most people show elevated blood pressure, an increased pulse pressure difference, a faster pulse and similar changes. This is the most prominent effect of pain on human vital signs.
Expression is an important mode of non-verbal communication. It contains rich emotional information, is the most important carrier of emotion, and is the main way for people to understand each other's emotions; expressions can also be used to convey information. Pain, as an unpleasant sensory and emotional experience, causes significant changes in expression. In expression recognition research, expressions are generally divided into six types: happiness, sadness, anger, fear, surprise and disgust. In several pain assessment methods for infants and children, such as the Pain Observation Scale for Young Children (POCIS), the Modified Behavioral Pain Scale (MBPS), the Children's Hospital of Eastern Ontario Pain Scale (CHEOPS), the Riley Infant Pain Scale (RIPS) and the Neonatal Infant Pain Scale (NIPS), the evaluator takes expression as one of the main reference indexes, which also indicates that expression has real measurement value for pain assessment. However, there is currently no objective pain assessment method that incorporates expression as an input indicator.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an objective pain assessment method based on vital signs and expressions, established by applying multiple algorithms.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the pain multi-algorithm objective assessment method based on vital signs and expressions comprises the following steps:
(1) Collecting the vital signs of a patient in a non-pain stress state as baseline data; when the patient is in pain, starting image acquisition and shooting a frontal image of the face with camera equipment;
(2) Identifying the face region and extracting expression features with a deep learning method; the expression features comprise texture features, shallow features and deep features;
(3) Calculating the importance of each expression feature to the output by using the minimum redundancy maximum relevance (mRMR) algorithm, screening out the important expression features of each stage, and inputting them into the intelligent self-adaptive pain grading model in a parallel connection mode;
the intelligent self-adaptive pain grading model is as shown in formula I and formula II:
W = \frac{1}{|s|^{2}} \sum_{i,j \in s} I(i,j)    (formula I)

V = \frac{1}{|s|} \sum_{i \in s} I(h,i)    (formula II)
wherein W is the redundancy of the important expression feature data, V is the relevance between the important expression feature data and the pain VAS score, s represents a feature subset, I represents mutual information, i and j represent two variables of the important expression features, h represents the target class (VAS grade), and MIQ is the mutual information quotient V/W used as the selection criterion;
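By way of illustration only, the mRMR screening of step (3) (formulas I and II with the MIQ quotient as selection criterion) could be sketched as a greedy search built on scikit-learn's mutual-information estimators; the function name mrmr_miq, the incremental search strategy and the demo data below are assumptions, not the patent's implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_miq(X, y, n_selected=20):
    """Greedy mRMR selection using the mutual-information quotient MIQ = V / W.

    X : (n_samples, n_features) fused expression/vital-sign features
    y : (n_samples,) VAS grades (target classes)
    """
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y)                 # V terms: I(h, i)
    redundancy = np.full((n_features, n_features), np.nan)  # cache for I(i, j)

    selected = [int(np.argmax(relevance))]                # start with the most relevant feature
    candidates = set(range(n_features)) - set(selected)

    while candidates and len(selected) < n_selected:
        best_score, best_feat = -np.inf, None
        for i in candidates:
            for j in selected:
                if np.isnan(redundancy[i, j]):
                    mi = mutual_info_regression(X[:, [j]], X[:, i])[0]
                    redundancy[i, j] = redundancy[j, i] = mi
            W = np.mean([redundancy[i, j] for j in selected])  # average redundancy with selected set
            V = relevance[i]                                   # relevance to the VAS target
            score = V / (W + 1e-12)                            # MIQ criterion
            if score > best_score:
                best_score, best_feat = score, i
        selected.append(best_feat)
        candidates.remove(best_feat)
    return selected

# Illustrative usage on random data (stand-in for the fused feature matrix and VAS grades).
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(80, 30))
y_demo = rng.integers(0, 3, size=80)
print(mrmr_miq(X_demo, y_demo, n_selected=5))
```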
(4) Meanwhile, vital sign data under a pain stress state are collected, and the data are input into an intelligent self-adaptive pain grading model after being normalized;
the intelligent self-adaptive pain grading model is as shown in formula I and formula II:
W = \frac{1}{|s|^{2}} \sum_{i,j \in s} I(i,j)    (formula I)

V = \frac{1}{|s|} \sum_{i \in s} I(h,i)    (formula II)
wherein W is the redundancy of the vital sign data, V is the relevance between the vital sign data and the pain VAS score, s represents a feature subset, I represents mutual information, i and j represent two vital sign variables, h represents the target class (VAS grade), and MIQ is the mutual information quotient V/W used as the selection criterion;
the intelligent self-adaptive pain grading model adopts a depth stacking model, and the construction process is as follows: first, expression feature and sign data are adopted to train some advanced and promising tree integration models (including Extreme Trees (ET), decision Trees (DT), random Forests (RF), gradient Boosting Classifiers (GBC), extreme gradient boosting (XGBoost) and Catboost) for candidate stacked components. Secondly, the optimization objective function of our deep stack model adopts the average precision (accuracycacy) and the average Area Under Curve (AUC) of the candidate tree ensemble evaluator performing ten 10-fold cross validation processes on the training data set. Finally, selecting a specific estimator to build the deep stack network according to the consistency output (VAS) between the candidate tree integration evaluators. The method comprises the following specific steps: predicting the whole training set by all the trained base models, taking the predicted value of the ith base model to the ith training sample as the jth characteristic value of the ith sample in the new training set, and finally training based on the new training set. Similarly, in the prediction process, a new test set is formed through the prediction of all the base models, and finally the test set is predicted;
(5) Building an interpretable pain score mapping model by adopting a deep learning algorithm and the SHAP method; the processed vital sign data and expression features are input into the interpretable pain score mapping model, which calculates a pain score and determines the patient's pain grade accordingly, the grades comprising mild pain, moderate pain and severe pain;
the interpretable pain score mapping model is of formula iii:
Figure BDA0003719469510000031
wherein phiiSHAP value representing the ith feature, S represents a subset of features, M represents a number of features, fS∪{i}(S $ g) and fS(S) model predictions with and without feature i, respectively.
The SHAP interaction value represents the difference between the SHAP value of feature i when feature j is present and the SHAP value of feature i when feature j is absent; the interpretable pain score mapping model can thereby capture important interactions between features that might otherwise be missed, as in formula IV:

\Phi_{i,j} = \sum_{S \subseteq M \setminus \{i,j\}} \frac{|S|!\,(|M|-|S|-2)!}{2\,(|M|-1)!}\, \nabla_{ij}(S), \quad i \neq j    (formula IV)

where \nabla_{ij}(S) = f_{S \cup \{i,j\}}(S \cup \{i,j\}) - f_{S \cup \{i\}}(S \cup \{i\}) - f_{S \cup \{j\}}(S \cup \{j\}) + f_{S}(S).
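As a hedged illustration of step (5), the \phi_{i} values of formula III and the interaction values of formula IV can be obtained with the open-source shap library's TreeExplainer; the RandomForestClassifier stand-in model and the synthetic data below are assumptions, not the patent's actual model.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the fused importance features plus normalized vital signs, and the VAS grades.
X, y = make_classification(n_samples=200, n_features=12, n_informative=6, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                  # per-feature phi_i values (formula III)
interactions = explainer.shap_interaction_values(X)     # pairwise Phi_ij values (formula IV)
# Note: for classifiers, older shap versions return one array per class; newer ones a 3-D array.
```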
preferably, the vital signs include blood pressure, blood oxygen saturation and pulse.
Preferably, the texture features in step (2) have 348 dimensions and involve 7 different types (see Table 1 for descriptions of the seven types), including: 73 geometric parameter features, 9 gray-level histogram features, 220 gray-level co-occurrence matrix features, 20 run-length matrix features, 5 gradient model features, 5 autoregressive model features and 16 wavelet features; the shallow features refer to the 1024-dimensional features extracted from the 'res4b8_relu' layer; the deep features refer to the 2048-dimensional features extracted from the 'pool5' layer.
TABLE 1
(descriptions of the seven texture feature types; the table is provided as an image in the original publication)
The deep learning features use a ResNet pre-trained on a massive dataset as the backbone; on the basis of the pre-trained ResNet model, the 101-layer deep residual network ResNet-101 is used to train and test the facial expression dataset, so that the shallow and deep features can be extracted quickly. The prior art generally uses the ResNet-101 network end to end to extract only deep high-level semantics (such as the deep features described above), without merging the texture features and the shallow features and without considering the complementarity between the features, since deep high-level semantic features generally have lower resolution, shallow features generally overlook spatial details, and texture features carry no high-level semantics. A sketch of extracting the shallow and deep features from a pre-trained ResNet-101 is given below.
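A minimal sketch of this shallow/deep feature extraction, assuming a torchvision ResNet-101 with forward hooks; the mapping of the Caffe-style layer names to layer3[8] and avgpool is an assumption, as is the random input tensor used in place of a preprocessed face crop.

```python
import torch
import torchvision.models as models

# ResNet-101 pre-trained on ImageNet as an illustrative backbone (torchvision >= 0.13 weights API).
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT).eval()

features = {}
def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

# Caffe-style names 'res4b8_relu' and 'pool5' are mapped here, as an assumption, to an
# intermediate block of layer3 (1024 channels) and to the global average pool (2048 channels).
backbone.layer3[8].register_forward_hook(save_output("shallow"))
backbone.avgpool.register_forward_hook(save_output("deep"))

# A preprocessed, normalized 224x224 face crop would go here; a random tensor keeps the sketch runnable.
img = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    backbone(img)

shallow = features["shallow"].mean(dim=(2, 3)).squeeze(0)   # 1024-dim shallow feature vector
deep = features["deep"].flatten(1).squeeze(0)               # 2048-dim deep feature vector
print(shallow.shape, deep.shape)
```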
The invention is further illustrated below:
the invention comprises the following steps:
(1) The model identifies and extracts the vital sign and expression features in the pain state;
(2) Through analysis of the vital sign data, the influence of pain stimulation on the vital signs is correctly identified;
(3) The vital sign data are combined with the expression features to objectively evaluate the degree of pain stimulation.
Specifically, the following were:
1. Expression analysis under different degrees of pain stimulation:
Expression and vital sign data of pain patients are collected; expression features are extracted by a deep learning method, and the vital sign data are processed by normalization and similar methods (FIG. 1). The expression features comprise texture features, shallow features and deep features (FIG. 2), with the following specific contents:
(1) Texture features: 348 dimensions in total, involving 7 different types, including: 73 geometric parameter features, 9 gray-level histogram features, 220 gray-level co-occurrence matrix features, 20 run-length matrix features, 5 gradient model features, 5 autoregressive model features and 16 wavelet features.
(2) Shallow features: the 1024-dimensional features extracted from the 'res4b8_relu' layer.
(3) Deep features: the 2048-dimensional features extracted from the 'pool5' layer.
2. Feature selection and feature fusion:
The importance (Scores value) of each feature to the output is calculated by the minimum redundancy maximum relevance (mRMR) algorithm, the important features of each stage are screened out, and the selected features are combined in a parallel connection mode and mapped to the VAS score grades (FIG. 3); a sketch of this fusion step is given below.
3. Interpretation of the results and the model
(1) Building the model: an interpretable pain score mapping model is built by adopting a deep learning algorithm, the Shapley Additive exPlanations (SHAP) method and related technical steps, achieving digital pain prediction based on basic vital signs and expressions (FIG. 4 and FIG. 5).
(2) Prediction accuracy of the model (see Table 2): the overall accuracy of the pain score mapping model is 93.53%, the area under the curve (AUC) reaches 0.9723, the prediction precision for moderate pain reaches 96.4%, and the prediction precision for severe pain reaches 85.7%.
TABLE 2
(prediction accuracy of the pain score mapping model; the table is provided as an image in the original publication)
The present invention applies an emerging research field in current artificial intelligence, interpretable machine learning, which allows users to understand, trust and effectively manage next-generation artificial intelligence solutions. Model interpretation can be divided into two categories: global interpretation and local interpretation.
Global interpretation means that the user can understand the entire model directly from its structure, while local interpretation examines a single input and explains why the model makes a particular decision. In recent years, many methods for interpreting machine learning models have been proposed. For example, the Local Interpretable Model-agnostic Explanations (LIME) method uses a local surrogate model: it generates a new data set consisting of perturbed samples and then trains an interpretable model on this new data set. The Shapley value, introduced by Shapley, is a classical technique in game theory for determining the contribution each player makes to the success of a cooperative game. Shapley Additive exPlanations (SHAP) is a unified framework for explaining predictions, in which each SHAP value estimates the contribution of each feature to the model.
The SHAP method has three significant advantages: (1) SHAP can obtain Shapley values from a linear model, and these satisfy the same properties as Shapley values: symmetry, the dummy (null-player) property, and additivity; (2) SHAP links the LIME and Shapley value methods and unifies the field of interpretable machine learning; (3) compared with directly calculating Shapley values, the SHAP method is computationally fast. A SHAP module is added to the prediction framework to realize automatic and efficient interpretation of the model's predictions. The visualization of model predictions is shown in FIG. 6, and the model is explained as follows:
in the context of interpretable VAS prediction, the beneficial effects of the present invention come from three aspects:
(1) Partial explanation. The local interpretation is simply to examine one input and explain why the model makes a certain decision. The result of the local interpretation by SHAP is shown in FIG. 7, from which we can see that: (1) each feature in the model pushes the predicted outcome from the base value to the final value. (2) Dark features increase pain risk and light features decrease pain risk. (3) The length of the dark/light features represents the contribution to the output of the model.
(2) Global interpretation. The user can understand the entire model directly from the structure of the model. We summarize the impact of all features in the graph, which can reflect the impact of feature importance distributions on the model output, as shown in fig. 8. For example, heart rate is the most important risk factor in pain analysis, with higher heart rates giving higher pain risks.
(3) And (4) interactive visualization. Complex interactions of features learned by the model can be understood as shown in fig. 9. We can see from the figure: the heart rate, the most important risk factor in pain analysis, has strong interaction with the systolic pressure factor, and the complex interaction of the factors is automatically calculated by the model.
(4) After the partial interpretation result of the model is rotated by 90 degrees clockwise and all the test set data are integrated, the constructed VAS prediction model can interpret the whole data set, as shown in FIG. 10.
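The local, global and interaction views above can be reproduced with the shap plotting utilities; the sketch below is illustrative, and the feature names (heart_rate, systolic_bp and so on), the stand-in model and the synthetic data are assumptions rather than the patent's actual variables.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative data; the named columns stand in for vital-sign features (assumed names).
X_arr, y = make_classification(n_samples=200, n_features=6, n_informative=4, random_state=0)
cols = ["heart_rate", "systolic_bp", "diastolic_bp", "spo2", "f4", "f5"]
X = pd.DataFrame(X_arr, columns=cols)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):       # classic shap API: one array per class; keep the positive class
    sv = sv[1]
elif sv.ndim == 3:             # newer shap API: (samples, features, classes)
    sv = sv[:, :, 1]

ev = explainer.expected_value
base = ev[1] if hasattr(ev, "__len__") else ev

# Local interpretation (FIG. 7 style): one sample's features push the output from the base value.
shap.force_plot(base, sv[0, :], X.iloc[0, :], matplotlib=True)

# Global interpretation (FIG. 8 style): distribution of every feature's impact on the output.
shap.summary_plot(sv, X)

# Interaction view (FIG. 9 style): e.g. heart rate coloured by systolic blood pressure.
shap.dependence_plot("heart_rate", sv, X, interaction_index="systolic_bp")
```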
The method of the invention can evaluate pain scientifically and objectively and avoid interference from subjective factors. It adopts technologies such as the minimum redundancy maximum relevance (mRMR) algorithm, deep learning algorithms and the Shapley Additive exPlanations (SHAP) method to extract the expression features of the face under pain stimulation, the features comprising texture (imaging) features, shallow deep-learning features and deep deep-learning features. By analyzing, selecting and fusing the expression features, an interpretable pain prediction model is built. The model can objectively reflect a person's degree of pain according to changes in expression and vital signs, and has obvious scientific and research value.
Drawings
FIG. 1 is an intelligent adaptive pain scoring model;
FIG. 2 is an expression feature extraction;
FIG. 3 is an expression feature selection and fusion;
FIG. 4 is a SHAP abstract diagram and a feature interaction diagram;
FIG. 5 is a pain score mapping model;
FIG. 6 is a visualization of model prediction;
FIG. 7 is a partial explanation;
FIG. 8 is a global interpretation;
FIG. 9 shows complex feature interactions;
FIG. 10 is an explanation of the entire test set;
FIG. 11 is the overall model workflow;
FIG. 12 is a comparison of pain prediction results of the Random Forest Classifier model with the VAS ground-truth labels;
FIG. 13 is a comparison of pain prediction results of the Extra Trees Classifier model with the VAS ground-truth labels;
fig. 14 is a confusion matrix result.
Detailed Description
As shown in fig. 11, the present invention provides a pain assessment method based on vital sign and expression recognition, which comprises the following steps:
step a: the vital signs (blood pressure, blood oxygen saturation, pulse) of the patient in the non-painful stress state are collected as data in the basic state. When a patient is painful, starting image acquisition, and shooting a front image of the face by using camera equipment;
step b: and (3) identifying the face region, and collecting expression features by adopting a deep learning method, wherein the features comprise texture features, shallow features and deep features. The texture features have 348 dimensions, and relate to different types in 7, including: 73 geometric parameter characteristics, 9 gray level histogram characteristics, 220 gray level co-occurrence matrix characteristics, 20 run matrix characteristics, 5 gradient model characteristics, 5 autoregressive model characteristics and 16 wavelet characteristics. The shallow feature refers to a 1024-dimensional shallow feature extracted from the 'res4b8_ relu' layer, and the deep feature refers to a 2048-dimensional deep feature extracted from the 'pool5' layer.
Step c: and calculating the importance (Scores value) of each feature to the output by using a minimum redundancy maximum correlation (mRMR) algorithm, screening out the importance features of each stage, and inputting the importance features into the intelligent self-adaptive pain grading model in a parallel connection mode.
Step d: meanwhile, the vital sign data in the pain stress state, including blood pressure, blood oxygen saturation and pulse, are collected, normalized and input into the intelligent self-adaptive pain grading model; a minimal normalization sketch is given below.
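A minimal sketch of the normalization in step d, assuming min-max scaling with scikit-learn; the column layout and the numeric values are invented for the demo.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative vital-sign rows: [systolic BP, diastolic BP, SpO2, pulse] (values are made up).
baseline = np.array([[118.0, 76.0, 0.98, 72.0]])           # non-pain (baseline) state
pain_state = np.array([[142.0, 88.0, 0.96, 96.0],
                       [150.0, 92.0, 0.95, 104.0]])         # pain stress state

scaler = MinMaxScaler().fit(np.vstack([baseline, pain_state]))
vitals_normalized = scaler.transform(pain_state)             # scaled to [0, 1] before the grading model
print(vitals_normalized)
```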
Step e: an interpretable pain score mapping model is built by adopting a deep learning algorithm, the Shapley Additive exPlanations (SHAP) method and related technical steps, achieving digital pain prediction based on basic vital signs and expressions. After the processed vital sign data and expression features are input, the model calculates a pain score and derives the patient's pain level from it, the levels comprising mild pain, moderate pain and severe pain.
The pain prediction models built on the Random Forest Classifier and the Extra Trees Classifier both reach an accuracy above 90% and an AUC value above 0.9 (FIG. 14). Comparisons of the models' automated predictions with current clinical VAS results are shown in FIG. 12 and FIG. 13. The experimental results show that the correlation coefficient R between the VAS values predicted by the Random Forest Classifier and the current clinical VAS values is 0.8332 with P = 7.4712e-109 (P < 0.05), and the correlation coefficient R between the VAS values predicted by the Extra Trees Classifier and the current clinical VAS values is 0.7936 with P = 1.3915e-91 (P < 0.05). The results indicate that the constructed VAS prediction models show no statistically significant difference from current clinical VAS results and can accurately and objectively reflect the patient's degree of pain.
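The reported correlation between predicted and clinical VAS values corresponds to a Pearson test; a small sketch with scipy follows, using made-up data in place of the study's results.

```python
import numpy as np
from scipy.stats import pearsonr

# y_pred: model-predicted VAS values; y_clinical: clinician-recorded VAS values (made-up demo data).
rng = np.random.default_rng(0)
y_clinical = rng.integers(0, 11, size=100).astype(float)
y_pred = y_clinical + rng.normal(scale=1.5, size=100)

r, p = pearsonr(y_pred, y_clinical)
print(f"Pearson R = {r:.4f}, P = {p:.4e}")
```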

Claims (3)

1. A method for multi-algorithm objective assessment of pain based on vital signs and expressions, the method comprising the steps of:
(1) Collecting the vital signs of a patient in a non-pain stress state as baseline data; when the patient is in pain, starting image acquisition and shooting a frontal image of the face with camera equipment;
(2) Identifying the face region and extracting expression features with a deep learning method; the expression features comprise texture features, shallow features and deep features;
(3) Calculating the importance of each expression feature to the output by using the minimum redundancy maximum relevance (mRMR) algorithm, screening out the important expression features of each stage, and inputting them into the intelligent self-adaptive pain grading model in a parallel connection mode;
the intelligent self-adaptive pain grading model is as shown in formula I and formula II:
W = \frac{1}{|s|^{2}} \sum_{i,j \in s} I(i,j)    (formula I)

V = \frac{1}{|s|} \sum_{i \in s} I(h,i)    (formula II)
wherein W is the redundancy of the important expression feature data, V is the relevance between the important expression feature data and the pain VAS score, s represents a feature subset, I represents mutual information, i and j represent two variables of the important expression features, h represents the target class (VAS grade), and MIQ is the mutual information quotient V/W used as the selection criterion;
(4) Meanwhile, vital sign data under a pain stress state are collected, and the data are input into an intelligent self-adaptive pain grading model after being normalized;
the intelligent self-adaptive pain grading model is as shown in formula I and formula II:
W = \frac{1}{|s|^{2}} \sum_{i,j \in s} I(i,j)    (formula I)

V = \frac{1}{|s|} \sum_{i \in s} I(h,i)    (formula II)
wherein W is the redundancy of the vital sign data, V is the relevance between the vital sign data and the pain VAS score, s represents a feature subset, I represents mutual information, i and j represent two vital sign variables, h represents the target class (VAS grade), and MIQ is the mutual information quotient V/W used as the selection criterion;
(5) Building an interpretable pain score mapping model by adopting a deep learning algorithm and the SHAP method; the processed vital sign data and expression features are input into the interpretable pain score mapping model, which calculates a pain score and determines the patient's pain level accordingly, the levels comprising mild pain, moderate pain and severe pain;
the interpretable pain score mapping model is of formula iii:
Figure FDA0003719469500000021
wherein phiiSHAP value representing ith feature, S represents feature subset, M represents feature quantity, fS∪{i}(S { i }) and f { i })S(S) model predictions with and without feature i, respectively.
The SHAP interaction value represents the difference between the SHAP value of feature i when feature j is present and the SHAP value of feature i when feature j is absent; the interpretable pain score mapping model can thereby capture important interactions between features that might otherwise be missed, as in formula IV:

\Phi_{i,j} = \sum_{S \subseteq M \setminus \{i,j\}} \frac{|S|!\,(|M|-|S|-2)!}{2\,(|M|-1)!}\, \nabla_{ij}(S), \quad i \neq j    (formula IV)

where \nabla_{ij}(S) = f_{S \cup \{i,j\}}(S \cup \{i,j\}) - f_{S \cup \{i\}}(S \cup \{i\}) - f_{S \cup \{j\}}(S \cup \{j\}) + f_{S}(S).
2. the multi-algorithm objective pain assessment method based on vital signs and expressions according to claim 1, wherein the vital signs include blood pressure, blood oxygen saturation and pulse.
3. The multi-algorithm objective pain assessment method based on vital signs and expressions according to claim 1, wherein the texture features in step (2) have 348 dimensions and involve 7 different types, including: 73 geometric parameter features, 9 gray-level histogram features, 220 gray-level co-occurrence matrix features, 20 run-length matrix features, 5 gradient model features, 5 autoregressive model features and 16 wavelet features; the shallow features refer to the 1024-dimensional features extracted from the 'res4b8_relu' layer; the deep features refer to the 2048-dimensional features extracted from the 'pool5' layer.
CN202210746252.3A 2022-06-29 2022-06-29 Pain multi-algorithm objective assessment method based on vital signs and expressions Pending CN115273176A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210746252.3A CN115273176A (en) 2022-06-29 2022-06-29 Pain multi-algorithm objective assessment method based on vital signs and expressions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210746252.3A CN115273176A (en) 2022-06-29 2022-06-29 Pain multi-algorithm objective assessment method based on vital signs and expressions

Publications (1)

Publication Number Publication Date
CN115273176A 2022-11-01

Family

ID=83763120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210746252.3A Pending CN115273176A (en) 2022-06-29 2022-06-29 Pain multi-algorithm objective assessment method based on vital signs and expressions

Country Status (1)

Country Link
CN (1) CN115273176A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117860207A (en) * 2024-03-13 2024-04-12 长春理工大学 Video non-contact measurement pain identification method and system based on data analysis
CN117860207B (en) * 2024-03-13 2024-05-10 长春理工大学 Video non-contact measurement pain identification method and system based on data analysis

Similar Documents

Publication Publication Date Title
CN108806792B (en) Deep learning face diagnosis system
JP6522161B2 (en) Medical data analysis method based on deep learning and intelligent analyzer thereof
CN111759345B (en) Heart valve abnormality analysis method, system and device based on convolutional neural network
JP7303531B2 (en) Method and apparatus for evaluating difficult airways based on artificial intelligence
CN110459328A (en) A kind of Clinical Decision Support Systems for assessing sudden cardiac arrest
CN117077786A (en) Knowledge graph-based data knowledge dual-drive intelligent medical dialogue system and method
CN107066514A (en) The Emotion identification method and system of the elderly
CN111341437B (en) Digestive tract disease judgment auxiliary system based on tongue image
CN113689954A (en) Hypertension risk prediction method, device, equipment and medium
CN117438048B (en) Method and system for assessing psychological disorder of psychiatric patient
CN110659420A (en) Personalized catering method based on deep neural network Monte Carlo search tree
CN115295114A (en) Analgesic administration method based on objective pain assessment system
CN115064246A (en) Depression evaluation system and equipment based on multi-mode information fusion
CN108492877A (en) A kind of cardiovascular disease auxiliary prediction technique based on DS evidence theories
CN111462082A (en) Focus picture recognition device, method and equipment and readable storage medium
CN118044813B (en) Psychological health condition assessment method and system based on multitask learning
CN113744877A (en) Chronic disease assessment and intervention system with disease related factor extraction module
Lu et al. Speech depression recognition based on attentional residual network
CN115273176A (en) Pain multi-algorithm objective assessment method based on vital signs and expressions
CN111047590A (en) Hypertension classification method and device based on fundus images
CN112037888B (en) Physiological health characteristic data monitoring method, device, equipment and storage medium
CN117010971B (en) Intelligent health risk providing method and system based on portrait identification
Zhang et al. Research on lung sound classification model based on dual-channel CNN-LSTM algorithm
CN113066572A (en) Traditional Chinese medicine auxiliary diagnosis system and method for enhancing local feature extraction
CN115547502B (en) Hemodialysis patient risk prediction device based on time sequence data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination