CN111466878A - Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition - Google Patents
- Publication number: CN111466878A (application CN202010289861.1A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- A61B5/4824 — Touch or pain perception evaluation
- A61B5/0077 — Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/0082 — Measuring for diagnostic purposes using light, adapted for particular medical purposes
- A61B5/7267 — Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
- A61B5/746 — Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06V40/172 — Human faces: classification, e.g. identification
- G06V40/174 — Facial expression recognition
Abstract
The invention discloses a method and a device for real-time monitoring of pain symptoms of bedridden patients based on expression recognition, wherein the method comprises the following steps: 1, establishing a pain expression training data set; 2, establishing a neural network model for analyzing pain expressions and training it to obtain a pain grading model; 3, acquiring three real-time images of the face at the same moment, preprocessing them, and inputting the preprocessed images into the neural network model to obtain the probabilities corresponding to the A pain levels; the pain level with the maximum probability is selected as the pain level of the detected image at the current moment, and an alarm is raised when the pain level exceeds a threshold value, realizing real-time monitoring. The invention can evaluate pain accurately, automatically and in real time, thereby realizing effective monitoring of bedridden patients.
Description
Technical Field
The invention relates to the technical field of image recognition and medical treatment, in particular to a method and a device for monitoring a bedridden patient in real time based on expression recognition.
Background
Technologies related to expression recognition are widely applied in fields such as security, entertainment and finance, where they play an important role. Their application modes and scope are still being improved and expanded, and they have broad development prospects.
Medically, how to assess a patient's pain level has long been an important issue. Pain is medically defined as a subjective feeling, so a patient's self-assessment has high reference value. However, some special patients, such as patients who have lost active consciousness, high-risk patients or incapacitated patients, often cannot describe their pain level verbally; in such cases, manual evaluation by medical observers is the most common pain assessment method. This manual evaluation has many shortcomings, such as low efficiency, limited continuity and observer subjectivity.
At present, China faces a conflict between rising medical demand and a shortage of medical personnel. Effective manual monitoring of the pain level of bedridden patients is lacking, and real-time assessment of a patient's pain is difficult to accomplish, so medical staff may fail to treat symptoms in time because they grasp the patient's physical condition late or not at all.
Automatic pain assessment technologies based on emotion recognition and deep learning are attracting growing attention from scholars at home and abroad. At present, such technologies rely on traditional machine-learning algorithms and often collect many physiological indexes, such as heart rate, blood pressure, blood oxygen saturation, brain waves and myoelectric signals, but they lack effective processing means tailored to pain recognition: few feature points are selected on the face, the division of facial conditions is relatively coarse, and the subjectively felt pain level of the patient is difficult to detect accurately.
There are also newer methods, such as evaluating the grip strength a patient in pain exerts on a sensor; these require the patient to be conscious and to participate actively in the evaluation, and they are strongly influenced by physical condition and individual differences.
Therefore, how to achieve accurate, efficient and automatic pain recognition by means of technologies such as emotion recognition, so as to monitor bedridden patients in real time, is a problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a method and a device for monitoring pain symptoms of a bedridden patient in real time based on expression recognition, so that the pain can be accurately evaluated in real time and automatically, and the effective monitoring of the bedridden patient is realized.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention relates to a real-time monitoring method for pain symptoms of bedridden patients based on expression recognition, which comprises the following steps:
Step 1, establishing a pain expression training data set:
step 1.1, setting A pain grades with pain grades from low to high according to a pain grading scale, and respectively collecting corresponding pain expression picture data according to different pain grades;
step 1.2, preprocessing all pain expression picture data by utilizing a dlib tool to obtain RGB images containing face information of k × k pixels;
step 1.3, converting the RGB image into a gray image so as to obtain a pain expression training data set;
Step 2, establishing a neural network model for analyzing pain expressions and training it to obtain a pain grading model; the neural network model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
the data input layer is used for inputting N pieces of k × k pixel gray images in the pain expression training data set;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
wherein the first convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the first convolutional layer processes any nth gray image, k × 2^(m-2) feature images of dimension (k-(4m-2)s) × (k-(4m-2)s) are obtained, which are input into the second convolutional layer after the first ReLU nonlinear mapping;
the second convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the second convolutional layer processes the first-mapped feature images, k × 2^(m-2) feature images of dimension (k-4ms) × (k-4ms) are obtained, which are input into the pooling layer after the second ReLU nonlinear mapping;
the mth pooling layer applies max pooling with a 2 × 2 pooling kernel to the second-mapped feature images, halving the spatial dimensions, and the resulting k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2 are input into the mth regularization layer;
the mth regularization layer processes the input feature images using discard (dropout) regularization, yielding k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the flattening layer performs dimensionality reduction on the feature images input by the Mth regularization layer, obtaining a one-dimensional feature vector that is input into the first fully-connected layer;
the first fully-connected layer fully connects the input feature vector to its neurons and processes it, obtaining a feature vector that is input into the (M+1)th regularization layer after the (2M+1)th ReLU nonlinear mapping;
the (M+1)th regularization layer processes the (2M+1)th-mapped feature vector using discard regularization, and inputs the resulting feature vector into the second fully-connected layer;
the second fully-connected layer fully connects the feature vector input by the (M+1)th regularization layer to its A neurons and processes it, obtaining a feature vector of dimension A that is input into a softmax regression classifier, so that the probabilities corresponding to the A pain levels are output and training is completed;
Step 3, installing infrared cameras on the bed-head support directly above the bed head and on both of its sides, so as to acquire three real-time images of the face at the same moment; preprocessing the three real-time images with the dlib tool to obtain RGB images of k × k pixels containing face information, and selecting the RGB image with the most complete face information as the detection image;
converting the detection image into a gray detection image of k × k, inputting the gray detection image into the neural network model to obtain the probabilities corresponding to A pain levels, selecting the pain level corresponding to the maximum probability as the pain level of the detection image at the current moment, and alarming if the pain level at the current moment exceeds a set threshold value to realize real-time monitoring.
The real-time monitoring method for pain symptoms of bedridden patients is further characterized in that the probabilities corresponding to the A pain levels in step 2 are mapped to one of (2A+1) pain level scores, ordered from low to high, as follows:
Step 2.1: record the probabilities corresponding to the A pain levels as a probability set {p_0, p_1, …, p_a, …, p_(A-1)}, where p_a denotes the probability corresponding to the ath pain level; obtain the subscript max corresponding to the maximum value p_max in the set, and let the intermediate variable q = 2·max + 1;
Step 2.2: judge whether p_max > z_1; if so, output the corresponding score q_out = q among the (2A+1) pain level scores; otherwise go to step 2.3; here z_1 is a first threshold;
Step 2.3: judge whether max ≠ 0 and max ≠ A-1; if so, execute step 2.4; otherwise execute step 2.5;
Step 2.4: judge whether p_(max-1) ≥ p_(max+1); if so, output the score q_out = q-1; otherwise output q_out = q+1;
Step 2.5: judge whether max = 0; if so, execute step 2.6; otherwise execute step 2.7;
Step 2.6: judge whether p_1 > z_2; if so, output the score q_out = q+1; otherwise output q_out = q-1; here z_2 is a second threshold;
Step 2.7: judge whether max = A-1; if so, execute step 2.8;
Step 2.8: judge whether p_(A-2) > z_2; if so, output q_out = q-1; otherwise output q_out = q+1.
The invention also relates to a device for realizing real-time monitoring of pain symptoms of bedridden patients based on expression recognition, which comprises an image acquisition module, a preprocessing module, a pain grading recognition module, an alarm module and an output module;
the image acquisition module acquires three real-time images of the human face at the same moment by using infrared cameras respectively arranged right above and at two sides of the bedside support and sends the three real-time images to the preprocessing module;
the preprocessing module preprocesses the three real-time images by utilizing a dlib tool to obtain RGB images containing face information of k × k pixels, and selects the RGB image with the most complete face information as a detection image;
the pain grading recognition module comprises a pain grading model; the pain grading model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the pain grading recognition module performs pain level recognition on the detection image using the pain grading model, obtains the probabilities corresponding to the A pain levels, and selects the pain level with the maximum probability as the pain level of the detection image at the current moment;
the output module displays the probabilities corresponding to the A pain levels and the detection image;
the alarm module judges whether the pain level at the current moment exceeds a set threshold value, and if so, the alarm module carries out alarm processing to realize real-time monitoring.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention uses deep learning to construct a pain grading model from the feature information contained in facial expressions that reflects the patient's subjective feeling, thereby realizing real-time, automatic pain evaluation of general bedridden patients and even of patients who have lost active consciousness or are incapacitated, and giving corresponding real-time feedback and protective alarms according to the evaluation results; this reduces the workload of medical staff and provides bedridden patients with more comprehensive medical protection and timely rescue.
2. The invention adopts a neural network with a VGG-like structure, trained on a pain expression data set to obtain a pain recognition model that recognizes the input preprocessed facial expression information and gives a pain recognition result; in hardware, a monitoring device comprising an image acquisition module, a preprocessing module, a pain grading recognition module, an alarm module and an output module is adopted, so that real-time monitoring of pain symptoms of bedridden patients is realized accurately, efficiently and automatically.
3. The invention adopts a multi-angle camera system to collect facial information of the bedridden patient and performs image processing such as graying, size scaling and key-information extraction; pain features are extracted and analyzed with a convolutional neural network, a deep learning method; and pain-degree judgments in different forms are given by a mapping algorithm; this overcomes defects of the prior art, such as poor accuracy, low automation, poor repeatability, poor timeliness and limited universality, and realizes effective monitoring of bedridden patients.
4. By adopting image processing, the method reduces irrelevant information while retaining key information, improving accuracy on the one hand and execution efficiency on the other; the convolutional neural network with a VGG-like structure reduces the number of parameters by using small convolution kernels and similar techniques, trains quickly, has good feature-extraction capability, and achieves a detection accuracy of 86.1%; the mapping algorithm lets the judgment result adapt to different pain scales, giving better universality than the prior art; and multi-angle imaging is more accurate than the traditional single-angle imaging, suits detection of bedridden patients in different postures such as lying on the side or on the back, and improves universality.
Drawings
FIG. 1 is a flow chart of a real-time monitoring method for a bedridden patient according to the present invention;
FIG. 2 is a flow chart of the operation of the real-time monitoring device for bedridden patients according to the present invention;
FIG. 3 is a schematic diagram of a neural network of the present invention;
FIG. 4 is a schematic diagram of a partial mapping algorithm of the present invention.
Detailed Description
In this embodiment, as shown in fig. 1, a method for real-time monitoring pain symptoms of a bedridden patient based on expression recognition is performed according to the following steps:
step 1.1, setting A pain grades with pain grades from low to high according to a pain grading scale, and respectively collecting corresponding pain expression picture data according to different pain grades;
In a specific implementation, according to the pain degree classification of the World Health Organization, A is taken as 5, giving five levels 0-IV that correspond respectively to no pain, mild pain, moderate pain, severe pain and intolerable pain; the pain expression picture data set is established according to these five pain levels.
Step 1.2, preprocessing all the pain expression picture data with the dlib tool to obtain RGB images of k × k pixels containing face information; the picture pixel value k is taken as 64, giving images of 64 × 64 pixels containing face information.
Step 1.3, converting the RGB image into a gray image so as to obtain a pain expression training data set;
The data are divided into a training set, a test set and a verification set in the ratio 6:2:2, forming a pain data set V = {v1, v2, v3, …, vn, …, vN}, where vn denotes the information of the nth pain picture and N is the total number of pictures in the pain expression picture data set V; each vn comprises a pain level label (marking pain levels 0-IV), a two-dimensional pixel array {a0, a1, a2, a3, …, a4095} (64 × 64), and an image usage division (training, testing and verification, denoted by 0, 1 and 2 respectively).
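For concreteness, the record layout and the 6:2:2 split just described can be sketched as follows (Python; the class and field names are illustrative, not taken from the patent):

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class PainSample:
    pixels: List[int]   # 64*64 = 4096 gray values {a0, a1, ..., a4095}
    pain_level: int     # label 0-IV (0 = no pain ... 4 = intolerable pain)
    usage: int = 0      # 0 = training, 1 = testing, 2 = verification

def split_dataset(samples: List[PainSample], seed: int = 0) -> None:
    """Assign the 6:2:2 training/testing/verification split in place."""
    rng = random.Random(seed)
    rng.shuffle(samples)
    n = len(samples)
    for i, s in enumerate(samples):
        s.usage = 0 if i < 0.6 * n else (1 if i < 0.8 * n else 2)
```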
the neural network model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
the data input layer is used for inputting N pieces of k × k pixel gray images in the pain expression training data set;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
wherein the first convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the first convolutional layer processes any nth gray image, k × 2^(m-2) feature images of dimension (k-(4m-2)s) × (k-(4m-2)s) are obtained, which are input into the second convolutional layer after the first ReLU nonlinear mapping;
the second convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the second convolutional layer processes the first-mapped feature images, k × 2^(m-2) feature images of dimension (k-4ms) × (k-4ms) are obtained, which are input into the pooling layer after the second ReLU nonlinear mapping;
the mth pooling layer applies max pooling with a 2 × 2 pooling kernel to the second-mapped feature images, halving the spatial dimensions, and the resulting k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2 are input into the mth regularization layer;
the mth regularization layer processes the input feature images using discard (dropout) regularization, yielding k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the flattening layer performs dimensionality reduction on the feature images input by the Mth regularization layer, obtaining a one-dimensional feature vector that is input into the first fully-connected layer;
the first fully-connected layer fully connects the input feature vector to its neurons and processes it, obtaining a feature vector that is input into the (M+1)th regularization layer after the (2M+1)th ReLU nonlinear mapping;
the (M+1)th regularization layer processes the (2M+1)th-mapped feature vector using discard regularization, and inputs the resulting feature vector into the second fully-connected layer;
the second fully-connected layer fully connects the feature vector input by the (M+1)th regularization layer to its A neurons and processes it, obtaining a feature vector of dimension A that is input into a softmax regression classifier, so that the probabilities corresponding to the A pain levels are output and training is completed;
According to actual test results, a convolution step length s = 1 in this network structure allows the features of the preprocessed image to be fully extracted, and using M = 3 convolution fitting layer groups extracts image features well while avoiding overfitting.
In a specific implementation, the constructed neural network model comprises the following layers in sequence (a code sketch follows the list):
(1) Data input layer: for the array of N pain expression pictures, the data corresponding to each image is expanded into a 64 × 64 two-dimensional array of gray pixel values;
(2) Convolutional layer 1: 32 groups of convolution kernels of size 3 × 3 × 1 with convolution step 1 yield 32 feature images of 62 × 62, which are sent to the third layer (convolutional layer 2) after ReLU nonlinear mapping;
(3) Convolutional layer 2: 32 groups of 3 × 3 × 1 convolution kernels with step 1 yield 32 feature images of 60 × 60, which are sent to the fourth layer (pooling layer 1) after ReLU nonlinear mapping;
(4) Pooling layer 1: max pooling with a 2 × 2 pooling kernel halves the spatial dimensions, yielding 32 feature images of 30 × 30, which are sent to the fifth layer (regularization layer 1);
(5) Regularization layer 1: discard regularization with dropout = 0.25 alleviates overfitting, yielding 32 feature images of 30 × 30, which are sent to the sixth layer (convolutional layer 3);
(6) Convolutional layer 3: 64 groups of 3 × 3 × 1 convolution kernels with step 1 yield 64 feature images of 28 × 28, which are sent to the seventh layer (convolutional layer 4) after ReLU nonlinear mapping;
(7) Convolutional layer 4: 64 groups of 3 × 3 × 1 convolution kernels with step 1 yield 64 feature images of 26 × 26, which are sent to the eighth layer (pooling layer 2) after ReLU nonlinear mapping;
(8) Pooling layer 2: 2 × 2 max pooling yields 64 feature images of 13 × 13, which are sent to the ninth layer (regularization layer 2);
(9) Regularization layer 2: discard regularization with dropout = 0.25 yields 64 feature images of 13 × 13, which are sent to the tenth layer (convolutional layer 5);
(10) Convolutional layer 5: 128 groups of 3 × 3 × 1 convolution kernels with step 1 yield 128 feature images of 11 × 11, which are sent to the eleventh layer (convolutional layer 6) after ReLU nonlinear mapping;
(11) Convolutional layer 6: 128 groups of 3 × 3 × 1 convolution kernels with step 1 yield 128 feature images of 9 × 9, which are sent to the twelfth layer (pooling layer 3) after ReLU nonlinear mapping;
(12) Pooling layer 3: 2 × 2 max pooling yields 128 feature images of 4 × 4, which are sent to the thirteenth layer (regularization layer 3);
(13) Regularization layer 3: discard regularization with dropout = 0.25 yields 128 feature images of 4 × 4, which are sent to the fourteenth layer (flattening layer);
(14) Flattening layer: the multidimensional input is flattened into one dimension, yielding a feature vector of size 2048, which is sent to the fifteenth layer (fully-connected layer 1);
(15) Fully-connected layer 1: the output of the flattening layer is fully connected to the 256 neurons of this layer, yielding a feature vector of size 256, which is sent to the sixteenth layer (regularization layer 4) after ReLU nonlinear mapping;
(16) Regularization layer 4: discard regularization with dropout = 0.5 yields a feature vector of size 256, which is sent to the seventeenth layer (fully-connected layer 2);
(17) Fully-connected layer 2: the output of the sixteenth regularization layer is fully connected to the 5 neurons of this layer, yielding a feature vector of size 5 that is processed by a softmax regression classifier; the number of output nodes is 5.
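The seventeen layers above correspond to a small sequential CNN. The following is a minimal sketch assuming a Keras/TensorFlow implementation; the patent names no framework, so the framework choice and the function name build_pain_model are assumptions. The commented shapes reproduce the dimensions listed in (1)-(17).

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_pain_model(k: int = 64, num_levels: int = 5) -> keras.Model:
    """VGG-like pain grading network, layers (1)-(17)."""
    return keras.Sequential([
        keras.Input(shape=(k, k, 1)),                  # (1) 64x64 gray input
        layers.Conv2D(32, 3, activation="relu"),       # (2) -> 32 x 62x62
        layers.Conv2D(32, 3, activation="relu"),       # (3) -> 32 x 60x60
        layers.MaxPooling2D(2),                        # (4) -> 32 x 30x30
        layers.Dropout(0.25),                          # (5)
        layers.Conv2D(64, 3, activation="relu"),       # (6) -> 64 x 28x28
        layers.Conv2D(64, 3, activation="relu"),       # (7) -> 64 x 26x26
        layers.MaxPooling2D(2),                        # (8) -> 64 x 13x13
        layers.Dropout(0.25),                          # (9)
        layers.Conv2D(128, 3, activation="relu"),      # (10) -> 128 x 11x11
        layers.Conv2D(128, 3, activation="relu"),      # (11) -> 128 x 9x9
        layers.MaxPooling2D(2),                        # (12) -> 128 x 4x4
        layers.Dropout(0.25),                          # (13)
        layers.Flatten(),                              # (14) -> 2048
        layers.Dense(256, activation="relu"),          # (15)
        layers.Dropout(0.5),                           # (16)
        layers.Dense(num_levels, activation="softmax") # (17) 5 pain levels
    ])
```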
The neural network model is then trained, resulting in a model for pain grading:
(1) For each pain level, a sufficient number of training samples are selected from the training data set under that pain level and input into the untrained pain grading neural network.
(2) For each input sample, the feature vector produced by the neural network is obtained.
(3) The feature vector of the sample is substituted into the loss function to calculate a loss value.
(4) The network parameters of the pain grading neural network are adjusted according to the loss values, and the recognition accuracy is tested and fed back using the verification set. The pain grading neural network is trained iteratively on the data set, and training stops after the specified number of iterations, yielding the trained pain grading neural network, i.e., the pain grading model.
In particular implementations, the neural network may be optimized using algorithms including, but not limited to, stochastic gradient descent, adaptive moment estimation (Adam), and the like; a minimal training sketch follows.
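The sketch below is consistent with steps (1)-(4) above and assumes the build_pain_model sketch from the previous section; the optimizer (Adam, one of the options just named), the cross-entropy loss and the epoch/batch settings are assumptions, since the patent specifies none of them, and the random arrays merely stand in for the real pain data set.

```python
import numpy as np

model = build_pain_model()
model.compile(optimizer="adam",                        # adaptive moment estimation
              loss="sparse_categorical_crossentropy",  # assumed loss function
              metrics=["accuracy"])

# Placeholder arrays standing in for the 6:2:2 split of the pain data set.
rng = np.random.default_rng(0)
x_train = rng.random((600, 64, 64, 1), dtype=np.float32)
y_train = rng.integers(0, 5, 600)
x_val = rng.random((200, 64, 64, 1), dtype=np.float32)
y_val = rng.integers(0, 5, 200)

# Iterative training with verification-set feedback; training stops after
# the specified number of iterations (epochs).
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, batch_size=32)
```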
Infrared cameras installed on the bed-head support directly above the bed head and on both of its sides acquire three real-time images of the face at the same moment; the three real-time images are preprocessed with the dlib tool to obtain RGB images of k × k pixels containing face information, and the RGB image with the most complete face information is selected as the detection image;
converting the detected image into a gray-scale detected image of k × k, inputting the gray-scale detected image into a neural network model to obtain the probabilities corresponding to A pain levels, selecting the pain level corresponding to the maximum probability as the pain level of the detected image at the current moment, and alarming if the pain level at the current moment exceeds a set threshold value to realize real-time monitoring.
In implementation, as shown in Fig. 4, the probabilities corresponding to the A pain levels in step 2 are mapped to one of the (2A+1) pain level scores, ordered from low to high, as follows:
Step 2.1: record the probabilities corresponding to the A pain levels as a probability set {p_0, p_1, …, p_a, …, p_(A-1)}, where p_a denotes the probability corresponding to the ath pain level; obtain the subscript max corresponding to the maximum value p_max in the set, and let the intermediate variable q = 2·max + 1;
Step 2.2: judge whether p_max > z_1; if so, output the corresponding score q_out = q among the (2A+1) pain level scores; otherwise go to step 2.3; here z_1 is a first threshold;
Step 2.3: judge whether max ≠ 0 and max ≠ A-1; if so, execute step 2.4; otherwise execute step 2.5;
Step 2.4: judge whether p_(max-1) ≥ p_(max+1); if so, output the score q_out = q-1; otherwise output q_out = q+1;
Step 2.5: judge whether max = 0; if so, execute step 2.6; otherwise execute step 2.7;
Step 2.6: judge whether p_1 > z_2; if so, output the score q_out = q+1; otherwise output q_out = q-1; here z_2 is a second threshold;
Step 2.7: judge whether max = A-1; if so, execute step 2.8;
Step 2.8: judge whether p_(A-2) > z_2; if so, output q_out = q-1; otherwise output q_out = q+1.
The mapping method is designed to accommodate situations in which different pain-level medical standards are adopted in different use scenarios.
Another common pain grading criterion, the Numerical Rating Scale (NRS), is taken as an example here (a code sketch of the mapping follows the example):
(1) The 0-10 scores of the NRS are divided into five groups, 0-1-2, 2-3-4, 4-5-6, 6-7-8 and 8-9-10, corresponding respectively to levels 0-IV of the neural network output. The level with the highest probability in the output is taken as the initial selection range of the NRS score. This narrows the effective range of the NRS score to be determined to within 3;
(2) If the highest probability value in the neural network output is greater than 0.8, the NRS score is recorded as the middle item of the initial selection range, i.e., 1, 3, 5, 7 or 9;
(3) If the level with the highest probability is a middle level (level I, II or III) and its probability value is less than or equal to 0.8, the probability values to the left and right of the highest item are compared: if the left one is larger, the minimum value of the initial selection unit is output, i.e., 2, 4 or 6; if the right one is larger, the maximum value is output, i.e., 4, 6 or 8;
(4) If the level with the highest probability is level 0 (or level IV) and its probability value is less than or equal to 0.8, the probability of the adjacent level (level I or level III, respectively) is checked: if it exceeds 0.2, 2 (or 8) is output; otherwise 0 (or 10) is output.
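Steps 2.1-2.8 amount to a small decision procedure. The sketch below is one possible Python rendering; the function name is illustrative, and the defaults z1 = 0.8 and z2 = 0.2 are taken from the NRS example above.

```python
from typing import Sequence

def map_to_score(p: Sequence[float], z1: float = 0.8, z2: float = 0.2) -> int:
    """Map A class probabilities to one of (2*A+1) scores (steps 2.1-2.8)."""
    A = len(p)
    m = max(range(A), key=lambda a: p[a])     # step 2.1: subscript of p_max
    q = 2 * m + 1                             # intermediate variable q
    if p[m] > z1:                             # step 2.2: confident prediction
        return q
    if m != 0 and m != A - 1:                 # steps 2.3-2.4: interior level
        return q - 1 if p[m - 1] >= p[m + 1] else q + 1
    if m == 0:                                # steps 2.5-2.6: lowest level
        return q + 1 if p[1] > z2 else q - 1
    return q - 1 if p[A - 2] > z2 else q + 1  # steps 2.7-2.8: highest level

# Level II most probable but below z1, left neighbour larger -> NRS score 4
print(map_to_score([0.05, 0.30, 0.45, 0.15, 0.05]))  # -> 4
```

With A = 5 this yields one of the eleven NRS scores 0-10, reproducing cases (1)-(4) of the example above.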
in this embodiment, as shown in fig. 2, a device for realizing real-time monitoring of pain symptoms of a bedridden patient based on expression recognition includes: the pain grading recognition system comprises an image acquisition module, a preprocessing module, a pain grading recognition module, an alarm module and an output module;
The image acquisition module acquires three real-time images of the human face at the same moment, using infrared cameras installed on the bed-head support directly above the bed head and on both of its sides, and sends the three real-time images to the preprocessing module. Specifically:
(1) Because the subject may lie in different postures and cannot be guaranteed to face one direction for a long time, the system adopts camera groups at different angles (front, left and right) and processes the images they acquire in parallel in the subsequent steps, so that information can be acquired effectively and subjects in different postures can be recognized;
(2) The camera group captures one picture at a fixed time interval (e.g., at least every 10 seconds) to ensure that the system recognizes and outputs pain grading information in real time.
(3) The camera group sends the pictures to the processing terminal through the communication module, and the part of each picture containing the human face is extracted.
The preprocessing module preprocesses the three real-time images with the dlib tool to obtain RGB images of k × k pixels containing face information, and selects the RGB image with the most complete face information as the detection image. Specifically:
(1) The position of the face is confirmed by recognizing the feature regions of the facial organs (eyes, nose, mouth, etc.), and ROI (Region of Interest) framing is performed on the corresponding part; the extracted image is a square facial feature image.
(2) The image is preprocessed accordingly: the extracted image is scaled with a resize function using nearest-neighbour interpolation and converted into a standard image of size 64 × 64; the red, green and blue components are read, the gray value of each pixel is calculated, and the color components of the pixels are re-assigned, giving a gray image convenient for further uniform processing.
(3) The image obtained in the previous step is stored in array form as the final result of image preprocessing; a minimal code sketch of these steps follows.
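The sketch below assumes Python with dlib and OpenCV; dlib's frontal face detector is used as a stand-in for the landmark-based organ-region framing described above, so the square crop is an approximation.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG-based frontal face detector

def preprocess_face(bgr_image, k: int = 64):
    """Steps (1)-(3): locate the face ROI, scale it to k x k with
    nearest-neighbour interpolation, and convert it to a gray image.
    Returns None when no face is found."""
    rects = detector(bgr_image, 1)           # upsample once for small faces
    if not rects:
        return None
    r = rects[0]
    top, left = max(r.top(), 0), max(r.left(), 0)
    face = bgr_image[top:r.bottom(), left:r.right()]
    face = cv2.resize(face, (k, k), interpolation=cv2.INTER_NEAREST)
    return cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
```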
The pain grading recognition module comprises a pain grading model; the pain grading model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the pain grading recognition module performs pain level recognition on the detection image using the pain grading model, obtains the probabilities corresponding to the A pain levels, and selects the pain level with the maximum probability as the pain level of the detection image at the current moment;
the output module displays the probabilities corresponding to the A pain levels and the detection image;
The alarm module judges whether the pain level at the current moment exceeds the set threshold value and, if so, performs alarm processing, realizing real-time monitoring. Specifically:
(1) The image information obtained through preprocessing is input into the trained pain grading model, which processes it to obtain the pain level evaluation result for the bedridden patient. In this step, the expression recognition task can be scheduled and automated through a script;
(2) The real-time pain level evaluation result is transmitted to the medical staff, the pain level and the selected effective image are displayed, and when an alarm condition is met (for example, when the analyzed pain level is greater than or equal to level III under the 0-IV grading method), the medical staff are reminded by a buzzer; a hypothetical end-to-end sketch follows.
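As a hypothetical illustration of how the modules fit together, the loop below captures from the three cameras, preprocesses, grades and alarms; the camera interface, the "most complete face" selection and the buzzer output are simplified stand-ins, not the patent's implementation.

```python
import time
import numpy as np

ALARM_LEVEL = 3  # level III on the 0-IV scale, per the alarm example above

def monitor(cameras, model, interval_s: float = 10.0):
    """Hypothetical top-level loop: capture, preprocess, grade, alarm."""
    while True:
        frames = [cam.read() for cam in cameras]      # three viewing angles
        rois = [preprocess_face(f) for f in frames]
        rois = [r for r in rois if r is not None]
        if rois:
            best = rois[0]  # stand-in for "most complete face" selection
            x = best.astype(np.float32)[None, :, :, None] / 255.0
            probs = model.predict(x, verbose=0)[0]    # A = 5 probabilities
            level = int(np.argmax(probs))
            if level >= ALARM_LEVEL:
                print(f"ALARM: pain level {level}, p = {probs[level]:.2f}")
        time.sleep(interval_s)                        # e.g. every 10 seconds
```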
Claims (3)
1. A real-time monitoring method for pain symptoms of bedridden patients based on expression recognition is characterized by comprising the following steps:
step 1, establishing a pain expression training data set:
step 1.1, setting A pain grades with pain grades from low to high according to a pain grading scale, and respectively collecting corresponding pain expression picture data according to different pain grades;
step 1.2, preprocessing all pain expression picture data by utilizing a dlib tool to obtain RGB images containing face information of k × k pixels;
step 1.3, converting the RGB image into a gray image so as to obtain a pain expression training data set;
step 2, establishing a neural network model for analyzing the pain expression and training to obtain a pain grading model;
the neural network model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
the data input layer is used for inputting N pieces of k × k pixel gray images in the pain expression training data set;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
wherein the first convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the first convolutional layer processes any nth gray image, k × 2^(m-2) feature images of dimension (k-(4m-2)s) × (k-(4m-2)s) are obtained, which are input into the second convolutional layer after the first ReLU nonlinear mapping;
the second convolutional layer uses k × 2^(m-2) convolution kernels with a convolution step length s; after the second convolutional layer processes the first-mapped feature images, k × 2^(m-2) feature images of dimension (k-4ms) × (k-4ms) are obtained, which are input into the pooling layer after the second ReLU nonlinear mapping;
the mth pooling layer applies max pooling with a 2 × 2 pooling kernel to the second-mapped feature images, halving the spatial dimensions, and the resulting k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2 are input into the mth regularization layer;
the mth regularization layer processes the input feature images using discard (dropout) regularization, yielding k × 2^(m-2) feature images of dimension (k-4ms)/2 × (k-4ms)/2;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the flattening layer performs dimensionality reduction on the feature images input by the Mth regularization layer, obtaining a one-dimensional feature vector that is input into the first fully-connected layer;
the first fully-connected layer fully connects the input feature vector to its neurons and processes it, obtaining a feature vector that is input into the (M+1)th regularization layer after the (2M+1)th ReLU nonlinear mapping;
the (M+1)th regularization layer processes the (2M+1)th-mapped feature vector using discard regularization, and inputs the resulting feature vector into the second fully-connected layer;
the second fully-connected layer fully connects the feature vector input by the (M+1)th regularization layer to its A neurons and processes it, obtaining a feature vector of dimension A that is input into a softmax regression classifier, so that the probabilities corresponding to the A pain levels are output and training is completed;
step 3, respectively installing infrared cameras above and at two sides of the bed head on the bed head support, so as to acquire three real-time images of the face of the person at the same moment;
preprocessing the three real-time images by utilizing a dlib tool to obtain RGB images containing face information of k × k pixels, and selecting the RGB image with the most complete face information as a detection image;
converting the detection image into a gray detection image of k × k, inputting the gray detection image into the neural network model to obtain the probabilities corresponding to A pain levels, selecting the pain level corresponding to the maximum probability as the pain level of the detection image at the current moment, and alarming if the pain level at the current moment exceeds a set threshold value to realize real-time monitoring.
2. The real-time monitoring method for pain symptoms of bedridden patients according to claim 1, characterized in that the probabilities corresponding to the A pain levels in step 2 are mapped to one of (2A+1) pain level scores, ordered from low to high, as follows:
Step 2.1: record the probabilities corresponding to the A pain levels as a probability set {p_0, p_1, …, p_a, …, p_(A-1)}, where p_a denotes the probability corresponding to the ath pain level; obtain the subscript max corresponding to the maximum value p_max in the set, and let the intermediate variable q = 2·max + 1;
Step 2.2: judge whether p_max > z_1; if so, output the corresponding score q_out = q among the (2A+1) pain level scores; otherwise go to step 2.3; here z_1 is a first threshold;
Step 2.3: judge whether max ≠ 0 and max ≠ A-1; if so, execute step 2.4; otherwise execute step 2.5;
Step 2.4: judge whether p_(max-1) ≥ p_(max+1); if so, output the score q_out = q-1; otherwise output q_out = q+1;
Step 2.5: judge whether max = 0; if so, execute step 2.6; otherwise execute step 2.7;
Step 2.6: judge whether p_1 > z_2; if so, output the score q_out = q+1; otherwise output q_out = q-1; here z_2 is a second threshold;
Step 2.7: judge whether max = A-1; if so, execute step 2.8;
Step 2.8: judge whether p_(A-2) > z_2; if so, output q_out = q-1; otherwise output q_out = q+1.
3. A device for realizing real-time monitoring of pain symptoms of bedridden patients based on expression recognition, characterized by comprising: an image acquisition module, a preprocessing module, a pain grading recognition module, an alarm module and an output module;
the image acquisition module acquires three real-time images of the human face at the same moment by using infrared cameras respectively arranged right above and at two sides of the bedside support and sends the three real-time images to the preprocessing module;
the preprocessing module preprocesses the three real-time images by utilizing a dlib tool to obtain RGB images containing face information of k × k pixels, and selects the RGB image with the most complete face information as a detection image;
the pain grading recognition module comprises a pain grading model; the pain grading model sequentially comprises a data input layer, M convolution fitting layer groups and an output processing layer group;
any mth convolution fitting layer group sequentially comprises two convolution layers, a pooling layer and a regularization layer;
the output processing layer group sequentially comprises a flattening layer, a first fully-connected layer, an (M+1)th regularization layer and a second fully-connected layer;
the pain grading identification module identifies the pain grades of the detection images by using the pain grading model, obtains the probabilities corresponding to A pain grades, and selects the pain grade corresponding to the maximum probability as the pain grade of the detection image at the current moment;
the output module displays the probabilities corresponding to the A pain levels and the detection image;
the alarm module judges whether the pain level at the current moment exceeds a set threshold value, and if so, the alarm module carries out alarm processing to realize real-time monitoring.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010289861.1A CN111466878A (en) | 2020-04-14 | 2020-04-14 | Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010289861.1A CN111466878A (en) | 2020-04-14 | 2020-04-14 | Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111466878A (en) | 2020-07-31
Family
ID=71751883
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010289861.1A Pending CN111466878A (en) | 2020-04-14 | 2020-04-14 | Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111466878A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019204700A1 (en) * | 2018-04-19 | 2019-10-24 | University Of South Florida | Neonatal pain identification from neonatal facial expressions |
CN110163302A (en) * | 2019-06-02 | 2019-08-23 | 东北石油大学 | Indicator card recognition methods based on regularization attention convolutional neural networks |
CN110175596A (en) * | 2019-06-04 | 2019-08-27 | 重庆邮电大学 | The micro- Expression Recognition of collaborative virtual learning environment and exchange method based on double-current convolutional neural networks |
CN110321827A (en) * | 2019-06-27 | 2019-10-11 | 嘉兴深拓科技有限公司 | A kind of pain level appraisal procedure based on face pain expression video |
CN110705430A (en) * | 2019-09-26 | 2020-01-17 | 江苏科技大学 | Multi-person facial expression recognition method and system based on deep learning |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114224286A (en) * | 2020-09-08 | 2022-03-25 | 上海联影医疗科技股份有限公司 | Compression method, device, terminal and medium for breast examination |
CN112820382A (en) * | 2021-02-04 | 2021-05-18 | 上海小芃科技有限公司 | Breast cancer postoperative intelligent rehabilitation training method, device, equipment and storage medium |
CN113057597A (en) * | 2021-03-25 | 2021-07-02 | 南通市第一人民医院 | Method and system for monitoring physiological state of puerpera in real time in production process |
CN113080855A (en) * | 2021-03-30 | 2021-07-09 | 广东省科学院智能制造研究所 | Facial pain expression recognition method and system based on depth information |
CN113080855B (en) * | 2021-03-30 | 2023-10-31 | 广东省科学院智能制造研究所 | Facial pain expression recognition method and system based on depth information |
CN113100768A (en) * | 2021-04-14 | 2021-07-13 | 中国人民解放军陆军特色医学中心 | Computer vision incapability and damage effect evaluation system |
CN113100768B (en) * | 2021-04-14 | 2022-12-16 | 中国人民解放军陆军特色医学中心 | Computer vision incapability and damage effect evaluation system |
CN113499035A (en) * | 2021-07-12 | 2021-10-15 | 扬州大学 | Pain recognition system based on confidence interval fusion threshold criterion |
CN113499035B (en) * | 2021-07-12 | 2023-09-05 | 扬州大学 | Pain identification system based on confidence interval fusion threshold criterion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111466878A (en) | Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition | |
CN110287805B (en) | Micro-expression identification method and system based on three-stream convolutional neural network | |
CN109543526B (en) | True and false facial paralysis recognition system based on depth difference characteristics | |
CN107007257B (en) | The automatic measure grading method and apparatus of the unnatural degree of face | |
CN109009102B (en) | Electroencephalogram deep learning-based auxiliary diagnosis method and system | |
CN107133612A (en) | Based on image procossing and the intelligent ward of speech recognition technology and its operation method | |
CN113052113B (en) | Depression identification method and system based on compact convolutional neural network | |
CN112472048B (en) | Method for realizing neural network for identifying pulse condition of cardiovascular disease patient | |
CN112906748A (en) | 12-lead ECG arrhythmia detection classification model construction method based on residual error network | |
CN109508755B (en) | Psychological assessment method based on image cognition | |
CN116645721B (en) | Sitting posture identification method and system based on deep learning | |
CN110929687A (en) | Multi-user behavior recognition system based on key point detection and working method | |
CN113076878B (en) | Constitution identification method based on attention mechanism convolution network structure | |
CN114626419B (en) | Action recognition method based on channel state information in WIFI and improved convolutional neural network | |
CN109978873A (en) | A kind of intelligent physical examination system and method based on Chinese medicine image big data | |
CN111598868B (en) | Lung ultrasonic image identification method and system | |
CN116524612B (en) | rPPG-based human face living body detection system and method | |
CN114067435A (en) | Sleep behavior detection method and system based on pseudo-3D convolutional network and attention mechanism | |
CN109567832A (en) | A kind of method and system of the angry driving condition of detection based on Intelligent bracelet | |
CN114038564A (en) | Noninvasive risk prediction method for diabetes | |
CN118044813B (en) | Psychological health condition assessment method and system based on multitask learning | |
CN112562852A (en) | Cervical spondylosis screening device based on limb movement | |
CN110693510A (en) | Attention deficit hyperactivity disorder auxiliary diagnosis device and using method thereof | |
CN113974627A (en) | Emotion recognition method based on brain-computer generated confrontation | |
Li et al. | Multi-label constitution identification based on tongue image in traditional Chinese medicine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200731 |