CN113570545A - Visual identification pain grading assessment method - Google Patents

Visual identification pain grading assessment method

Info

Publication number
CN113570545A
Authority
CN
China
Prior art keywords
patient
image
pain
information
model
Prior art date
Legal status
Pending
Application number
CN202110609079.8A
Other languages
Chinese (zh)
Inventor
胡安民
李惠萍
海超
钟雄雄
Current Assignee
Shenzhen Peoples Hospital
Original Assignee
Shenzhen Peoples Hospital
Priority date
Filing date
Publication date
Application filed by Shenzhen Peoples Hospital filed Critical Shenzhen Peoples Hospital
Priority to CN202110609079.8A priority Critical patent/CN113570545A/en
Publication of CN113570545A publication Critical patent/CN113570545A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48 Other medical applications
    • A61B 5/4824 Touch or pain perception evaluation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Pain & Pain Management (AREA)
  • Hospice & Palliative Care (AREA)
  • Quality & Reliability (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a visual-recognition-based pain grading assessment method, belonging to the field of pain assessment. The method comprises the following steps: acquiring image information of a patient; extracting posture information of the patient from the posture image through a skeleton model, and extracting the positions of and relative distances between the patient's facial features from the face image through a facial-feature model; and feeding the extracted feature information into a trained convolutional neural network and long short-term memory (LSTM) network model to obtain a pain grading assessment result for the patient. The technical scheme provided by the invention realizes a vision-based pain assessment method in which the extracted patient features comprise skeletal posture information and facial feature information.

Description

Visual identification pain grading assessment method
Technical Field
The application relates to the field of artificial intelligence, in particular to a medical image recognition method.
Background
Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage. Because pain is a subjective and complex perception, its intensity is usually assessed in a clinical setting by self-report scales. Pain assessment supports decision making about relevant medical interventions.
Such subjective methods are considered the "gold standard" for pain measurement. However, in some cases the reliability of self-reporting may be affected by a range of physiological, psychological and environmental factors. For example, many pain sufferers, when evaluated repeatedly, tend to exaggerate the severity of their pain or adopt a negative attitude toward the assessment.
In view of these defects, the inventors extract facial expression features and posture features of the patient through a deep convolutional neural network, thereby improving the accuracy of classification recognition and prediction.
Disclosure of Invention
The invention aims at assessing patient pain in a timely manner, and solves the problem of intelligently grading pain on the basis of visual data of patients. The method specifically comprises the following steps:
First, a human body image of the patient is screened out from the image information through a preset human recognition model, as sketched below.
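For illustration only (the patent does not identify the preset human recognition model), this screening step could be sketched with OpenCV's stock HOG pedestrian detector standing in for that model; the function name and parameters below are assumptions.

```python
# Hypothetical sketch of the person-screening step. OpenCV's built-in
# HOG + linear-SVM pedestrian detector is used only as a stand-in for
# the "preset human recognition model" described in the patent.
import cv2

def screen_patient_region(frame):
    """Return the largest detected person region (x, y, w, h), or None."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8))
    if len(boxes) == 0:
        return None
    # Keep the largest box, assuming the patient dominates the bedside view.
    return max(boxes, key=lambda b: b[2] * b[3])

# Usage: frame = cv2.imread("bedside_frame.jpg"); roi = screen_patient_region(frame)
```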
Then, median filtering is applied to the human body image, and human skeleton data are extracted through a preset skeleton recognition model.
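A minimal sketch of this step is given below, assuming MediaPipe Pose as a stand-in for the preset skeleton recognition model; the kernel size and landmark format are likewise assumptions, not the patented implementation.

```python
# Hypothetical sketch: median filtering followed by skeleton-keypoint extraction.
import cv2
import mediapipe as mp

def extract_skeleton(body_image_bgr):
    """Return a list of (x, y, visibility) pose landmarks, or None."""
    denoised = cv2.medianBlur(body_image_bgr, 5)  # median filtering step
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.visibility) for lm in result.pose_landmarks.landmark]
```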
Then, the patient's face is located through a preset face detection algorithm; feature information of the facial features is extracted, their relative positions on the face are determined, and the relative distances between them are calculated.
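The geometric part of this step can be sketched as follows; the landmark names and the normalization by the face box are illustrative assumptions, and any face-landmark detector could supply the input coordinates.

```python
# Hypothetical sketch of the facial-feature geometry computation: given
# detected landmark coordinates, compute positions relative to the face box
# and pairwise relative distances.
from itertools import combinations
import numpy as np

def facial_geometry(landmarks, face_box):
    """landmarks: e.g. {"left_eye": (x, y), "right_eye": (x, y), "nose": (x, y),
    "mouth": (x, y)}; face_box: (x, y, w, h) of the detected face."""
    fx, fy, fw, fh = face_box
    rel_pos = {name: ((x - fx) / fw, (y - fy) / fh)
               for name, (x, y) in landmarks.items()}
    rel_dist = {
        (a, b): float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])) / fw)
        for a, b in combinations(sorted(landmarks), 2)
    }
    return rel_pos, rel_dist
```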
Next, consecutive frame sets of the skeleton data and facial feature data are input into a convolutional neural network combined with a long short-term memory (LSTM) network, and a deep neural network model is constructed.
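As a purely illustrative sketch (the patent discloses neither layer sizes nor the exact topology), the CNN + LSTM combination might look as follows in PyTorch; the class name PainCnnLstm, the feature dimensions and the six output classes are assumptions.

```python
# Minimal PyTorch sketch of a per-frame CNN encoder followed by an LSTM
# over the frame sequence, ending in a pain-category classifier.
import torch
import torch.nn as nn

class PainCnnLstm(nn.Module):
    def __init__(self, num_classes=6, feat_dim=128, extra_dim=0):
        super().__init__()
        # Per-frame CNN encoder over the cropped patient image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal model over per-frame features, optionally concatenated
        # with skeleton / facial-distance feature vectors.
        self.lstm = nn.LSTM(feat_dim + extra_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, frames, extra=None):
        # frames: (batch, time, 3, H, W); extra: (batch, time, extra_dim) or None
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        if extra is not None:
            feats = torch.cat([feats, extra], dim=-1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # logits for the six pain categories
```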
Each set of consecutive frames is then labeled with an expression category, which serves as the expected output during model training. The preset expression categories are: no pain, mild pain, slight pain, obvious pain, severe pain, and sharp pain; these pain label categories correspond in turn to scores of 0, 1-2, 3-4, 5-6, 7-8 and 9-10 on the patient's self-reported numerical rating scale.
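The score-to-category bucketing described above can be written directly; the helper name below is illustrative.

```python
# Sketch of the label mapping: self-reported numerical rating scale (NRS)
# scores 0-10 are bucketed into the six preset pain categories.
PAIN_CLASSES = ["no pain", "mild pain", "slight pain",
                "obvious pain", "severe pain", "sharp pain"]

def nrs_to_class(nrs_score: int) -> int:
    """Map an NRS score of 0..10 to a class index 0..5 (0, 1-2, 3-4, 5-6, 7-8, 9-10)."""
    if not 0 <= nrs_score <= 10:
        raise ValueError("NRS score must be between 0 and 10")
    return 0 if nrs_score == 0 else (nrs_score + 1) // 2

# e.g. nrs_to_class(0) == 0, nrs_to_class(4) == 2, nrs_to_class(9) == 5
```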
Finally, the preprocessed images are fed to the model under training and a loss function is computed to obtain the loss value. The parameters of the deep neural network model are adjusted through a back-propagation algorithm until the deviation between the model output for an input image and the mapped value of that image's expression category falls within a preset allowable range.
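For illustration only, a training loop matching this description could be sketched as below; cross-entropy loss, the Adam optimizer and the stopping tolerance are assumptions, since the patent only states that a loss function and back-propagation are used.

```python
# Hypothetical training-loop sketch for the model defined above.
import torch
import torch.nn as nn

def train(model, loader, epochs=20, tolerance=0.05, lr=1e-4, device="cpu"):
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        running = 0.0
        for frames, extra, labels in loader:      # labels: class indices 0..5
            optimizer.zero_grad()
            logits = model(frames.to(device), extra.to(device))
            loss = criterion(logits, labels.to(device))
            loss.backward()                       # back-propagation
            optimizer.step()
            running += loss.item() * labels.size(0)
        epoch_loss = running / len(loader.dataset)
        if epoch_loss < tolerance:                # "preset allowable range"
            break
    return model
```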
According to the technical scheme, the pain level identification method comprises the steps of first extracting a target image of the patient, then extracting human skeleton feature data and position data of the patient's facial features from the image, and finally analyzing the pain level features of the target image through a pre-constructed model.
Compared with the prior art, the method extracts both posture feature data and expression feature data of the patient to construct the model, so that the patient's pain level can be evaluated more promptly and comprehensively.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below.
Fig. 1 is a block diagram of the present invention for visually recognizing pain level.
Detailed Description
The above and further features and advantages of the present invention are described in more detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention is primarily directed to performing visual pain-level assessment more accurately by combining real-time images of the patient's posture and facial expressions to dynamically assess the patient's pain.
Step S1: an image of a patient sample is acquired.
Specifically, an image containing a patient is input into a pre-constructed human image recognition model to acquire image information of the patient.
Step S2: posture information of the patient is acquired.
Specifically, posture information of the patient is extracted through a preset human posture model based on a human skeleton model.
Step S3: facial expression information of the patient is acquired.
Specifically, the image information of the patient is input into a preset facial expression extraction model, the position information of the patient's facial features is extracted, and the corresponding distances between the facial features are calculated.
Step S4: the information was subjected to pain assessment.
Specifically, dynamic patient posture data and facial feature data are input into a preset model, and the pain state of the patient is output.
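A minimal inference sketch for this step is given below; the names PainCnnLstm and PAIN_CLASSES refer to the illustrative sketches earlier in this description, not to the patented implementation itself, and the sliding-window shapes are assumptions.

```python
# Hypothetical inference sketch for Step S4: a window of frames plus the
# concatenated skeleton / facial-distance features is pushed through the
# trained model and the predicted pain category index is returned.
import torch

@torch.no_grad()
def assess_pain(model, frame_window, feature_window):
    """frame_window: (T, 3, H, W) tensor; feature_window: (T, D) tensor."""
    model.eval()
    logits = model(frame_window.unsqueeze(0), feature_window.unsqueeze(0))
    idx = int(logits.argmax(dim=-1))
    return idx   # index into the six pain categories, e.g. PAIN_CLASSES[idx]
```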
It should be noted that, in the video of a patient to be evaluated, a hospitalized patient usually lies supine on the bed, changes in distance and position are relatively small, and occlusion rarely occurs. For this reason, a preset face tracking algorithm is combined with a preset face detection algorithm to accurately detect the posture features and facial features of the patient from the target image, thereby improving detection performance.
The target image acquisition process is influenced by factors such as lighting, the video acquisition equipment and the patient's angle, all of which interfere with feature extraction. Therefore, the posture image and the face image are acquired from the target image and preprocessed, and the skeletal posture and facial feature information obtained after preprocessing are used as the target processing data.
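One possible preprocessing sketch is shown below; the resize target and CLAHE lighting normalization are assumptions for illustration, since the patent only states that the images are preprocessed to reduce interference from lighting, equipment and viewing angle.

```python
# Hypothetical preprocessing sketch: resize the posture/face crop,
# normalize lighting on the luminance channel, and scale intensities.
import cv2
import numpy as np

def preprocess_crop(crop_bgr, size=(224, 224)):
    resized = cv2.resize(crop_bgr, size)
    # CLAHE on the L channel softens lighting differences between frames.
    lab = cv2.cvtColor(resized, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    normalized = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    return normalized.astype(np.float32) / 255.0
```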
The skeletal posture features may include position information of the patient's head, shoulders, upper limbs, palms, chest, abdomen, knees and soles. The facial feature information may include the relative positions of and relative distances between the facial features.
The above examples are only for illustrating the technical idea and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes or modifications made according to the technical solutions and concepts described above in the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (5)

1. A visual-recognition pain grading assessment method, comprising:
acquiring a sample image of a patient, wherein the sample image comprises a posture image and a face image of the patient;
extracting corresponding features from the acquired image information; and generating, through a pain evaluation model, a pain grading recognition result corresponding to the image to be recognized.
2. The method of claim 1, wherein the posture image of the patient is processed by a skeleton model to predict the locations of the corresponding human skeleton points in the image.
3. The method of claim 1, wherein the face image of the patient is processed by a predefined facial-feature model to generate position and relative distance data of the corresponding facial features in the image.
4. The method of claim 1, wherein the pain grading model is a deep neural network model constructed from a convolutional neural network and a long short-term memory neural network.
5. The method of claim 4, wherein the parameters of the deep neural network model are adjusted by a back-propagation algorithm.
CN202110609079.8A 2021-06-01 2021-06-01 Visual identification pain grading assessment method Pending CN113570545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110609079.8A CN113570545A (en) 2021-06-01 2021-06-01 Visual identification pain grading assessment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110609079.8A CN113570545A (en) 2021-06-01 2021-06-01 Visual identification pain grading assessment method

Publications (1)

Publication Number Publication Date
CN113570545A true CN113570545A (en) 2021-10-29

Family

ID=78160982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110609079.8A Pending CN113570545A (en) 2021-06-01 2021-06-01 Visual identification pain grading assessment method

Country Status (1)

Country Link
CN (1) CN113570545A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116138733A (en) * 2022-09-01 2023-05-23 上海市第四人民医院 Visual pain grade scoring method and application
CN116138733B (en) * 2022-09-01 2023-12-26 上海市第四人民医院 Visual pain grade scoring method and application

Similar Documents

Publication Publication Date Title
CN106682616B (en) Method for recognizing neonatal pain expression based on two-channel feature deep learning
JP6947759B2 (en) Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects
CN107007257B (en) The automatic measure grading method and apparatus of the unnatural degree of face
CN112734757B (en) Spine X-ray image cobb angle measuring method
US11663845B2 (en) Method and apparatus for privacy protected assessment of movement disorder video recordings
CN114261095B (en) AI-based orthopedic 3D printing method and device
CN110427987A (en) A kind of the plantar pressure characteristic recognition method and system of arthritic
US11980491B2 (en) Automatic recognition method for measurement point in cephalo image
CN114565957A (en) Consciousness assessment method and system based on micro expression recognition
CN114220543B (en) Body and mind pain index evaluation method and system for tumor patient
Gaber et al. Automated grading of facial paralysis using the Kinect v2: a proof of concept study
CN114305418A (en) Data acquisition system and method for depression state intelligent evaluation
CN116128814A (en) Standardized acquisition method and related device for tongue diagnosis image
CN111062936A (en) Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN113570545A (en) Visual identification pain grading assessment method
CN114176616A (en) Venous thrombosis detection method, electronic device and storage medium
CN114240934B (en) Image data analysis method and system based on acromegaly
Zaki et al. Smart medical chatbot with integrated contactless vital sign monitor
CN115273176A (en) Pain multi-algorithm objective assessment method based on vital signs and expressions
CN113425298A (en) Method for analyzing depression degree by collecting data through wearable equipment
CN114569116A (en) Three-channel image and transfer learning-based ballistocardiogram ventricular fibrillation auxiliary diagnosis system
CN114155191A (en) Artificial intelligence system for judging growth and development of dentognathic face through cervical vertebra image
Singh Susaiyah et al. Classification of indirect immunofluorescence images using thresholded local binary count features
Wang et al. A pilot study on the performance of time-domain features in speech recognition based on high-density sEMG
Bhaskar et al. A survey on early detection and prediction of lung cancer

Legal Events

Date Code Title Description
PB01 Publication