CN113116299A - Pain level evaluation method, pain level evaluation device, equipment and storage medium

Info

Publication number: CN113116299A
Application number: CN202110246983.7A
Authority: CN (China)
Prior art keywords: processing, neural network, sound, artificial neural, feature vector
Legal status: Granted; active
Other languages: Chinese (zh)
Other versions: CN113116299B (en)
Inventors: 刘志强, 杜唯佳, 周霆, 阮宏洋
Assignees: Shanghai Xiaopeng Technology Co ltd; Shanghai First Maternity and Infant Hospital
Application filed by: Shanghai Xiaopeng Technology Co ltd; Shanghai First Maternity and Infant Hospital
Priority and filing date: 2021-03-05
Publication date (CN113116299A): 2021-07-16
Grant publication date (CN113116299B): 2023-05-09


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4812 Detecting sleep stages or cycles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a pain level evaluation method, a pain level evaluation device, equipment, and a storage medium. The method comprises the following steps: acquiring a facial image, oral and nasal sounds, action parameters, and physiological parameters of a target person; extracting an expression feature vector from the facial image and a sound feature vector from the oral and nasal sounds with artificial neural networks; processing the sum of the expression feature vector and the sound feature vector with a fully connected layer and flattening the processing result into one dimension to obtain a first feature vector; performing label numeralization processing on the action parameters and the physiological parameters and appending the processed data to the first feature vector to obtain a second feature vector; and classifying the second feature vector to obtain an evaluation result. The disclosed method fuses parameters of multiple dimensions (expression features, sound features, action parameters, and physiological parameters) and evaluates the pain level by processing the fused parameters with artificial neural networks, so the evaluation result is objective and accurate.

Description

Pain level evaluation method, pain level evaluation device, equipment and storage medium
Technical Field
The application relates to the technical field of intelligent medical treatment, and in particular to a pain level evaluation method, a pain level evaluation device, equipment, and a storage medium.
Background
Pain is the fifth vital sign after respiration, pulse, body temperature, and blood pressure, and is an important indicator for a doctor when diagnosing a condition. With the development of society, pressure from life, work, and study has gradually increased, leaving many people in a sub-health state in which the body produces various pain signals; pain can in turn lead to depression, suicide, and other consequences.
The most common international tools for the quantitative assessment of pain are self-rating scales. Single-dimension scales, such as the pain Numerical Rating Scale (NRS) and the Visual Analog Scale (VAS), allow patients to express subjective pain perception through numbers, letters, images, and the like, but they evaluate only the intensity of pain. Moreover, individual perception of pain varies: one study reported 95% confidence intervals for moderate and severe pain on the VAS of 15-83 mm and 30-100 mm, respectively. Both the NRS and VAS scales also exhibit a ceiling effect; as labor progresses, the nature and intensity of pain change constantly, and it is difficult for a parturient to distinguish precisely between "severe pain" and "intolerable pain". In addition, the assessment of pain level is influenced by factors such as the patient's cultural level, emotion, sex, and race. Existing pain evaluation tools for pathological, inflammatory, and cancerous pain are subjective rating scales whose results lack objectivity; they have great limitations and low reference value and cannot provide a sufficient reference for medical workers.
Disclosure of Invention
An object of the present application is to provide a pain level evaluation method, a pain level evaluation apparatus, a device, and a storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
According to an aspect of an embodiment of the present application, there is provided a pain level assessment method including:
acquiring a facial image, oral and nasal sounds, action parameters and physiological parameters of a target person;
extracting expression feature vectors of the facial image by using a first artificial neural network, and extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network;
processing the sum of the expression feature vector and the sound feature vector by using a full connection layer, and performing one-dimensional processing on the obtained processing result to obtain a first feature vector;
performing label numeralization processing on the action parameters and the physiological parameters;
adding the data obtained through the label numeralization processing into the first feature vector to obtain a second feature vector;
and classifying the second feature vector to obtain an evaluation result.
Further, the extracting, by using a first artificial neural network, an expression feature vector of the facial image includes:
carrying out face alignment on the face image, and positioning face feature points;
and inputting the facial feature points into the first artificial neural network for processing to obtain the expression feature vector.
Further, the extracting, by using a second artificial neural network, the sound feature vector in the oronasal sound includes:
denoising the oral and nasal sound;
processing the denoised oral and nasal sounds to obtain a spectrogram;
and inputting the spectrogram into the second artificial neural network model for processing to obtain the sound characteristic vector.
Further, the processing of the denoised oral and nasal sounds comprises: sequentially performing Hanning windowing, a short-time Fourier transform, and a logarithm operation on the denoised oral and nasal sounds.
Further, the motion parameter includes at least one of a grasping force, a muscle tremor degree, and a limb writhing degree, and the physiological parameter includes at least one of a blood pressure, a heart rate, and a breathing rate.
Further, the first artificial neural network comprises a plurality of convolution layers which are connected in sequence.
Further, the second artificial neural network includes a plurality of layer combinations connected in sequence, each of the layer combinations including two convolutional layers and one pooling layer connected in sequence.
According to another aspect of the embodiments of the present application, there is provided a pain level evaluation device including:
the acquisition module is used for acquiring a facial image, oral and nasal sounds, action parameters and physiological parameters of a target person;
the extraction module is used for extracting expression feature vectors of the facial image by using a first artificial neural network and extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network;
the full-connection and one-dimensional module is used for processing the sum of the expression characteristic vector and the sound characteristic vector by using a full-connection layer and performing one-dimensional processing on the obtained processing result to obtain a first characteristic vector;
the label numeralization processing module is used for carrying out label numeralization processing on the action parameters and the physiological parameters;
the adding module is used for adding the data obtained through the label numeralization processing into the first feature vector to obtain a second feature vector;
and the classification module is used for classifying the second feature vector to obtain an evaluation result.
According to another aspect of the embodiments of the present application, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the pain level assessment method described above.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program, which is executed by a processor, to implement the above-described pain level assessment method.
The technical scheme provided by one aspect of the embodiment of the application can have the following beneficial effects:
the pain degree evaluation method provided by the embodiment of the application fuses the parameters of multiple dimensions of the expression characteristics, the sound characteristics, the action parameters and the physiological parameters, utilizes the artificial neural network to process the fused parameters of multiple dimensions to evaluate the pain degree, and is strong in objectivity of evaluation results, high in accuracy and high in reference value.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application, or may be learned by the practice of the embodiments. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic representation of the type of parameters characterizing labor pain;
FIG. 2 shows a flowchart of a pain level assessment method according to an embodiment of the present application;
FIG. 3 shows a flow chart of step S20 in the embodiment shown in FIG. 2;
FIG. 4 shows a flowchart of step S30 in the embodiment shown in FIG. 2;
FIG. 5 shows a schematic flow chart of a pain level assessment method according to another embodiment of the present application;
FIG. 6 is a diagram illustrating a label digitization process in the embodiment of FIG. 2;
fig. 7 is a block diagram showing a configuration of a pain level evaluating apparatus according to an embodiment of the present application;
fig. 8 shows a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Pain is the fifth vital sign after respiration, pulse, body temperature, and blood pressure; it is an important index for doctors when diagnosing a condition, and the evaluation of pain is an important reference when formulating a treatment plan. Taking labor pain as an example, the types of parameters characterizing pain during labor can be classified into emotion, motion, physiology, and so on, as shown in fig. 1. The evaluation of pain should therefore combine parameters of multiple dimensions to improve its accuracy.
As shown in fig. 2, one embodiment of the present application provides a pain level assessment method, including the steps of:
and S10, acquiring the facial image, the mouth and nose sound, the action parameters and the physiological parameters of the target person.
The target person may be, for example, a parturient. The method includes the steps of performing video shooting aiming at the face of a target person in an application scene (such as a delivery room), and shooting the face of a parturient with a camera in the delivery room to obtain a video. And automatically intercepting an image frame containing the face of the target person from the shot video to obtain a face image.
In the interaction process of the target person and the intelligent device, a camera of the device shoots videos of the part above the neck of the target person in real time, and facial images are extracted so as to collect emotion parameter characteristics of the target person, wherein the emotion parameter characteristics include but are not limited to expression, muscle and skin color.
The medical device automatically acquires various physiological parameters including, but not limited to, blood pressure, heart rate, and respiratory rate at the same time. The smart device collects various behavioral parameters of the target person automatically, including mouth-nose sounds (the mouth-nose sounds are sounds emitted by the mouth and nose, including roaring, groaning, and the like, sounds emitted by the mouth of the target person, breathing sounds emitted by the nose of the target person, and the like), actions, and the like.
The mouth and nose sounds are obtained through recording in an application scene through the mobile recording equipment. The motion parameter may include at least one of a grip strength, a muscle tremor degree, and a limb writhing degree. The grip strength can be collected by a pressure grip meter. The muscle tremor degree and the limb wriggle degree can be acquired by sensors which are attached to the surface of the skin of the target person and are used for sensing the muscle tremor degree and the limb wriggle degree respectively. The physiological parameter includes at least one of blood pressure, heart rate, and respiratory rate.
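As an illustration only, the following sketch shows how facial image frames might be captured and cropped from the room camera; the patent does not specify the implementation, and the OpenCV pipeline, camera index, and Haar cascade face detector used here are assumptions.

```python
import cv2

# Hypothetical acquisition sketch: grab frames from a camera and crop faces.
cap = cv2.VideoCapture(0)  # assumed index for the delivery-room camera
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

faces = []
while len(faces) < 16:                 # stop after collecting 16 face crops
    ok, frame = cap.read()
    if not ok:                         # camera closed or stream ended
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        faces.append(frame[y:y + h, x:x + w])   # facial image for step S20
cap.release()
```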
S20, extracting the expression feature vector of the facial image by using the first artificial neural network model.
In some embodiments, the first artificial neural network includes a number of convolutional layers connected in series.
The purpose of this step is to compute a multi-dimensional pain-expression feature vector F1 from the facial expression in the facial image: the collected facial image is passed to the first artificial neural network model (an AI model), and the expression feature vector is computed by an AI algorithm (involving pre-learning/training of the AI algorithm, deep neural networks, convolutional neural networks, and so on). The first artificial neural network model is a trained neural network model composed of a plurality of convolutional layers and the like.
As shown in fig. 3, in some embodiments, step S20 may include the following steps:
s201, carrying out face alignment on the face image, and positioning face feature points.
Key facial feature points, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of each facial part, are automatically located in the input facial image. The face alignment method adopted here is a cascaded linear regression model: the face alignment problem can be viewed as learning a regression function Y that takes an image I as input and outputs the feature point positions (the face shape) θ:
θ=Y(I)
the cascaded regression models can be unified into the following framework:
Multiple regression functions {f1, f2, …, fn} are learned to approximate the function Y:

θ = Y(I) = fn(…f2(f1(θ0, I), I)…, I)

where θ0 is the initial feature point configuration and each fi regresses the difference between the current feature point positions and the true positions.
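To make the cascade concrete, the toy sketch below implements θ = fn(…f2(f1(θ0, I), I)…, I); the pixel-sampling feature extractor and the pre-learned stage regressors (W, b) are hypothetical stand-ins, since the patent does not detail them.

```python
import numpy as np

def sample_features(image, theta):
    """Placeholder feature extractor: image intensities at the current points."""
    pts = np.clip(theta.astype(int), 0, min(image.shape) - 1)
    return image[pts[:, 1], pts[:, 0]].astype(float)

def align(image, theta0, stages):
    """Cascaded linear regression: theta = fn(...f2(f1(theta0, I), I)..., I)."""
    theta = theta0.copy()
    for W, b in stages:                                 # each stage learned offline
        delta = W @ sample_features(image, theta) + b   # predicted residual
        theta = theta + delta.reshape(theta.shape)
    return theta

# Usage with random stand-ins: 68 feature points, 3 cascade stages.
image = np.random.rand(224, 224)
theta0 = np.full((68, 2), 112.0)                   # mean shape as initialization
stages = [(0.01 * np.random.randn(136, 68), np.zeros(136)) for _ in range(3)]
theta = align(image, theta0, stages)
```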
S202, inputting the facial feature points into the first artificial neural network model for processing to obtain expression feature vectors.
For example, several layers of separable convolutions and 1 × 1 convolutions may be applied to the facial feature points to obtain the expression feature vector F1, of dimension 7 × 7 × 1024, i.e., 1024 feature maps of size 7 × 7.
In some embodiments, the structure of a particular first artificial neural network model is shown in table 1.
TABLE 1 detailed Structure and parameters of the first artificial neural network model
(Table 1 is available only as an image in the original publication and is not reproduced here.)
dw denotes depthwise convolution, also known as depthwise separable convolution.
pw denotes pointwise convolution, i.e., 1 × 1 convolution.
As shown in table 1, the first artificial neural network model in this embodiment includes a total of 17 convolutional layers.
In some embodiments, the expression feature vector may include parameters such as muscle distortion degree, skin color depth, and the like, and the muscle distortion degree may be characterized according to the degree of deviation of the facial feature points from the normal position.
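As a rough illustration of such a network, the sketch below stacks depthwise (dw) and pointwise (pw, 1 × 1) convolutions down to 1024 feature maps of size 7 × 7. Because Table 1 survives only as an image, the layer widths, strides, and input size here are assumptions; the sketch uses fewer layers than the 17 listed in Table 1 for brevity, and takes an aligned face crop as input rather than the raw feature points.

```python
import torch.nn as nn

def dw_pw(c_in, c_out, stride=1):
    """One depthwise-separable unit: dw 3x3 conv followed by pw 1x1 conv."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in),  # dw
        nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, 1),                                        # pw
        nn.ReLU(inplace=True),
    )

first_ann = nn.Sequential(            # assumed input: 3 x 224 x 224 face crop
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    dw_pw(32, 64),
    dw_pw(64, 128, stride=2),
    dw_pw(128, 256, stride=2),
    dw_pw(256, 512, stride=2),
    dw_pw(512, 1024, stride=2),       # output: 1024 x 7 x 7, i.e., F1
)
```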
S30, extracting the sound feature vector from the oral and nasal sounds by using the second artificial neural network model.
In some embodiments, the second artificial neural network comprises a plurality of layer combinations connected in series, each of the layer combinations comprising two convolutional layers and one pooling layer connected in series.
The purpose of this step is to obtain a multi-dimensional pain-related sound feature vector F2 from the target person's oral and nasal sounds: the various oral and nasal sounds emitted by the target person are input into the second artificial neural network model (an AI oral-nasal sound recognition model), and the sound feature vector is computed by the model (involving pre-learning/training of the AI algorithm, deep networks, convolutional neural networks, and so on). The second artificial neural network model is a trained neural network model composed of a plurality of convolutional layers, pooling layers, and the like.
In certain embodiments, as shown in fig. 4, step S30 includes:
s301, denoising the collected oral and nasal sound.
Denoising the sound collected in an application scene (such as a delivery room), eliminating the environmental noise, and only keeping the mouth and nose sound emitted by the target personnel.
S302, processing the denoised oral and nasal sound to obtain a spectrogram.
Specifically, as shown in fig. 5, a Hanning window, a short-time Fourier transform (STFT), and a logarithm operation are applied in sequence to the denoised oral and nasal sounds to obtain a spectrogram.
The Hanning window function is

w[n] = 0.5 (1 − cos(2πn / (N − 1))), 0 ≤ n ≤ N − 1,

where N is the length of the window and w[n] is the Hanning window function.

Short-time Fourier transform (STFT) formula:

X[k, l] = Σ (from n = 0 to N − 1) w[n] · x[n + lL] · e^(−j2πnk/K), k = 0, 1, …, K − 1,

where l is the frame index, L is the frame shift, x[n + lL] is the (n + lL)-th sample of the time-domain waveform, K is the number of frequency points of the short-time Fourier transform, and k is the frequency index.

Spectrogram formula:

y[k, l] = 2 log(|X[k, l]|)
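A minimal sketch of this processing chain, assuming a 16 kHz signal and illustrative frame length N and frame shift L:

```python
import numpy as np

def log_spectrogram(x, N=512, L=256):
    """Hanning window -> short-time Fourier transform -> logarithm."""
    n = np.arange(N)
    w = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))        # Hanning window w[n]
    frames = np.stack([x[l * L:l * L + N] * w
                       for l in range((len(x) - N) // L + 1)])
    X = np.fft.rfft(frames, axis=1)                        # STFT with K = N
    return (2 * np.log(np.abs(X) + 1e-10)).T               # y[k,l] = 2 log|X[k,l]|

x = np.random.randn(16000)          # stand-in for one second of denoised sound
spectrogram = log_spectrogram(x)    # shape: frequency bins x frames
```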
and S303, inputting the spectrogram into a second artificial neural network model for processing to obtain a sound characteristic vector.
The second artificial neural network model is divided into different oral and nasal sound volume levels according to the volume of the oral and nasal sound of the target person or the characteristics of sound categories (such as crying, tragic and screaming).
The spectrogram is subjected to multilayer convolution and pooling, and because the model needs to extract sounds such as crying, screaming, wheezing and the like instead of a specific voice recognition task, the purpose can be achieved by adopting a simple convolutional neural network, and time can be saved. In some embodiments, the second artificial neural network model structure is shown in table 2. Finally obtaining the sound characteristic vector F2Dimension 7 × 1024, representing features in the form of 1024 features 7 × 7.
TABLE 2 concrete Structure and parameters of the second artificial neural network model
(Table 2 is available only as an image in the original publication and is not reproduced here.)
Max pooling stands for maximum pooling layer.
As shown in table 2, the second artificial neural network model of this embodiment includes 15 layers in total, which are respectively Conv1a, Conv1b, Max pooling1, Conv2a, Conv2b, Max pooling2, Conv3a, Conv3b, Max pooling3, Conv4a, Conv4b, Max pooling4, Conv5a, Conv5b, and Max pooling5 connected in this order. Wherein Conv1a, Conv1b, Conv2a, Conv2b, Conv3a, Conv3b, Conv4a, Conv4b, Conv5a and Conv5b are convolutional layers, and Max pooling1, Max pooling2, Max pooling3, Max pooling4 and Max pooling5 are Max pooling layers. In the second artificial neural network model, one maximum pooling layer is set after every two convolutional layers.
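The sketch below mirrors that structure, two convolutional layers followed by one max pooling layer, repeated five times; the channel widths, kernel sizes, and 224 × 224 input resolution are assumptions, since Table 2 is available only as an image.

```python
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions followed by one max pooling layer."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

second_ann = nn.Sequential(   # assumed input: 1 x 224 x 224 spectrogram
    block(1, 64),             # Conv1a, Conv1b, Max pooling1
    block(64, 128),           # Conv2a, Conv2b, Max pooling2
    block(128, 256),          # Conv3a, Conv3b, Max pooling3
    block(256, 512),          # Conv4a, Conv4b, Max pooling4
    block(512, 1024),         # Conv5a, Conv5b, Max pooling5 -> 1024 x 7 x 7 = F2
)
```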
S40, processing the sum of the expression feature vector and the sound feature vector by using a fully connected layer, and performing one-dimensional processing on the obtained result to obtain a first feature vector.
Specifically, the sum of the expression feature vector and the sound feature vector is sequentially input into the full-connection layer and the Flatten layer for processing, and a one-dimensional first feature vector is obtained.
The purpose of this step is to sum the feature vectors F1 and F2 obtained in steps S20 and S30, input the sum into a fully connected layer, and feed the result produced by the fully connected layer into a Flatten layer to obtain a one-dimensional feature vector of dimension n × 1. This one-dimensional feature vector can be represented as (a1, a2, …, an), as shown in fig. 5.
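A minimal sketch of this fusion step; applying the fully connected layer position-wise over the 7 × 7 grid, and its output width of 256, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

fc = nn.Linear(1024, 256)            # fully connected layer (assumed width)
flatten = nn.Flatten(start_dim=0)    # Flatten layer: one-dimensional output

F1 = torch.randn(7, 7, 1024)         # stand-in expression feature vector
F2 = torch.randn(7, 7, 1024)         # stand-in sound feature vector

first_feature = flatten(fc(F1 + F2)) # (a1, ..., an) with n = 7 * 7 * 256
```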
S50, performing label numeralization processing on the collected action parameters and physiological parameters.
The action parameters (such as grip strength, degree of muscle tremor, and degree of limb writhing) and the physiological parameters (blood pressure, heart rate, respiratory rate) are subjected to label numeralization processing: each specific parameter is represented by a numerical value bi, where i denotes the i-th parameter, yielding b1, b2, b3, …, as shown in fig. 6.
S60, adding the data b1, b2, b3, … obtained through the label numeralization processing to the first feature vector to obtain a second feature vector. The second feature vector is an (n + m) × 1-dimensional vector.
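A minimal sketch of steps S50 and S60; the threshold bins that map each measurement to its label bi are illustrative assumptions, as the patent does not specify the encoding.

```python
import numpy as np

def numeralize(value, thresholds):
    """Map a raw measurement to an integer label by threshold binning."""
    return int(np.searchsorted(thresholds, value))

grip_strength, heart_rate, respiratory_rate = 32.0, 105.0, 22.0
b = [
    numeralize(grip_strength, [10, 20, 30, 40]),     # b1: grip strength (kg)
    numeralize(heart_rate, [60, 80, 100, 120]),      # b2: heart rate (bpm)
    numeralize(respiratory_rate, [12, 16, 20, 24]),  # b3: breaths per minute
]

first_feature = np.random.randn(12544)               # stand-in for (a1, ..., an)
second_feature = np.concatenate([first_feature, b])  # (n + m) x 1 vector
```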
S70, classifying the second feature vector to obtain an evaluation result.
Specifically, the second feature vector is input into a trained machine learning classifier for classification, and an evaluation result is obtained.
The second feature vector (an (n + m) × 1-dimensional vector) is input into a machine learning classifier trained in advance and classified to obtain the evaluation result, as shown in fig. 5. The evaluation result may be a level selected from preset pain levels.
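The patent does not name the classifier, so the sketch below uses a random forest on synthetic data purely as a stand-in, with four assumed pain levels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))     # stand-in second feature vectors
y_train = rng.integers(0, 4, size=200)   # assumed preset pain levels 0..3

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

second_feature = rng.normal(size=(1, 64))    # one (n + m)-dimensional vector
pain_level = clf.predict(second_feature)[0]  # evaluation result
```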
The pain level evaluation method provided by the embodiments of the application fuses parameters of multiple dimensions (expression features, sound features, action parameters, and physiological parameters) and evaluates the pain level by processing the fused parameters with artificial neural networks. The evaluation result is objective, accurate, and of high reference value, and can therefore serve as a strong reference for medical workers.
As shown in fig. 7, another embodiment of the present application provides a pain level evaluation device including:
the acquisition module is used for acquiring a facial image, oral and nasal sounds, action parameters and physiological parameters of a target person;
the extraction module is used for extracting expression feature vectors of the facial image by using a first artificial neural network and extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network;
the full-connection and one-dimensional module is used for processing the sum of the expression characteristic vector and the sound characteristic vector by using a full-connection layer and performing one-dimensional processing on the obtained processing result to obtain a first characteristic vector;
the label numeralization processing module is used for carrying out label numeralization processing on the action parameters and the physiological parameters;
the adding module is used for adding the data obtained through the label numeralization processing into the first feature vector to obtain a second feature vector;
and the classification module is used for classifying the second feature vector to obtain an evaluation result.
The extraction module comprises a first sub-module and a second sub-module, the first sub-module is used for extracting expression feature vectors of the facial image by using a first artificial neural network, and the second sub-module is used for extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network.
Specifically, the first sub-module includes:
the positioning unit is used for carrying out face alignment on the face image and positioning facial feature points;
and the extraction unit is used for inputting the facial feature points into the first artificial neural network for processing to obtain the expression feature vector.
The second sub-module includes:
the denoising unit is used for denoising the oral and nasal sound;
the speech spectrogram acquiring unit is used for processing the denoised oral and nasal sounds to obtain a speech spectrogram;
and the extraction unit is used for inputting the spectrogram into the second artificial neural network model for processing to obtain the sound characteristic vector.
The spectrogram acquiring unit is specifically configured to sequentially perform Hanning windowing, a short-time Fourier transform, and a logarithm operation on the denoised oral and nasal sounds.
Another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the pain level assessment method according to any of the above embodiments. As shown in fig. 8, in some embodiments, the electronic device 10 may include: the system comprises a processor 100, a memory 101, a bus 102 and a communication interface 103, wherein the processor 100, the communication interface 103 and the memory 101 are connected through the bus 102; the memory 101 stores a computer program that can be executed on the processor 100, and the processor 100 executes the computer program to perform the method provided by any of the foregoing embodiments of the present application.
The Memory 101 may include high-speed Random Access Memory (RAM) and may also include non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless) and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
The bus 102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 101 is used for storing a program, and the processor 100 executes the program after receiving an execution instruction, and the method disclosed in any of the foregoing embodiments of the present application may be applied to the processor 100, or implemented by the processor 100.
Processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 100. The Processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 101; the processor 100 reads the information in the memory 101 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the application and the method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic device.
Another embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program being executed by a processor to implement the pain level assessment method of any of the above embodiments.
Compared with the prior art, the pain level evaluation method provided by the embodiments of the application breaks through the limitations of traditional subjective pain evaluation tools and eliminates the influence of individual factors (such as psychological emotion, cognitive level, social environment, and cultural education) on the evaluation result, truly realizing individualized pain level evaluation. The prediction model constructed with AI and machine learning algorithms has high accuracy and strong predictive capability, and the method integrates multiple pain-related factors, which enhances its reliability and generalization capability. Its clinical value lies in the fact that the result of individualized pain level evaluation has strong reference value: medical personnel can refer to it to select an appropriate analgesic technique.
It should be noted that:
the term "module" is not intended to be limited to a particular physical form. Depending on the particular application, a module may be implemented as hardware, firmware, software, and/or combinations thereof. Furthermore, different modules may share common components or even be implemented by the same component. There may or may not be clear boundaries between the various modules.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, there is no strict restriction on their order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose order of execution is not necessarily sequential: they may be performed in turns or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The above-mentioned embodiments only express the embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method for assessing the level of pain, comprising:
acquiring a facial image, oral and nasal sounds, action parameters and physiological parameters of a target person;
extracting expression feature vectors of the facial image by using a first artificial neural network, and extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network;
processing the sum of the expression feature vector and the sound feature vector by using a full connection layer, and performing one-dimensional processing on the obtained processing result to obtain a first feature vector;
performing label numeralization processing on the action parameters and the physiological parameters;
adding the data obtained through the label numeralization processing into the first feature vector to obtain a second feature vector;
and classifying the second feature vector to obtain an evaluation result.
2. The method of assessing the level of pain according to claim 1, wherein said extracting the expression feature vector of the facial image using the first artificial neural network comprises:
carrying out face alignment on the face image, and positioning face feature points;
and inputting the facial feature points into the first artificial neural network for processing to obtain the expression feature vector.
3. The method for assessing the level of pain according to claim 1, wherein said extracting the sound feature vectors in the oronasal sounds using the second artificial neural network comprises:
denoising the oral and nasal sound;
processing the denoised oral and nasal sounds to obtain a spectrogram;
and inputting the spectrogram into the second artificial neural network model for processing to obtain the sound characteristic vector.
4. The method of claim 3, wherein the processing the denoised oronasal sounds comprises: and sequentially carrying out Hanning window adding, short-time Fourier transform and logarithm taking operation on the denoised oral and nasal sound.
5. The pain level assessment method according to claim 1, wherein the action parameter comprises at least one of a grasping force, a muscle tremor degree and a limb writhing degree, and the physiological parameter comprises at least one of a blood pressure, a heart rate and a breathing rate.
6. The method of claim 1, wherein the first artificial neural network comprises a plurality of convolutional layers connected in series.
7. The method of claim 1, wherein the second artificial neural network comprises a plurality of layer combinations connected in series, each of the layer combinations comprising two convolutional layers and one pooling layer connected in series.
8. A pain level assessment device, comprising:
the acquisition module is used for acquiring a facial image, oral and nasal sounds, action parameters and physiological parameters of a target person;
the extraction module is used for extracting expression feature vectors of the facial image by using a first artificial neural network and extracting sound feature vectors in the mouth-nose sound by using a second artificial neural network;
the full-connection and one-dimensional module is used for processing the sum of the expression characteristic vector and the sound characteristic vector by using a full-connection layer and performing one-dimensional processing on the obtained processing result to obtain a first characteristic vector;
the label numeralization processing module is used for carrying out label numeralization processing on the action parameters and the physiological parameters;
the adding module is used for adding the data obtained through the label numeralization processing into the first feature vector to obtain a second feature vector;
and the classification module is used for classifying the second feature vector to obtain an evaluation result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-8.
CN202110246983.7A 2021-03-05 2021-03-05 Pain degree evaluation method, pain degree evaluation device, apparatus, and storage medium Active CN113116299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110246983.7A CN113116299B (en) 2021-03-05 2021-03-05 Pain degree evaluation method, pain degree evaluation device, apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110246983.7A CN113116299B (en) 2021-03-05 2021-03-05 Pain degree evaluation method, pain degree evaluation device, apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN113116299A 2021-07-16
CN113116299B (en) 2023-05-09

Family

ID=76772693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110246983.7A Active CN113116299B (en) 2021-03-05 2021-03-05 Pain degree evaluation method, pain degree evaluation device, apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN113116299B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210052215A1 (en) * 2015-06-30 2021-02-25 University Of South Florida System and method for multimodal spatiotemporal pain assessment
CN106778657A (en) * 2016-12-28 2017-05-31 南京邮电大学 Neonatal pain expression classification method based on convolutional neural networks
CN108388890A (en) * 2018-03-26 2018-08-10 南京邮电大学 A kind of neonatal pain degree assessment method and system based on human facial expression recognition
US20190313966A1 (en) * 2018-04-11 2019-10-17 Somniferum Labs LLC Pain level determination method, apparatus, and system
US20190320974A1 (en) * 2018-04-19 2019-10-24 University Of South Florida Comprehensive and context-sensitive neonatal pain assessment system and methods using multiple modalities
CN111276159A (en) * 2018-12-05 2020-06-12 阿里健康信息技术有限公司 Infant pronunciation analysis method and server
CN110298241A (en) * 2019-05-21 2019-10-01 江苏爱朋医疗科技股份有限公司 Pain information processing method, device, equipment and storage medium
CN110338777A (en) * 2019-06-27 2019-10-18 嘉兴深拓科技有限公司 Merge the pain Assessment method of heart rate variability feature and facial expression feature

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
支瑞聪; 周才霞: "疼痛自动识别综述" [A survey of automatic pain recognition], 计算机系统应用 [Computer Systems & Applications]

Also Published As

Publication number Publication date
CN113116299B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
De Melo et al. A deep multiscale spatiotemporal network for assessing depression from facial dynamics
CN111524608B (en) Intelligent detection and epidemic prevention system and method
JP7189257B2 (en) Method, apparatus and computer readable storage medium for detecting specific facial syndromes
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
US20210202094A1 (en) User interface for navigating through physiological data
KR20160010414A (en) Systems, methods, and computer-readable media for identifying when a subject is likely to be affected by a medical condition
KR20130136519A (en) Diagnosis assitance system utilizing panoramic radiographs, and diagnosis assistance program utilizing panoramic radiographs
CN114787883A (en) Automatic emotion recognition method, system, computing device and computer-readable storage medium
WO2020121308A9 (en) Systems and methods for diagnosing a stroke condition
CN110464367B (en) Psychological anomaly detection method and system based on multi-channel cooperation
EP4264627A1 (en) System for determining one or more characteristics of a user based on an image of their eye using an ar/vr headset
CN114334151A (en) Method and device for evaluating human health state based on head image
US20100150405A1 (en) System and method for diagnosis of human behavior based on external body markers
WO2021250854A1 (en) Information processing device, information processing method, information processing system, and information processing program
CN111048202A (en) Intelligent traditional Chinese medicine diagnosis system and method thereof
CN112716468A (en) Non-contact heart rate measuring method and device based on three-dimensional convolution network
CN110110750B (en) Original picture classification method and device
CN113116299A (en) Pain level evaluation method, pain level evaluation device, equipment and storage medium
CN116130088A (en) Multi-mode face diagnosis method, device and related equipment
CN115758122A (en) Sleep respiratory event positioning method and device based on multi-scale convolutional neural network
CN113344911B (en) Method and device for measuring size of calculus
EP4367609A1 (en) Integrative system and method for performing medical diagnosis using artificial intelligence
CN113990490A (en) Traumatic blood loss shock patient data medical system
Luo et al. Exploring adaptive graph topologies and temporal graph networks for eeg-based depression detection
Xu et al. An auxiliary screening system for autism spectrum disorder based on emotion and attention analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant