CN107392109A - Neonatal pain expression recognition method based on a deep neural network - Google Patents

Neonatal pain expression recognition method based on a deep neural network

Info

Publication number
CN107392109A
CN107392109A (application CN201710497593.0A)
Authority
CN
China
Prior art keywords: layer, pain, convolutional, deep neural network
Prior art date: 2017-06-27
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710497593.0A
Other languages
Chinese (zh)
Inventor
卢官明 (Lu Guanming)
洪强 (Hong Qiang)
李晓南 (Li Xiaonan)
闫静杰 (Yan Jingjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2017-06-27
Filing date: 2017-06-27
Publication date: 2017-11-24
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201710497593.0A
Publication of CN107392109A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The present invention relates to a neonatal pain expression recognition method based on a deep neural network. By introducing a deep learning approach based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network into the task of neonatal pain expression recognition, the method can effectively recognize whether a neonate is calm or crying, or is in mild or severe pain caused by pain-inducing procedures. The deep neural network extracts both the temporal-domain and spatial-domain features of a video clip, breaking through the technical bottleneck of traditional hand-engineered extraction of explicit expression features and improving the recognition rate and robustness under complex conditions such as facial occlusion, non-frontal pose, and illumination variation.

Description

Neonatal pain expression recognition method based on a deep neural network
Technical field
The present invention relates to a neonatal pain expression recognition method based on a deep neural network, and belongs to the technical field of machine learning and pattern recognition.
Background art
The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage"; it is a subjective sensation. Because neonates lack the ability to describe pain, the Association adds that "the inability to communicate does not negate the possibility that an individual is experiencing pain and is in need of appropriate pain-relieving treatment." Neonates are able to perceive pain from birth, and many medical and nursing procedures can cause neonatal pain, for example punctures, injections, local infections, and surgery, as well as environmental, nursing, birth-injury, and disease factors. Such pain can provoke systemic reactions in the neonate, including respiratory effects and cardiovascular instability; it can lead to aversion to contact and feeding difficulties, affect the infant's relationship with its parents, cause lasting damage to neurons and to signal transmission in brain tissue, and even adversely affect later development.
Pain assessment is a key part of pain management: whether an analgesic intervention is needed, and how effective a treatment is, both hinge on an accurate evaluation of pain. Among conventional instruments for evaluating neonatal pain, pain-induced "facial expressions" such as frowning, eye squeezing, deepening of the nasolabial furrow, and mouth opening are regarded as the most reliable pain-monitoring indicators.
At present, however, neonatal pain is assessed manually, worldwide, by medical staff who have been specially trained in and are familiar with the assessment criteria. This raises practical problems: the assessment demands a great deal of time and effort, and the result is often influenced by subjective factors such as the assessor's experience and mood, so it cannot objectively reflect the degree of neonatal pain. Developing an objective, fast, and effective automatic neonatal pain assessment system, so that medical staff can take appropriate analgesic measures in time, is therefore of great significance and value.
Summary of the invention
The technical problem to be solved by the present invention is to provide a neonatal pain expression recognition method based on a deep neural network which, by introducing a convolutional neural network and a long short-term memory network, improves the recognition rate and robustness under complex conditions such as facial occlusion, non-frontal pose, and illumination variation.
To solve the above technical problem, the present invention adopts the following technical scheme: the present invention provides a neonatal pain expression recognition method based on a deep neural network, comprising the following steps:
Step A. Collect sample pain-grade expression videos of neonates corresponding to each preset pain-degree grade, then proceed to step B;
Step B. For each sample pain-grade expression video, clip the video into individual expression image frames, thereby obtaining one group of sample expression image frames per sample pain-grade expression video; unify the frame length T of each group of sample expression image frames and the resolution m × n of all expression image frames, then proceed to step C;
Step C. Build a convolutional neural network and a long short-term memory network, and connect the output of the convolutional neural network to the input of the long short-term memory network, so that the two networks constitute a deep neural network; then proceed to step D;
Step D. Using each group of sample expression image frames, together with its corresponding pain-degree grade, as training samples, train the constructed deep neural network to obtain the deep neural network for neonatal expression recognition; then proceed to step E;
Step E. Capture an actual expression video of a neonate, adjust its image frames, and apply the deep neural network for neonatal expression recognition to the actual expression video to obtain the corresponding pain-degree grade.
As a preferred technical solution of the present invention, the preset pain-degree grades comprise a neonatal calm state, a neonatal crying state, and a neonatal mild-pain state and a neonatal severe-pain state caused by pain-inducing procedures.
As a preferred technical solution of the present invention, in step D the constructed deep neural network is trained by the BPTT algorithm, using each group of sample expression image frames, together with its corresponding pain-degree grade, as training samples.
As a preferred technical solution of the present invention, the convolutional neural network built in step C comprises, in order from its input, a first convolutional layer, a second pooling layer, a third convolutional layer, a fourth pooling layer, a fifth convolutional layer, a sixth pooling layer, a seventh convolutional layer, and an eighth fully connected layer.
As a preferred technical solution of the present invention, the layers of the convolutional neural network, in order from its input, are as follows:
First convolutional layer: $l_1$ convolution kernels of size $k_1 \times k_1$ convolve an expression image frame of resolution $m \times n$, yielding $l_1$ feature maps of resolution $m_1 \times n_1$, where $m_1 = \mathrm{INT}\big(\frac{m-k_1}{s_1}\big)+1$, $n_1 = \mathrm{INT}\big(\frac{n-k_1}{s_1}\big)+1$, $s_1$ denotes the convolution stride of this layer, and $\mathrm{INT}(\cdot)$ denotes the integer (floor) function;
Second pooling layer: a sliding window of preset size $p_1 \times p_1$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1$ feature maps of resolution $m_2 \times n_2$, where $m_2 = \mathrm{INT}\big(\frac{m_1-p_1}{s_2}\big)+1$, $n_2 = \mathrm{INT}\big(\frac{n_1-p_1}{s_2}\big)+1$, and $s_2$ denotes the sliding stride of this layer's window;
Third convolutional layer: $l_2$ convolution kernels of size $k_2 \times k_2$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2$ feature maps of resolution $m_3 \times n_3$, where $m_3 = \mathrm{INT}\big(\frac{m_2-k_2}{s_3}\big)+1$, $n_3 = \mathrm{INT}\big(\frac{n_2-k_2}{s_3}\big)+1$, and $s_3$ denotes the convolution stride of this layer;
Fourth pooling layer: a sliding window of preset size $p_2 \times p_2$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1 \times l_2$ feature maps of resolution $m_4 \times n_4$, where $m_4 = \mathrm{INT}\big(\frac{m_3-p_2}{s_4}\big)+1$, $n_4 = \mathrm{INT}\big(\frac{n_3-p_2}{s_4}\big)+1$, and $s_4$ denotes the sliding stride of this layer's window;
Fifth convolutional layer: $l_3$ convolution kernels of size $k_3 \times k_3$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2 \times l_3$ feature maps of resolution $m_5 \times n_5$, where $m_5 = \mathrm{INT}\big(\frac{m_4-k_3}{s_5}\big)+1$, $n_5 = \mathrm{INT}\big(\frac{n_4-k_3}{s_5}\big)+1$, and $s_5$ denotes the convolution stride of this layer;
Sixth pooling layer: a sliding window of preset size $p_3 \times p_3$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1 \times l_2 \times l_3$ feature maps of resolution $m_6 \times n_6$, where $m_6 = \mathrm{INT}\big(\frac{m_5-p_3}{s_6}\big)+1$, $n_6 = \mathrm{INT}\big(\frac{n_5-p_3}{s_6}\big)+1$, and $s_6$ denotes the sliding stride of this layer's window;
Seventh convolutional layer: $l_4$ convolution kernels of size $k_4 \times k_4$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2 \times l_3 \times l_4$ feature maps of resolution $m_7 \times n_7$, where $m_7 = \mathrm{INT}\big(\frac{m_6-k_4}{s_7}\big)+1$, $n_7 = \mathrm{INT}\big(\frac{n_6-k_4}{s_7}\big)+1$, and $s_7$ denotes the convolution stride of this layer;
Eighth, fully connected layer: the $l_1 \times l_2 \times l_3 \times l_4$ feature maps of resolution $m_7 \times n_7$ output by the seventh convolutional layer are concatenated into an $(l_1 \times l_2 \times l_3 \times l_4 \times m_7 \times n_7)$-dimensional feature vector.
As a preferred technical solution of the present invention, the long short-term memory network built in step C comprises, in order from its input, a preset number of recurrent neural network layers and one classification layer; the recurrent neural network layers are connected in sequence, and the last of them is connected to the input of the classification layer.
As a preferred technical solution of the present invention, the classification layer of the long short-term memory network is a softmax classification layer.
As a preferred technical solution of the present invention, the long short-term memory network comprises, in order from its input, Ψ recurrent neural network layers and a softmax classification layer, where each recurrent neural network layer comprises T LSTM memory units and each LSTM memory unit comprises, connected in sequence, an input gate, a forget gate, a memory cell (Cell), and an output gate; the softmax classification layer classifies the expression image frames after they have been processed by the successive recurrent neural network layers, completing the recognition of the neonatal expression.
Compared with the prior art, the neonatal pain expression recognition method based on a deep neural network designed by the present invention, adopting the above technical scheme, has the following technical effect: by introducing a deep learning approach based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network into the task of neonatal pain expression recognition, it can effectively recognize whether a neonate is calm or crying, or is in mild or severe pain caused by pain-inducing procedures; the deep neural network extracts the temporal-domain and spatial-domain features of a video clip, breaking through the technical bottleneck of traditional hand-engineered extraction of explicit expression features and improving the recognition rate and robustness under complex conditions such as facial occlusion, non-frontal pose, and illumination variation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the neonatal pain expression recognition method based on a deep neural network designed by the present invention;
Fig. 2 is a structural diagram of the convolutional neural network and the long short-term memory network;
Fig. 3 is a structural diagram of an LSTM memory unit in the long short-term memory network.
Detailed description of embodiments
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The present invention extends deep learning theory to the field of expression recognition in dynamic video, adopting a deep neural network model based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network, so as to break through the technical bottleneck of hand-engineering and extracting explicit expression features in traditional expression recognition methods and to improve the recognition rate and robustness under complex conditions such as facial occlusion, non-frontal pose, and illumination variation. It thereby provides a new technical scheme for developing a computer-aided neonatal pain assessment system based on facial expression recognition, helping clinical medical staff assess the degree of neonatal pain more promptly, objectively, and accurately.
As shown in Fig. 1, the present invention designs a neonatal pain expression recognition method based on a deep neural network. In practical application, it comprises the following steps:
Step A. Collect sample pain-grade expression videos of neonates corresponding to each preset pain-degree grade, then proceed to step B. Here the preset pain-degree grades comprise a neonatal calm state, a neonatal crying state, and a neonatal mild-pain state and a neonatal severe-pain state caused by pain-inducing procedures.
Step B. For each sample pain-grade expression video, clip the video into individual expression image frames, thereby obtaining one group of sample expression image frames per sample pain-grade expression video; unify the frame length T of each group of sample expression image frames and the resolution m × n of all expression image frames, then proceed to step C.
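A minimal sketch of this clipping and unification step, assuming OpenCV, is given below; grayscale frames, a 128 × 128 resolution, and simple truncation to T frames are illustrative choices, and the face detection and normalization described later in the embodiment are omitted:

```python
import cv2

def video_to_frames(path, m=128, n=128, T=12):
    """Clip a sample expression video into T frames of unified resolution m x n."""
    cap = cv2.VideoCapture(path)
    frames = []
    ok, img = cap.read()
    while ok:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # one expression image frame
        frames.append(cv2.resize(gray, (n, m)))       # unify resolution to m x n
        ok, img = cap.read()
    cap.release()
    return frames[:T]                                 # unify frame length to T
```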
Step C. Build a convolutional neural network and a long short-term memory network. The constructed convolutional neural network comprises, in order from its input, a first convolutional layer, a second pooling layer, a third convolutional layer, a fourth pooling layer, a fifth convolutional layer, a sixth pooling layer, a seventh convolutional layer, and an eighth fully connected layer; the constructed long short-term memory network comprises, in order from its input, a preset number of recurrent neural network layers and a softmax classification layer, where the recurrent neural network layers are connected in sequence and the last of them is connected to the input of the softmax classification layer.
Then connect the output of the convolutional neural network to the input of the long short-term memory network, so that the two networks constitute the deep neural network shown in Fig. 2; then proceed to step D.
The layers of the constructed convolutional neural network, from the first convolutional layer through the eighth fully connected layer, together with the resolution formula of each layer, are exactly as enumerated in the preceding section.
The long short-term memory network comprises, in order from its input, Ψ recurrent neural network layers and a softmax classification layer. Each recurrent layer comprises T LSTM memory units and, as shown in Fig. 3, each LSTM memory unit comprises, connected in sequence, an input gate, a forget gate, a memory cell (Cell), and an output gate. The softmax classification layer classifies the expression image frames after they have passed through the successive recurrent layers, completing the recognition of the neonatal expression.
The gates of an LSTM memory unit are described in detail as follows:
Input gate: $i_t = \sigma(W_{vi} v_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)$;
Forget gate: $f_t = \sigma(W_{vf} v_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)$;
Cell: $c_t = f_t \times c_{t-1} + i_t \times \tanh(W_{vc} v_t + W_{hc} h_{t-1} + b_c)$;
Output gate: $o_t = \sigma(W_{vo} v_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)$;
Hidden unit output: $h_t = o_t \times \tanh(c_t)$.
Here $v_t$, $h_{t-1}$, and $c_{t-1}$ denote, respectively, the input data at time t, the output of the LSTM memory unit at time t−1, and the output of the cell at time t−1. $W_{vi}$ denotes the weight matrix connecting the input data of the LSTM memory unit to the input gate, and $W_{hi}$ and $W_{ci}$ denote the weight matrices connecting the hidden unit and the cell to the input gate; $W_{vf}$ denotes the weight matrix connecting the input data to the forget gate, and $W_{hf}$ and $W_{cf}$ denote the weight matrices connecting the hidden unit and the cell to the forget gate; $W_{vc}$ and $W_{hc}$ denote the weight matrices connecting the input data and the hidden unit to the cell; $W_{vo}$, $W_{ho}$, and $W_{co}$ denote the weight matrices connecting the input data, the hidden unit, and the cell to the output gate. $b_i$, $b_f$, $b_c$, and $b_o$ denote the bias vectors of the input gate, the forget gate, the cell, and the output gate, and $\sigma(\cdot)$ denotes the sigmoid function $\sigma(x) = 1/(1+e^{-x})$.
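The gate equations can be written out directly; the following sketch (Python/NumPy) implements one time step of an LSTM memory unit as defined above, with the parameter dictionary p holding the weight matrices and bias vectors under the names used in the equations (the shapes are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(v_t, h_prev, c_prev, p):
    """One time step of the LSTM memory unit (with peephole terms W_ci, W_cf, W_co)."""
    i_t = sigmoid(p["W_vi"] @ v_t + p["W_hi"] @ h_prev + p["W_ci"] @ c_prev + p["b_i"])  # input gate
    f_t = sigmoid(p["W_vf"] @ v_t + p["W_hf"] @ h_prev + p["W_cf"] @ c_prev + p["b_f"])  # forget gate
    c_t = f_t * c_prev + i_t * np.tanh(p["W_vc"] @ v_t + p["W_hc"] @ h_prev + p["b_c"])  # cell state
    o_t = sigmoid(p["W_vo"] @ v_t + p["W_ho"] @ h_prev + p["W_co"] @ c_t + p["b_o"])     # output gate
    h_t = o_t * np.tanh(c_t)                                                             # hidden output
    return h_t, c_t
```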
The softmax classification layer classifies the neonatal pain expression as follows:
The probability that the input $x_t$ at time t ($x_t \in [x_1, \dots, x_T]$) is judged to belong to class $u'$ is
$$P(u' \mid x_t) = \frac{\exp\big(W_z^{(u')} h_t + b_z^{(u')}\big)}{\sum_{u''=1}^{u} \exp\big(W_z^{(u'')} h_t + b_z^{(u'')}\big)},$$
where $u' \in U$, $U = [1, \dots, u]$, $h_t$ is the output of the LSTM network for $x_t$, $W_z$ denotes the weights, and $b_z$ denotes the bias of the softmax layer.
The class assigned to sample $x_t$ at time t is then
$$y_t = \arg\max_{u' \in U} P(u' \mid x_t),$$
that is, taking the maximum over the u probability values, the class corresponding to the $u'$ with the largest probability is taken as the classification result of sample $x_t$, denoted $y_t$.
Thus, for an input frame sequence $[x_1, \dots, x_T]$ of length T, T classification results $[y_1, \dots, y_T]$ are obtained, and $y_T$ is finally taken as the category of the whole frame sequence.
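A sketch of this per-frame decision rule, with the final category taken from the last frame, might look as follows (Python/NumPy; W_z and b_z as defined above, and h_seq is assumed to hold the LSTM output $h_t$ for every frame):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_sequence(h_seq, W_z, b_z):
    """h_seq: (T, hidden_dim) LSTM outputs; returns per-frame labels and y_T."""
    y = [int(np.argmax(softmax(W_z @ h_t + b_z))) for h_t in h_seq]
    return y, y[-1]           # y_T is taken as the category of the whole sequence
```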
Step D. Using each group of sample expression image frames, together with its corresponding pain-degree grade, as training samples, train the constructed deep neural network by the BPTT algorithm to obtain the deep neural network for neonatal expression recognition; then proceed to step E.
Specifically, in step D, frame sequences $[x_1, \dots, x_T]$ of length T are intercepted from the videos of the different expression categories in the neonatal pain expression video library and used as training samples, and the CNN and LSTM networks are trained with the BPTT (Back Propagation Through Time) algorithm to obtain the optimized deep neural network.
Step E. Capture an actual expression video of the neonate, adjust its image frames, and apply the deep neural network for neonatal expression recognition to the actual expression video to obtain the corresponding pain-degree grade.
When the above neonatal pain expression recognition method based on a deep neural network is applied in practice, neonatal expression videos are collected under different conditions, each video being kept within 10–15 s. Professional medical staff grade the videos into 4 classes (calm, mild pain, severe pain, and crying) to establish a neonatal pain expression video library, the four classes being labeled 0–3 respectively. Face detection is performed on each sample video, expression images are detected frame by frame and normalized, and several sequences of 12 frames each are intercepted for training the deep neural network. The deep neural network based on the convolutional neural network and the long short-term memory network is then built, and the 12-frame sequences from the preprocessed videos, each frame at resolution 128 × 128, are used as the input of the system. The convolutional neural network extracts the spatial-domain features of the neonatal facial expression in each frame as follows:
First convolutional layer: 32 convolution kernels of size 11 × 11 convolve the input facial expression image, followed by a normalization operation; with convolution stride 3, this generates 32 feature maps of size 40 × 40.
Second pooling layer: a 2 × 2 window downsamples the 32 feature maps generated by the previous layer; with sliding stride 2, this generates 32 feature maps of size 20 × 20.
Third convolutional layer: 2 convolution kernels of size 5 × 5 convolve each of the 32 feature maps generated by the previous layer; with convolution stride 1, this generates 64 feature maps of size 16 × 16.
Fourth pooling layer: a 2 × 2 window downsamples the 2 × 32 feature maps generated by the previous layer; with sliding stride 2, this generates 2 × 32 feature maps of size 8 × 8.
Fifth convolutional layer: 2 convolution kernels of size 3 × 3 convolve each of the 64 feature maps generated by the previous layer; with convolution stride 1, this generates 128 feature maps of size 6 × 6.
Sixth pooling layer: a 2 × 2 window downsamples the 128 feature maps generated by the previous layer; with sliding stride 2, this generates 128 feature maps of size 3 × 3.
Seventh convolutional layer: 2 convolution kernels of size 2 × 2 convolve each of the 128 feature maps generated by the previous layer; with convolution stride 1, this generates 256 feature maps of size 2 × 2.
Eighth, fully connected layer: the fully connected layer concatenates the 256 feature maps of size 2 × 2 from the seventh convolutional layer into a 1024-dimensional feature vector, which serves as the input of the LSTM network.
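As one possible rendering of this embodiment (a sketch, not the patent's own implementation), the eight layers can be written in PyTorch; a single-channel 128 × 128 input is assumed, the per-map kernel scheme of the third, fifth, and seventh layers is expressed as grouped convolutions, and the activation functions between layers, which the patent does not specify, are omitted:

```python
import torch
import torch.nn as nn

class PainCNN(nn.Module):
    """Spatial feature extractor: one 128x128 frame -> 1024-dim feature vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=11, stride=3),               # -> 32 @ 40x40
            nn.MaxPool2d(kernel_size=2, stride=2),                    # -> 32 @ 20x20
            nn.Conv2d(32, 64, kernel_size=5, stride=1, groups=32),    # 2 kernels per map -> 64 @ 16x16
            nn.MaxPool2d(2, 2),                                       # -> 64 @ 8x8
            nn.Conv2d(64, 128, kernel_size=3, stride=1, groups=64),   # -> 128 @ 6x6
            nn.MaxPool2d(2, 2),                                       # -> 128 @ 3x3
            nn.Conv2d(128, 256, kernel_size=2, stride=1, groups=128), # -> 256 @ 2x2
        )
        self.flatten = nn.Flatten()                                   # 256 * 2 * 2 = 1024

    def forward(self, x):                     # x: (batch, 1, 128, 128)
        return self.flatten(self.features(x))  # -> (batch, 1024)
```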
Through the memory units introduced by the LSTM, the present invention can effectively express the temporal order of the frames; it can fuse the convolutional features over a longer time span, and the number of frames it can process is not subject to an upper limit, so videos of longer duration can be represented. The network parameters are optimized with the BPTT (Back Propagation Through Time) algorithm until the loss function tends to a stationary value, yielding the optimized deep neural network model. The BPTT algorithm is described as follows:
The error function is defined as ($n_0$ denotes the initial time and $n_1$ the end time)
$$E_{\mathrm{total}}(n_0, n_1) = \frac{1}{2} \sum_{n=n_0}^{n_1} \sum_{j \in \mathrm{outputs}} e_j^2(n),$$
where outputs is the set of output-layer units of the network, $e_j(n) = d_j(n) - y_j(n)$, $d_j(n)$ is the target output value of output-layer neuron j at time n for the training sample, and $y_j(n)$ is the actual output of output-layer neuron j at time n when the training sample is used as input.
First, a forward pass is computed on the data over the interval $[n_0, n_1]$, saving the complete record of the input data, the network state (weights), and the desired responses.
A single backward pass over this record is then performed to compute the local gradients
$$\delta_j(n) = -\frac{\partial E_{\mathrm{total}}(n_0, n_1)}{\partial v_j(n)} =
\begin{cases}
\varphi'(v_j(n))\, e_j(n), & n = n_1,\\
\varphi'(v_j(n))\big[e_j(n) + \sum_{k} w_{jk}\, \delta_k(n+1)\big], & n_0 < n < n_1,
\end{cases}$$
where $\varphi'(\cdot)$ denotes the derivative of the neuron activation function and $v_j(n)$ is the net input of neuron j at time n.
When the backward computation has returned to time $n_0 + 1$, the synaptic weight $w_{ji}$ of neuron j is adjusted by
$$\Delta w_{ji} = \eta \sum_{n=n_0+1}^{n_1} \delta_j(n)\, x_i(n-1),$$
where $\eta$ is the learning rate and $x_i(n-1)$ is the input of neuron i at time n−1.
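In a modern framework, the BPTT optimization of the combined CNN and LSTM reduces to ordinary backpropagation through the unrolled frame sequence. A hedged training-step sketch follows (PyTorch; PainCNN is the sketch above, and the hidden size, optimizer, and learning rate are illustrative choices, not values taken from the patent):

```python
import torch
import torch.nn as nn

cnn = PainCNN()
lstm = nn.LSTM(input_size=1024, hidden_size=512, num_layers=2, batch_first=True)
head = nn.Linear(512, 4)   # 4 classes: calm, mild pain, severe pain, crying
opt = torch.optim.SGD(
    [*cnn.parameters(), *lstm.parameters(), *head.parameters()], lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """frames: (B, T, 1, 128, 128) frame sequences; labels: (B,) grades 0-3."""
    B, T = frames.shape[:2]
    feats = cnn(frames.reshape(B * T, 1, 128, 128)).reshape(B, T, 1024)
    out, _ = lstm(feats)                       # unrolled over the T frames
    loss = loss_fn(head(out[:, -1]), labels)   # decision from the last frame (y_T)
    opt.zero_grad()
    loss.backward()                            # backpropagation through time
    opt.step()
    return loss.item()
```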
The structure of the long short-term memory network (Ψ recurrent neural network layers followed by a softmax classification layer, with each recurrent layer containing T LSTM memory units) and the gate equations of each LSTM memory unit are exactly as described above.
When training the network, the BPTT (Back Propagation Through Time) algorithm is used to minimize the training error, with the loss function
$$J(V, W) = -\sum_{d=1}^{D} \sum_{t=1}^{T} \log P\big(y_t^{(d)} \mid x_t^{(d)}; V, W\big),$$
where V and W are the feature-transformation parameters of the CNN and of the LSTM respectively, D is the total number of training samples, and $x_t$ is the input at time t.
For each input frame sequence $[x_1, \dots, x_T]$ of length T, the per-frame softmax probabilities, the per-frame decisions $y_t$, and the final sequence category $y_T$ are computed exactly as described above.
Step E. From the newly input neonatal facial expression video, 12-frame sequences are intercepted at intervals of 3 consecutive frames and used as the input of the neonatal pain expression recognition system, and the optimized deep neural network model performs the classification.
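A sketch of this sliding-window inference (Python; 12-frame windows advanced 3 frames at a time, as stated in the embodiment; each window would then be fed to the optimized CNN+LSTM model):

```python
def sliding_windows(frames, length=12, step=3):
    """Yield 12-frame sequences from a preprocessed frame list, every 3 frames."""
    for start in range(0, len(frames) - length + 1, step):
        yield frames[start:start + length]
```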
Embodiments of the present invention have been explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; those of ordinary skill in the art can also make various changes within the scope of their knowledge without departing from the concept of the present invention.

Claims (8)

1. A neonatal pain expression recognition method based on a deep neural network, characterized in that it comprises the following steps:
Step A. Collect sample pain-grade expression videos of neonates corresponding to each preset pain-degree grade, then proceed to step B;
Step B. For each sample pain-grade expression video, clip the video into individual expression image frames, thereby obtaining one group of sample expression image frames per sample pain-grade expression video; unify the frame length T of each group of sample expression image frames and the resolution m × n of all expression image frames, then proceed to step C;
Step C. Build a convolutional neural network and a long short-term memory network, and connect the output of the convolutional neural network to the input of the long short-term memory network, so that the two networks constitute a deep neural network; then proceed to step D;
Step D. Using each group of sample expression image frames, together with its corresponding pain-degree grade, as training samples, train the constructed deep neural network to obtain the deep neural network for neonatal expression recognition; then proceed to step E;
Step E. Capture an actual expression video of a neonate, adjust its image frames, and apply the deep neural network for neonatal expression recognition to the actual expression video to obtain the corresponding pain-degree grade.
2. The neonatal pain expression recognition method based on a deep neural network according to claim 1, characterized in that the preset pain-degree grades comprise a neonatal calm state, a neonatal crying state, and a neonatal mild-pain state and a neonatal severe-pain state caused by pain-inducing procedures.
3. The neonatal pain expression recognition method based on a deep neural network according to claim 1, characterized in that in step D the constructed deep neural network is trained by the BPTT algorithm, using each group of sample expression image frames, together with its corresponding pain-degree grade, as training samples.
4. The neonatal pain expression recognition method based on a deep neural network according to any one of claims 1 to 3, characterized in that the convolutional neural network built in step C comprises, in order from its input, a first convolutional layer, a second pooling layer, a third convolutional layer, a fourth pooling layer, a fifth convolutional layer, a sixth pooling layer, a seventh convolutional layer, and an eighth fully connected layer.
5. The neonatal pain expression recognition method based on a deep neural network according to claim 4, characterized in that the layers of the convolutional neural network, in order from its input, are as follows:
a first convolutional layer, in which $l_1$ convolution kernels of size $k_1 \times k_1$ convolve an expression image frame of resolution $m \times n$, yielding $l_1$ feature maps of resolution $m_1 \times n_1$, where $m_1 = \mathrm{INT}\big(\frac{m-k_1}{s_1}\big)+1$, $n_1 = \mathrm{INT}\big(\frac{n-k_1}{s_1}\big)+1$, $s_1$ denotes the convolution stride of this layer, and $\mathrm{INT}(\cdot)$ denotes the integer (floor) function;
a second pooling layer, in which a sliding window of preset size $p_1 \times p_1$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1$ feature maps of resolution $m_2 \times n_2$, where $m_2 = \mathrm{INT}\big(\frac{m_1-p_1}{s_2}\big)+1$, $n_2 = \mathrm{INT}\big(\frac{n_1-p_1}{s_2}\big)+1$, and $s_2$ denotes the sliding stride of this layer's window;
a third convolutional layer, in which $l_2$ convolution kernels of size $k_2 \times k_2$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2$ feature maps of resolution $m_3 \times n_3$, where $m_3 = \mathrm{INT}\big(\frac{m_2-k_2}{s_3}\big)+1$, $n_3 = \mathrm{INT}\big(\frac{n_2-k_2}{s_3}\big)+1$, and $s_3$ denotes the convolution stride of this layer;
a fourth pooling layer, in which a sliding window of preset size $p_2 \times p_2$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1 \times l_2$ feature maps of resolution $m_4 \times n_4$, where $m_4 = \mathrm{INT}\big(\frac{m_3-p_2}{s_4}\big)+1$, $n_4 = \mathrm{INT}\big(\frac{n_3-p_2}{s_4}\big)+1$, and $s_4$ denotes the sliding stride of this layer's window;
a fifth convolutional layer, in which $l_3$ convolution kernels of size $k_3 \times k_3$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2 \times l_3$ feature maps of resolution $m_5 \times n_5$, where $m_5 = \mathrm{INT}\big(\frac{m_4-k_3}{s_5}\big)+1$, $n_5 = \mathrm{INT}\big(\frac{n_4-k_3}{s_5}\big)+1$, and $s_5$ denotes the convolution stride of this layer;
a sixth pooling layer, in which a sliding window of preset size $p_3 \times p_3$ downsamples the feature maps output by the previous convolutional layer, yielding $l_1 \times l_2 \times l_3$ feature maps of resolution $m_6 \times n_6$, where $m_6 = \mathrm{INT}\big(\frac{m_5-p_3}{s_6}\big)+1$, $n_6 = \mathrm{INT}\big(\frac{n_5-p_3}{s_6}\big)+1$, and $s_6$ denotes the sliding stride of this layer's window;
a seventh convolutional layer, in which $l_4$ convolution kernels of size $k_4 \times k_4$ convolve the feature maps output by the previous pooling layer, yielding $l_1 \times l_2 \times l_3 \times l_4$ feature maps of resolution $m_7 \times n_7$, where $m_7 = \mathrm{INT}\big(\frac{m_6-k_4}{s_7}\big)+1$, $n_7 = \mathrm{INT}\big(\frac{n_6-k_4}{s_7}\big)+1$, and $s_7$ denotes the convolution stride of this layer;
an eighth, fully connected layer, which concatenates the $l_1 \times l_2 \times l_3 \times l_4$ feature maps of resolution $m_7 \times n_7$ output by the seventh convolutional layer into an $(l_1 \times l_2 \times l_3 \times l_4 \times m_7 \times n_7)$-dimensional feature vector.
6. The neonatal pain expression recognition method based on a deep neural network according to any one of claims 1 to 3, characterized in that the long short-term memory network built in step C comprises, in order from its input, a preset number of recurrent neural network layers and one classification layer, wherein the recurrent neural network layers are connected in sequence and the last of them is connected to the input of the classification layer.
7. The neonatal pain expression recognition method based on a deep neural network according to claim 6, characterized in that the classification layer of the long short-term memory network is a softmax classification layer.
8. The neonatal pain expression recognition method based on a deep neural network according to claim 7, characterized in that the long short-term memory network comprises, in order from its input, Ψ recurrent neural network layers and a softmax classification layer, wherein each recurrent neural network layer comprises T LSTM memory units, each LSTM memory unit comprises, connected in sequence, an input gate, a forget gate, a memory cell (Cell), and an output gate, and the softmax classification layer classifies the expression image frames after they have been processed by the successive recurrent neural network layers, completing the recognition of the neonatal expression.
CN201710497593.0A, priority date 2017-06-27, filing date 2017-06-27: Neonatal pain expression recognition method based on a deep neural network. Status: Pending. Published as CN107392109A (en).

Priority Applications (1)

CN201710497593.0A, filed 2017-06-27: Neonatal pain expression recognition method based on a deep neural network (en).

Publications (1)

CN107392109A, published 2017-11-24 (en)

Family

ID=60332733

Family Applications (1)

CN201710497593.0A (Pending), published as CN107392109A (en): Neonatal pain expression recognition method based on a deep neural network.

Country Status (1)

Country: CN. Publication: CN107392109A (en).


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106782602A (en) * 2016-12-01 2017-05-31 南京邮电大学 Speech-emotion recognition method based on length time memory network and convolutional neural networks
CN106599933A (en) * 2016-12-26 2017-04-26 哈尔滨工业大学 Text emotion classification method based on the joint deep learning model
CN106778657A (en) * 2016-12-28 2017-05-31 南京邮电大学 Neonatal pain expression classification method based on convolutional neural networks

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256549B (en) * 2017-12-13 2019-03-15 北京达佳互联信息技术有限公司 Image classification method, device and terminal
CN108256549A (en) * 2017-12-13 2018-07-06 北京达佳互联信息技术有限公司 Image classification method, device and terminal
CN109949264A (en) * 2017-12-20 2019-06-28 深圳先进技术研究院 A kind of image quality evaluating method, equipment and storage equipment
US11048983B2 (en) 2018-01-19 2021-06-29 Beijing Dajia Internet Information Technology Co., Ltd. Method, terminal, and computer storage medium for image classification
CN108399409A (en) * 2018-01-19 2018-08-14 北京达佳互联信息技术有限公司 Image classification method, device and terminal
WO2019141042A1 (en) * 2018-01-19 2019-07-25 北京达佳互联信息技术有限公司 Image classification method, device, and terminal
CN108399409B (en) * 2018-01-19 2019-06-18 北京达佳互联信息技术有限公司 Image classification method, device and terminal
CN110096940A (en) * 2018-01-29 2019-08-06 西安科技大学 A kind of Gait Recognition system and method based on LSTM network
CN108376220A (en) * 2018-02-01 2018-08-07 东巽科技(北京)有限公司 A kind of malice sample program sorting technique and system based on deep learning
CN108304823A (en) * 2018-02-24 2018-07-20 重庆邮电大学 A kind of expression recognition method based on two-fold product CNN and long memory network in short-term
CN108304823B (en) * 2018-02-24 2022-03-22 重庆邮电大学 Expression recognition method based on double-convolution CNN and long-and-short-term memory network
CN108549973A (en) * 2018-03-22 2018-09-18 中国平安人寿保险股份有限公司 Identification model is built and method, apparatus, storage medium and the terminal of assessment
CN108549973B (en) * 2018-03-22 2022-07-19 中国平安人寿保险股份有限公司 Identification model construction and evaluation method and device, storage medium and terminal
CN108388890A (en) * 2018-03-26 2018-08-10 南京邮电大学 A kind of neonatal pain degree assessment method and system based on human facial expression recognition
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning
CN108596069A (en) * 2018-04-18 2018-09-28 南京邮电大学 Neonatal pain expression recognition method and system based on depth 3D residual error networks
WO2019204700A1 (en) * 2018-04-19 2019-10-24 University Of South Florida Neonatal pain identification from neonatal facial expressions
US11202604B2 (en) 2018-04-19 2021-12-21 University Of South Florida Comprehensive and context-sensitive neonatal pain assessment system and methods using multiple modalities
CN108921024A (en) * 2018-05-31 2018-11-30 东南大学 Expression recognition method based on human face characteristic point information Yu dual network joint training
CN108908353B (en) * 2018-06-11 2021-08-13 安庆师范大学 Robot expression simulation method and device based on smooth constraint reverse mechanical model
CN108908353A (en) * 2018-06-11 2018-11-30 安庆师范大学 Robot expression based on the reverse mechanical model of smoothness constraint imitates method and device
CN108960122A (en) * 2018-06-28 2018-12-07 南京信息工程大学 A kind of expression classification method based on space-time convolution feature
CN109214412A (en) * 2018-07-12 2019-01-15 北京达佳互联信息技术有限公司 A kind of training method and device of disaggregated model
CN109062404A (en) * 2018-07-20 2018-12-21 东北大学 A kind of interactive system and method applied to intelligent children's early learning machine
CN109062404B (en) * 2018-07-20 2020-03-24 东北大学 Interaction system and method applied to intelligent early education machine for children
CN109063643B (en) * 2018-08-01 2021-09-28 中国科学院合肥物质科学研究院 Facial expression pain degree identification method under condition of partial hiding of facial information
CN109063643A (en) * 2018-08-01 2018-12-21 中国科学院合肥物质科学研究院 A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part
CN113196410A (en) * 2018-09-07 2021-07-30 卢西恩公司 Systems and methods for pain treatment
CN109614883A (en) * 2018-11-21 2019-04-12 瑾逸科技发展扬州有限公司 A kind of tight sand crack intelligent identification Method based on convolutional neural networks
CN109558838A (en) * 2018-11-29 2019-04-02 北京经纬恒润科技有限公司 A kind of object identification method and system
CN109766765A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Audio data method for pushing, device, computer equipment and storage medium
CN110175505A (en) * 2019-04-08 2019-08-27 北京网众共创科技有限公司 Determination method, apparatus, storage medium and the electronic device of micro- expression type
CN110321827A (en) * 2019-06-27 2019-10-11 嘉兴深拓科技有限公司 A kind of pain level appraisal procedure based on face pain expression video
CN110555379A (en) * 2019-07-30 2019-12-10 华南理工大学 human face pleasure degree estimation method capable of dynamically adjusting features according to gender
CN110555379B (en) * 2019-07-30 2022-03-25 华南理工大学 Human face pleasure degree estimation method capable of dynamically adjusting features according to gender
CN111401117B (en) * 2019-08-14 2022-08-26 南京邮电大学 Neonate pain expression recognition method based on double-current convolutional neural network
CN111401117A (en) * 2019-08-14 2020-07-10 南京邮电大学 Neonate pain expression recognition method based on double-current convolutional neural network
CN112741593A (en) * 2019-10-29 2021-05-04 重庆医科大学附属儿童医院 Pain assessment method, system, device and storage medium
CN111046808A (en) * 2019-12-13 2020-04-21 江苏大学 Analysis method of drinking and playing waterer for raising pigs by adopting residual convolutional neural network and long-short term memory classification group
CN111046808B (en) * 2019-12-13 2024-03-22 江苏大学 Analysis method for pig raising drinking water and playing drinking water device by adopting residual convolution neural network and long-term and short-term memory classification group
WO2021188079A3 (en) * 2020-03-16 2021-12-30 Eski̇şehi̇r Osmangazi̇ Üni̇versi̇tesi̇ An artificial intelligence method to ensure the assessment the level of comfort and pain of babies
WO2021243336A1 (en) * 2020-05-29 2021-12-02 West Virginia University Board of Governors on behalf of West Virginia University Evaluating pain of a user via time series of parameters from portable monitoring devices


Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication
Application publication date: 2017-11-24