CN109543526A - True/false facial paralysis identification system based on deep differential features
- Publication number: CN109543526A
- Application number: CN201811220859.8A
- Authority
- CN
- China
- Prior art keywords
- facial paralysis
- identification
- true
- training image
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V40/161 Human faces: Detection; Localisation; Normalisation
- G06N3/045 Neural networks: Combinations of networks
- G06V40/168 Human faces: Feature extraction; Face representation
- G06V40/172 Human faces: Classification, e.g. identification
Abstract
The invention discloses a true/false facial paralysis identification system based on deep differential features, comprising: a training-image acquisition module, for acquiring training images and building a training image set; an identification-network building module, for building an identification network and training it on the training image set to obtain an identification model, wherein the identification network extracts the differential information of the deep features of the input images through a dual-branch convolutional neural network, then uses a single-branch convolutional neural network to perform further deep-feature extraction on that differential information to obtain deep features, and identifies true/false facial paralysis according to the differences of these deep features; and an identification module, for acquiring images to be identified and identifying them with the identification model to obtain a recognition result. The invention achieves a good recognition effect, can effectively accomplish true/false facial paralysis identification, and has important practical value in clinical diagnosis.
Description
Technical field
The present invention relates to the fields of medical treatment and image-recognition technology, and in particular to a true/false facial paralysis identification system based on deep differential features.
Background art
Facial paralysis is a common disease with a very wide incidence range and no age limit. It not only affects the patient's daily life to a certain degree but also takes a psychological toll, seriously affecting the patient's physical and mental health. As the incidence of facial paralysis keeps rising, more and more scholars have begun to study facial paralysis recognition.
To realize automatic recognition of facial paralysis, many scholars at home and abroad have studied this problem. They have focused on static facial asymmetry and dynamic facial changes, tracked the motion differences of key points, located key points with deep-learning methods, and extracted features with DCNNs to identify facial paralysis. Researchers have used various methods to identify facial asymmetry or abnormality and have achieved some success; among them, deep-learning methods both simplify the pipeline and reach higher accuracy. However, these methods all make their decision from the abnormal condition of the face, such as facial asymmetry, and tend to judge any facial asymmetry or abnormality as facial paralysis. In reality, some people exhibit facial abnormalities without being facial-paralysis patients; we call such cases the mimicked-facial-paralysis phenomenon. Because researchers have ignored this phenomenon, existing facial paralysis recognition methods misjudge such cases, and the accuracy of their recognition results deviates accordingly, reducing the accuracy of facial paralysis recognition to a certain extent. Therefore, in view of the influence of mimicked-facial-paralysis data on recognition performance, the field of facial paralysis recognition needs a feasible automatic method for identifying true/false facial paralysis.
Summary of the invention
The object of the present invention is to provide a true/false facial paralysis identification system based on deep differential features. The system extracts the deep features of images taken at different moments through a dual-branch convolutional neural network and computes the feature difference between the two images from the extracted features; it then uses a single-branch convolutional neural network to extract the features of the deep difference feature and identifies true/false facial paralysis from those features, thereby effectively accomplishing true/false facial paralysis identification.
To achieve the above task, the invention adopts the following technical scheme:
A true/false facial paralysis identification system based on deep differential features, comprising:
a training-image acquisition module, for acquiring training images and building a training image set;
an identification-network building module, for building an identification network and training it on the training image set to obtain an identification model; the identification network extracts the differential information of the deep features of the input images through a dual-branch convolutional neural network, then uses a single-branch convolutional neural network to perform further deep-feature extraction on that differential information to obtain deep features, and identifies true/false facial paralysis according to the differences of these deep features; and
an identification module, for acquiring images to be identified and identifying them with the identification model to obtain a recognition result.
Further, the input of the identification network is two input images. The dual-branch convolutional neural network produces a feature-map sequence for each input image; the two feature-map sequences are each fused, giving two fused feature maps; a metric function for obtaining the difference information is then constructed from the two fused feature maps, and this metric function extracts the difference of the two feature maps, yielding a difference feature map.
Further, the input of the single-branch convolutional neural network is the difference feature map; after the deep features of the difference feature map are extracted by convolution and pooling operations and passed through the fully connected layers, the respective results are obtained by the loss function.
Further, acquiring the training images and building the training image set comprises:
acquiring raw data, including facial-paralysis data and mimicked-facial-paralysis data, in which:
the facial-paralysis data comprise still images and video data of different facial actions of facial-paralysis patients, and the mimicked-facial-paralysis data comprise still images and video data of normal persons imitating those facial actions;
for the video data of a facial action, locating the start of the action in the video to obtain the continuous frames in which the action occurs, and extracting the middle frame of those continuous frames as the key-frame image;
for the still images and key-frame images of the facial actions, performing region detection to detect and extract the facial region in each image and remove all background information unrelated to the face, thereby obtaining the training images; and
building the training image set from the training images, with the training images corresponding to facial-paralysis data as the positive class and those corresponding to mimicked-facial-paralysis data as the negative class.
Further, when the identification network is trained on the training image set, two still images or key-frame images of each facial action acquired at different moments are chosen from the training image set as one group of input images for training the identification network.
Further, acquiring the images to be identified comprises:
acquiring still images or video data of the facial actions of the person under test; if video data are obtained, extracting key frames to obtain key-frame images, and then performing facial-region extraction on the still images and key-frame images to obtain the images to be identified; and
taking two images to be identified of the same facial action of the person under test at different moments as one group of input images.
Further, the first half of the identification network is the dual-branch convolutional neural network, whose structure is set as:
(conv+ReLU)+(conv+ReLU+pooling)+(conv+ReLU)+(conv+ReLU+pooling)+(conv+ReLU)×3
where conv is a convolutional layer, ReLU is the activation function, and pooling is a pooling layer.
The second half of the identification network is the single-branch convolutional neural network, whose structure is set as:
(conv+ReLU+pooling+LRN)×2+(conv+ReLU)×2+(conv+ReLU+pooling)+(fc+ReLU)×2+softmax
where conv is a convolutional layer, ReLU is the activation function, pooling is a pooling layer, LRN is the normalization function, fc is a fully connected layer, and softmax is the loss function.
Further, the loss function is:
softmax(z_j) = e^{z_j} / Σ_{k=1}^{K} e^{z_k}
where K is the number of classes, j = 1, …, K, e is the exponential base, z_j is the score of the current class, and z_k is the score of class k.
Further, in the dual-branch convolutional neural network, the convolution kernel size of every convolutional layer is 3 × 3 with a default stride of 1, and the pooling layers use a 2 × 2 pooling size with a stride of 2.
Further, in the single-branch convolutional neural network, the first convolutional layer uses 11 × 11 kernels with a stride of 4; the second layer uses 5 × 5 kernels with a default stride of 1; the first two layers are each followed by pooling and are normalized with LRN; the third, fourth, and fifth layers use 3 × 3 kernels with a default stride of 1; and every pooling layer uses a 3 × 3 pooling size with a stride of 2.
Compared with the prior art, the present invention has the following technical characteristics:
1. The system identifies facial paralysis with a deep-learning method. Convolutional neural networks are highly adaptive, good at mining data features, and similar to biological neural networks, while facial features have strong individual stability and inter-individual variability. Performing face recognition from facial biometric features lets the network learn an implicit representation of the regularities of facial images, avoiding complicated hand-crafted feature extraction and improving the recognition rate while reducing complexity. A deep-learning method can therefore reach a good recognition effect for facial paralysis. The system improves the convolutional-neural-network structure to make it suitable for true/false facial paralysis identification and reach higher recognition accuracy.
2. The invention proposes to identify true/false facial paralysis with a dual-channel difference neural network. The design of the deep-differential network derives from picture-similarity discrimination with Siamese neural networks: a Siamese network extracts the features of two pictures through a dual-branch convolutional network and maps the features of the two pictures onto a function; during training it computes their Euclidean distance (the loss function), minimizing the loss for pairs of samples from the same class and maximizing it for pairs from different classes, thereby judging image similarity.
3. The system uses some of the principles of the Siamese neural network and improves on the network: it extracts the deep feature maps of a pair of images through convolutional networks, and the extracted feature maps retain characteristic information such as the texture and shape of the images, making up for the deficiency of Siamese networks in identifying true/false facial paralysis. By computing the feature-map difference of the two images it obtains a difference feature map, and finally extracts the features of this deep difference feature through a convolutional network to classify true/false facial paralysis.
4. For true/false facial paralysis identification, the system focuses on feature differences such as the texture, shape, and position of the facial features and the skin texture. It not only resolves the facial-paralysis misjudgments caused by mimicked-facial-paralysis data in automated facial paralysis recognition, but also further improves the accuracy of facial paralysis identification, providing a more accurate, convenient, and efficient automatic identification method for the clinical diagnosis of facial paralysis.
Detailed description of the invention
Fig. 1 compares facial-paralysis action images at different moments (eye closing);
Fig. 2 shows some facial-action images of facial-paralysis patients;
Fig. 3 shows some mimicked-facial-paralysis facial-action images;
Fig. 4 is a schematic diagram of obtaining video action key frames with Multi-stage CNNs;
Fig. 5 is a schematic diagram of target detection with Faster R-CNN;
Fig. 6 is a structural schematic diagram of the identification network.
Specific embodiments
Analysis of the image and video data of true and false facial paralysis reveals the following: when a facial-paralysis patient repeats an action (e.g., wrinkling the nose, showing the teeth, puffing the cheeks, closing the eyes), the repetitions are almost always nearly identical, whereas when a mimicked-facial-paralysis subject repeats the same mimicking action at different moments (a normal person imitating a facial-paralysis patient's action), notable differences often appear between repetitions, as shown in Fig. 1.
Accordingly, we believe an important basis for identifying true/false facial paralysis is the difference between actions performed at different moments: when the difference between repetitions is large, there is a high probability of mimicked facial paralysis; when the difference is small, there is a high probability of a true facial-paralysis patient. Therefore, to identify true/false facial paralysis in measured data, attention must be paid to the difference between the two action images, and the subject is discriminated according to that difference.
The invention proposes an identification system that extracts the deep features of images taken at different moments through a dual-branch convolutional neural network and computes the feature difference between the two images from the extracted features; it then uses a single-branch convolutional neural network to extract the features of the deep difference feature and identifies true/false facial paralysis accordingly, which has important practical value in clinical diagnosis. The system is specified as follows:
A true/false facial paralysis identification system based on deep differential features, comprising:
1. A training-image acquisition module, for acquiring training images and building the training image set.
First, the raw data are acquired; the raw data here are facial-paralysis data and mimicked-facial-paralysis data, as shown in Fig. 2 and Fig. 3, in which:
the facial-paralysis data comprise still images and video data of different facial actions of facial-paralysis patients, and the mimicked-facial-paralysis data comprise still images and video data of normal persons imitating those facial actions. In this embodiment there are 7 facial actions: smiling, showing the teeth, wrinkling the nose, frowning, raising the eyebrows, closing the eyes, and puffing the cheeks. The still images are photographs, and the video data are videos shot while the facial actions are performed. To obtain training data suitable for the system, the raw data must be preprocessed, as follows:
(1) For a single occurrence of an action, the peak of the motion amplitude is usually located near the middle of the action's generation process, so the invention uses the middle frame of the action's continuous frames as the key-frame image used in the experiments. Specifically, for the video data of a facial action, the start of the action in the video is located to obtain the continuous frames in which the action occurs, and the middle frame of those continuous frames is extracted as the key-frame image, as shown in Fig. 4. The localization method employed in this embodiment is the Multi-stage CNNs method.
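The key-frame rule above, taking the middle frame of the continuous frames in which the action occurs, reduces to simple index arithmetic; the function name and signature below are illustrative, since the patent only describes the rule:

```python
def middle_keyframe(onset: int, offset: int) -> int:
    """Index of the key frame for an action spanning frames onset..offset.

    The patent takes the middle frame of the action's continuous frames,
    on the assumption that the peak of the motion amplitude lies near
    the middle of the action's generation process.
    """
    if offset < onset:
        raise ValueError("offset must not precede onset")
    return onset + (offset - onset) // 2
```

For example, an eye-closing action detected over frames 120 to 180 would yield frame 150 as the key frame.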
(2) For the still images and key-frame images of the facial actions, the following processing is performed: region detection is applied to detect and extract the facial region in each image (still image or key-frame image) and remove all background information unrelated to the face, avoiding as far as possible any influence on the experimental results from differences between images caused by different acquisition backgrounds; the training images are thereby obtained.
The training image set is built from the training images, with the training images corresponding to facial-paralysis data as the positive class and those corresponding to mimicked-facial-paralysis data as the negative class. Each class covers the 7 actions, and for every action the training images corresponding to two images (still images or key-frame images) acquired at two different moments are chosen for training the identification network.
2. Identification-network building module
The identification-network building module of the system builds the identification network shown in Fig. 6, through which the automatic identification of true/false facial paralysis is carried out.
In this module, the identification network is trained on the training image set to obtain the identification model. The identification network extracts the differential information of the deep features of the input images through the dual-branch convolutional neural network, then uses the single-branch convolutional neural network to perform further deep-feature extraction on that differential information to obtain deep features, and identifies true/false facial paralysis according to the differences of these deep features. The specific steps are as follows:
2.1 The first half of the identification network is the dual-branch convolutional neural network, which extracts deep feature maps for one group of two input images of the same action at different moments. The two branches share the same structure and the same parameters (7 convolutional layers and two pooling layers each) and extract the deep features of the two pictures simultaneously.
The structure of the dual-branch convolutional neural network is set as:
(conv+ReLU)+(conv+ReLU+pooling)+(conv+ReLU)+(conv+ReLU+pooling)+(conv+ReLU)×3
where conv is a convolutional layer, ReLU is the activation function, and pooling is a pooling layer. The convolution kernel size of every convolutional layer is 3 × 3 with a default stride of 1, and the pooling layers use a 2 × 2 pooling size with a stride of 2. The dual-branch convolutional neural network yields two feature-map sequences.
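As a sanity check on the structure above, one branch can be written out as a layer list and the spatial size traced through it; treating the 3 × 3, stride-1 convolutions as size-preserving ('same' padding) is an assumption here, since the patent states only the kernel sizes and strides:

```python
# One branch of the dual-branch network, as set out above:
# (conv+ReLU) + (conv+ReLU+pooling) + (conv+ReLU)
# + (conv+ReLU+pooling) + (conv+ReLU)x3
BRANCH = ["conv", "conv", "pool", "conv", "conv", "pool",
          "conv", "conv", "conv"]  # 7 conv layers, 2 pooling layers

def spatial_size(size: int, layers: list) -> int:
    """Trace the feature-map side length through one branch.

    Assumes the 3x3 stride-1 convolutions preserve spatial size
    ('same' padding, an assumption the patent does not state);
    each 2x2 stride-2 pooling halves it.
    """
    for op in layers:
        if op == "pool":
            size //= 2
    return size
```

Under this assumption, a 224 × 224 input would come out of a branch at 56 × 56.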
In this embodiment, the input of the identification network is two input images. The dual-branch convolutional neural network produces a feature-map sequence for each input image; the two feature-map sequences are each fused, giving two fused feature maps, and a metric function for obtaining the difference information is then constructed from the two fused feature maps. This metric function extracts the difference of the two feature maps, yielding the difference feature map.
In this embodiment, the fusion of a feature-map sequence is expressed as:
F(I) = (f_{m,1}, f_{m,2}, f_{m,3}, …, f_{m,256})
F_m = (f_{m,1} + f_{m,2} + f_{m,3} + … + f_{m,256}) / 256
where I denotes an input image, f_{m,1}, f_{m,2}, f_{m,3}, …, f_{m,256} denote the feature maps, F(I) denotes the feature-map sequence, and F_m denotes the fused feature map. In this embodiment the fused feature maps corresponding to the two input images are denoted F_{m,1} and F_{m,2}, and the metric function is constructed as f_d(x) = D_F = F_{m,1} − F_{m,2}, where D_F denotes the difference feature map; this metric function extracts the difference of the two feature maps and yields the difference feature map.
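The fusion and metric function above reduce to an average over the channel axis followed by an element-wise subtraction; a minimal numpy sketch, with illustrative array shapes:

```python
import numpy as np

def fuse(feature_maps: np.ndarray) -> np.ndarray:
    """F_m = (f_{m,1} + ... + f_{m,256}) / 256: average the C feature
    maps from one branch (shape (C, H, W), C = 256 in the patent) into
    a single fused feature map of shape (H, W)."""
    return feature_maps.mean(axis=0)

def difference_map(maps_a: np.ndarray, maps_b: np.ndarray) -> np.ndarray:
    """The metric function f_d: D_F = F_{m,1} - F_{m,2}, the element-wise
    difference of the two fused feature maps."""
    return fuse(maps_a) - fuse(maps_b)
```

The difference feature map D_F then becomes the input of the single-branch network described next.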
2.2 The second half of the identification network is the single-branch convolutional neural network, which realizes true/false facial paralysis identification by extracting the differential features of the deep features. It comprises 5 convolutional layers, 3 pooling layers, and two LRN normalizations. The structure of the single-branch convolutional neural network is set as:
(conv+ReLU+pooling+LRN)×2+(conv+ReLU)×2+(conv+ReLU+pooling)+(fc+ReLU)×2+softmax
where conv is a convolutional layer, ReLU is the activation function, pooling is a pooling layer, LRN is the normalization function, fc is a fully connected layer, and softmax is the loss function. The first convolutional layer uses 11 × 11 kernels with a stride of 4; the second layer uses 5 × 5 kernels with a default stride of 1; the first two layers are each followed by pooling and are normalized with LRN; the third, fourth, and fifth layers use 3 × 3 kernels with a default stride of 1; and every pooling layer uses a 3 × 3 pooling size with a stride of 2. The input of this part of the network is the difference feature map; after its deep features are extracted by convolution and pooling and passed through the 2 fully connected layers, the final classification result is output by softmax.
2.3 Training of the identification network
The identification network is trained on the training image set, with the softmax function as the loss function:
softmax(z_j) = e^{z_j} / Σ_{k=1}^{K} e^{z_k}
where K is the number of classes, j = 1, …, K, e is the exponential base, z_j is the score of the current class, and z_k is the score of class k. In this embodiment, j = 1, 2. During training, every two images in the training image set, i.e., two still images or key-frame images of the same facial action acquired at different moments, form one group of input images, and training realizes the two-class classification of true/false facial paralysis. The steps of network training are:
2.3.1 Data preparation: the training image set is divided into a training set and a validation set; facial-paralysis data are marked as the positive class with label 0, mimicked-facial-paralysis data as the negative class with label 1, and a txt label-list file is generated.
2.3.2 The labeled data are converted into lmdb files, train_lmdb and val_lmdb respectively, and the mean file mean.binaryproto is generated.
2.3.3 The parameters of the network configuration files solver.prototxt and train_val.prototxt are set. Dropout and weight regularization of the fully connected layers are used to prevent overfitting; edge padding is first applied to the training samples, usable samples are screened from the acquired data to guarantee the diversity and balance of the samples, and the data are normalized. The network is set as a two-class model with an output class count of 2, and the training accuracy is output.
2.3.4 After the data are imported, the network model is trained with the deep-learning framework caffe, and the training result is saved to obtain the identification model.
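The softmax function used as the network's loss can be sketched in numpy; shifting the scores by their maximum is a standard numerical-stability trick, not part of the patent text:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """p_j = e^{z_j} / sum_k e^{z_k} over the K class scores z."""
    e = np.exp(z - z.max())  # shift by max(z) for numerical stability
    return e / e.sum()
```

For the two-class case of this embodiment (K = 2), equal scores give equal class probabilities of 0.5.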
3. Identification module
The identification module acquires the images to be identified and identifies them with the identification model to obtain the recognition result. Specifically:
3.1 Acquiring the images to be identified
Still images or video data of the facial actions of the person under test are acquired; if video data are obtained, key frames are extracted to obtain key-frame images. Facial-region extraction is then performed on the still images and key-frame images with Faster R-CNN, the facial region is segmented to remove the background information, and the images to be identified are obtained; the final images to be identified have a size of 224 × 224 × 3.
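The patent does not say how the extracted facial region is brought to the 224 × 224 × 3 network input size; a nearest-neighbour resize is one minimal stand-in (a real pipeline would typically use OpenCV or PIL):

```python
import numpy as np

def resize_nn(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize of an HxWx3 face crop to size x size x 3."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[rows][:, cols]
```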
Two images to be identified of the same facial action of the person under test at different moments form one group of input images.
3.2 Identification process
One group of input images is input into the identification model, and the two-class recognition result is obtained. The identification model outputs the probability of each possible class, and the class with the highest probability value is selected as the final label, thereby outputting the predicted class.
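Selecting the highest-probability class as the final label is a one-line argmax; the class-index convention below follows the labels assigned during training, and the helper name is illustrative:

```python
def predict(probs: list) -> int:
    """Return the index of the most probable class: 0 = facial paralysis
    (positive class), 1 = mimicked facial paralysis (negative class)."""
    return max(range(len(probs)), key=lambda k: probs[k])
```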
The identification system of the invention may further comprise:
an output module, which may use a display screen, a touch screen, a voice output device, etc., for display output and/or voice output of the recognition result.
Test section
Still images and video data of different facial actions of 57 facial-paralysis patients were acquired, together with still images and video data of 106 normal persons rehearsing facial-paralysis actions: 2282 video clips and 1033 still images in total. These were preprocessed to obtain the training images, from which 700 pairs were chosen as the training image set and 441 pairs as the test image set.
Different training sets were randomly drawn from the training image set and the model was trained 10 times, yielding multiple identification models.
The 10 identification models were each tested on the test image set, the experimental results were averaged, and the accuracy, precision, recall, and F1 value of the experimental results were determined from the averages. The final recognition accuracy of the system is 89.67%, the precision is 88.6%, the recall is 92%, and the F1 value is 90.16%, a relatively high accuracy for true/false facial paralysis identification.
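The F1 value reported above is, as usual, the harmonic mean of precision and recall; a minimal helper for the metric definition used in the test section:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)
```

With the reported precision of 88.6% and recall of 92%, the harmonic mean comes to about 90.3%; the reported 90.16% is presumably an average of per-run F1 values over the 10 models rather than the F1 of the averaged precision and recall.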
Claims (10)
1. A true/false facial paralysis identification system based on deep differential features, characterized by comprising:
a training-image acquisition module, for acquiring training images and building a training image set;
an identification-network building module, for building an identification network and training it on the training image set to obtain an identification model, wherein the identification network extracts the differential information of the deep features of the input images through a dual-branch convolutional neural network, then uses a single-branch convolutional neural network to perform further deep-feature extraction on that differential information to obtain deep features, and identifies true/false facial paralysis according to the differences of these deep features; and
an identification module, for acquiring images to be identified and identifying them with the identification model to obtain a recognition result.
2. The true/false facial paralysis identification system based on deep differential features of claim 1, characterized in that the input of the identification network is two input images; the dual-branch convolutional neural network produces a feature-map sequence for each input image; the two feature-map sequences are each fused, giving two fused feature maps; a metric function for obtaining the difference information is then constructed from the two fused feature maps; and this metric function extracts the difference of the two feature maps, yielding a difference feature map.
3. The true/false facial paralysis identification system based on deep differential features of claim 2, characterized in that the input of the single-branch convolutional neural network is the difference feature map; after the deep features of the difference feature map are extracted by convolution and pooling operations and passed through the fully connected layers, the respective results are obtained by the loss function.
4. The true and false facial paralysis identification system based on deep differential features according to claim 1, wherein obtaining training images and establishing the training image set comprises:
obtaining raw data, including facial paralysis data and fake facial paralysis data, wherein:
the facial paralysis data includes still images and video data of different facial movements of facial paralysis patients, and the fake facial paralysis data includes still images and video data of normal subjects imitating the different facial movements of facial paralysis patients;
for the video data of a facial movement, locating the starting position of the facial movement in the video data, obtaining the consecutive frames in which the facial movement occurs, and extracting the middle frame of those consecutive frames as a key-frame image;
for the still images and key-frame images of facial movements, performing the following processing on each: face-region detection, which detects and extracts the face region in the image and removes all background information unrelated to the facial information, to obtain a training image;
establishing the training image set from the training images, wherein the training images corresponding to the facial paralysis data serve as the positive class and the training images corresponding to the fake facial paralysis data serve as the negative class.
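The key-frame step above (take the middle frame of the consecutive frames in which the movement occurs) can be sketched directly; the inclusive index handling is an assumption:

```python
def extract_key_frame(frames, start, end):
    """Given the consecutive frames [start, end] (inclusive) in which a
    facial movement occurs, return the middle frame as the key frame.
    A simple reading of the claim; index conventions are an assumption.
    """
    return frames[(start + end) // 2]

frames = [f"frame_{i}" for i in range(100)]
key = extract_key_frame(frames, 30, 50)  # movement spans frames 30..50
```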
5. The true and false facial paralysis identification system based on deep differential features according to claim 1, wherein, when the identification network is trained with the training image set, two still images or key-frame images of each facial movement acquired at different moments are selected from the training image set as one group of input images for training the identification network.
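One way to form such input groups is to pair every two images of the same facial movement; the exhaustive pairing below is an assumption, since the claim only requires two images per group:

```python
import itertools

def make_input_pairs(images_by_action):
    """For each facial movement, pair two images of that movement taken
    at different moments into one input group.  The exhaustive pairing
    scheme is an assumption; the claim only fixes the group size (two).
    """
    pairs = []
    for action, imgs in images_by_action.items():
        for img_a, img_b in itertools.combinations(imgs, 2):
            pairs.append((action, img_a, img_b))
    return pairs

data = {
    "raise_eyebrows": ["t0.png", "t1.png", "t2.png"],
    "close_eyes": ["t0.png", "t1.png"],
}
pairs = make_input_pairs(data)
```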
6. The true and false facial paralysis identification system based on deep differential features according to claim 1, wherein obtaining the images to be identified comprises:
obtaining still images or video data of the facial movements of the person under test; if video data is obtained, extracting key frames to obtain key-frame images; then performing face-region extraction on the still images and key-frame images to obtain the images to be identified;
taking two images to be identified of the same facial movement of the person under test at different moments as one group of input images.
7. The true and false facial paralysis identification system based on deep differential features according to claim 1, wherein the first half of the identification network is the two-branch convolutional neural network, whose structure is set as follows: [layer table given as an image in the original], where conv is a convolutional layer, ReLU is the activation function, and pooling is a pooling layer;
the second half of the identification network is the single-branch convolutional neural network, whose structure is set as follows: [layer table given as an image in the original], where conv is a convolutional layer, ReLU is the activation function, pooling is a pooling layer, LRN is a normalization function, fc is a fully connected layer, and softmax is the loss function.
8. The true and false facial paralysis identification system based on deep differential features according to claim 7, wherein the loss function is:

softmax(z_j) = e^{z_j} / Σ_{k=1}^{K} e^{z_k}

where K is the number of classes, j = 1, …, K−1, e is the exponential base, z_j is the score of the current class, and z_k is the score of class k.
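The claimed softmax can be implemented directly; in this NumPy sketch the shift by the maximum score is an addition for numerical stability, not part of the claimed formula:

```python
import numpy as np

def softmax(z):
    """Softmax over class scores z_1..z_K, matching the claimed loss
    formula e^{z_j} / sum_k e^{z_k}.  Subtracting the maximum score
    before exponentiation avoids overflow and leaves the result
    unchanged."""
    z = np.asarray(z, dtype=np.float64)
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax([2.0, 1.0, 0.1])
```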
9. The true and false facial paralysis identification system based on deep differential features according to claim 7, wherein, in the two-branch convolutional neural network, the convolution kernel size of each convolutional layer is 3 × 3 with a default stride of 1, and the pooling size of each pooling layer is 2 × 2 with a stride of 2.
10. The true and false facial paralysis identification system based on deep differential features according to claim 1, wherein, in the single-branch convolutional neural network, the first convolutional layer uses an 11 × 11 kernel with a stride of 4; the second layer uses a 5 × 5 kernel with a default stride of 1; the first two layers apply LRN normalization along with their convolution and pooling; the third, fourth, and fifth layers use 3 × 3 kernels with a default stride of 1; and each pooling layer has a pooling size of 3 × 3 with a stride of 2.
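These settings (11 × 11 stride-4 first layer, 3 × 3 stride-2 pooling) follow the AlexNet pattern, so the familiar sizes fall out of the same output-size formula. The 227 × 227 input resolution below is an assumption; the claim does not state the input size:

```python
def layer_out(size, kernel, stride, pad=0):
    """Output spatial size: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# First single-branch layer: 11x11 kernel, stride 4, then 3x3
# pooling with stride 2, assuming a 227x227 input:
s1 = layer_out(227, 11, 4)  # conv output side
p1 = layer_out(s1, 3, 2)    # pooled output side
```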
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811220859.8A CN109543526B (en) | 2018-10-19 | 2018-10-19 | True and false facial paralysis recognition system based on depth difference characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109543526A true CN109543526A (en) | 2019-03-29 |
CN109543526B CN109543526B (en) | 2022-11-08 |
Family
ID=65844204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811220859.8A Active CN109543526B (en) | 2018-10-19 | 2018-10-19 | True and false facial paralysis recognition system based on depth difference characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543526B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825168A (en) * | 2016-02-02 | 2016-08-03 | 西北大学 | Golden snub-nosed monkey face detection and tracking algorithm based on S-TLD |
CN106250819A (en) * | 2016-07-20 | 2016-12-21 | 上海交通大学 | Based on face's real-time monitor and detection facial symmetry and abnormal method |
CN106980815A (en) * | 2017-02-07 | 2017-07-25 | 王俊 | Facial paralysis objective evaluation method under being supervised based on H B rank scores |
US20170256033A1 (en) * | 2016-03-03 | 2017-09-07 | Mitsubishi Electric Research Laboratories, Inc. | Image Upsampling using Global and Local Constraints |
WO2017177661A1 (en) * | 2016-04-15 | 2017-10-19 | 乐视控股(北京)有限公司 | Convolutional neural network-based video retrieval method and system |
CN107292256A (en) * | 2017-06-14 | 2017-10-24 | 西安电子科技大学 | Depth convolved wavelets neutral net expression recognition method based on secondary task |
WO2018054283A1 (en) * | 2016-09-23 | 2018-03-29 | 北京眼神科技有限公司 | Face model training method and device, and face authentication method and device |
WO2018068416A1 (en) * | 2016-10-14 | 2018-04-19 | 广州视源电子科技股份有限公司 | Neural network-based multilayer image feature extraction modeling method and device and image recognition method and device |
CN108363979A (en) * | 2018-02-12 | 2018-08-03 | 南京邮电大学 | Neonatal pain expression recognition method based on binary channels Three dimensional convolution neural network |
CN108416780A (en) * | 2018-03-27 | 2018-08-17 | 福州大学 | A kind of object detection and matching process based on twin-area-of-interest pond model |
CN108447057A (en) * | 2018-04-02 | 2018-08-24 | 西安电子科技大学 | SAR image change detection based on conspicuousness and depth convolutional network |
CN108491835A (en) * | 2018-06-12 | 2018-09-04 | 常州大学 | Binary channels convolutional neural networks towards human facial expression recognition |
WO2018157862A1 (en) * | 2017-03-02 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Vehicle type recognition method and device, storage medium and electronic device |
Non-Patent Citations (4)
Title |
---|
He Zhichao et al., "Multi-resolution feature fusion convolutional neural network for facial expression recognition", Laser & Optoelectronics Progress * |
Li Jiani et al., "Face recognition combining feature-matching fusion with an improved convolutional neural network", Laser & Optoelectronics Progress * |
Li Siquan et al., "Research on facial expression recognition based on convolutional neural networks", Software Guide * |
Hu Zhengping et al., "Multi-level deep network fusion face recognition algorithm", Pattern Recognition and Artificial Intelligence * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110689081B (en) * | 2019-09-30 | 2020-08-21 | 中国科学院大学 | Weak supervision target classification and positioning method based on bifurcation learning |
CN110689081A (en) * | 2019-09-30 | 2020-01-14 | 中国科学院大学 | Weak supervision target classification and positioning method based on bifurcation learning |
CN111325708A (en) * | 2019-11-22 | 2020-06-23 | 济南信通达电气科技有限公司 | Power transmission line detection method and server |
CN111428800A (en) * | 2020-03-30 | 2020-07-17 | 南京工业大学 | Tea true-checking method based on 0-1 model |
CN111428800B (en) * | 2020-03-30 | 2023-07-18 | 南京工业大学 | Tea verification method based on 0-1 model |
CN111680545A (en) * | 2020-04-25 | 2020-09-18 | 深圳德技创新实业有限公司 | Semantic segmentation based accurate facial paralysis degree evaluation method and device |
CN112001213A (en) * | 2020-04-25 | 2020-11-27 | 深圳德技创新实业有限公司 | Accurate facial paralysis degree evaluation method and device based on 3D point cloud segmentation |
CN112001213B (en) * | 2020-04-25 | 2024-04-12 | 深圳德技创新实业有限公司 | Accurate facial paralysis degree evaluation method and device based on 3D point cloud segmentation |
CN111967344B (en) * | 2020-07-28 | 2023-06-20 | 南京信息工程大学 | Face fake video detection oriented refinement feature fusion method |
CN111967344A (en) * | 2020-07-28 | 2020-11-20 | 南京信息工程大学 | Refined feature fusion method for face forgery video detection |
CN112365462A (en) * | 2020-11-06 | 2021-02-12 | 华雁智科(杭州)信息技术有限公司 | Image-based change detection method |
CN112597842A (en) * | 2020-12-15 | 2021-04-02 | 周美跃 | Movement detection facial paralysis degree evaluation system based on artificial intelligence |
CN112597842B (en) * | 2020-12-15 | 2023-10-20 | 芜湖明瞳数字健康科技有限公司 | Motion detection facial paralysis degree evaluation system based on artificial intelligence |
CN113080855B (en) * | 2021-03-30 | 2023-10-31 | 广东省科学院智能制造研究所 | Facial pain expression recognition method and system based on depth information |
CN113080855A (en) * | 2021-03-30 | 2021-07-09 | 广东省科学院智能制造研究所 | Facial pain expression recognition method and system based on depth information |
CN113379685A (en) * | 2021-05-26 | 2021-09-10 | 广东炬森智能装备有限公司 | PCB defect detection method and device based on dual-channel feature comparison model |
CN113420737A (en) * | 2021-08-23 | 2021-09-21 | 成都飞机工业(集团)有限责任公司 | 3D printing pattern recognition method based on convolutional neural network |
WO2023040146A1 (en) * | 2021-09-17 | 2023-03-23 | 平安科技(深圳)有限公司 | Behavior recognition method and apparatus based on image fusion, and electronic device and medium |
WO2023051563A1 (en) * | 2021-09-28 | 2023-04-06 | 北京百度网讯科技有限公司 | Adhesion detection model training method, adhesion detection method, and related apparatuses |
Also Published As
Publication number | Publication date |
---|---|
CN109543526B (en) | 2022-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109543526A (en) | True and false facial paralysis identification system based on deep differential features | |
Hernandez-Ortega et al. | Deepfakeson-phys: Deepfakes detection based on heart rate estimation | |
Wang et al. | MESNet: A convolutional neural network for spotting multi-scale micro-expression intervals in long videos | |
Yu et al. | Image quality classification for DR screening using deep learning | |
Liao et al. | Deep facial spatiotemporal network for engagement prediction in online learning | |
CN110503081A (en) | Act of violence detection method, system, equipment and medium based on inter-frame difference | |
Bu | Human motion gesture recognition algorithm in video based on convolutional neural features of training images | |
CN111914643A (en) | Human body action recognition method based on skeleton key point detection | |
CN110189305A (en) | A kind of multitask tongue picture automatic analysis method | |
CN110037693A (en) | A kind of mood classification method based on facial expression and EEG | |
CN109063643A (en) | A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part | |
CN109063626A (en) | Dynamic human face recognition methods and device | |
Borgalli et al. | Deep learning for facial emotion recognition using custom CNN architecture | |
Perikos et al. | Recognizing emotions from facial expressions using neural network | |
Wang et al. | Learning to augment expressions for few-shot fine-grained facial expression recognition | |
Rao et al. | Facial expression recognition with multiscale graph convolutional networks | |
Jang et al. | Facial attribute recognition by recurrent learning with visual fixation | |
Ullah et al. | Emotion recognition from occluded facial images using deep ensemble model. | |
Khan et al. | Enhanced Deep Learning Hybrid Model of CNN Based on Spatial Transformer Network for Facial Expression Recognition | |
Hassan et al. | SIPFormer: Segmentation of multiocular biometric traits with transformers | |
Esmaeili et al. | Spotting micro‐movements in image sequence by introducing intelligent cubic‐LBP | |
Boncolmo et al. | Gender Identification Using Keras Model Through Detection of Face | |
Ruan et al. | Facial expression recognition in facial occlusion scenarios: A path selection multi-network | |
CN115953822A (en) | Face video false distinguishing method and device based on rPPG physiological signal | |
George et al. | Real-time deep learning based system to detect suspicious non-verbal gestures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |