CN110069994A - Face attribute recognition system and method based on multiple face regions - Google Patents
Face attribute recognition system and method based on multiple face regions
- Publication number: CN110069994A (application CN201910210915.8A)
- Authority: CN (China)
- Prior art keywords: face, feature, global, region, multizone
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06V40/165 — Human faces; detection, localisation, normalisation using facial parts and geometric relationships
- G06V40/169 — Human faces; feature extraction; holistic features and representations, i.e. based on the facial image taken as a whole
- G06V40/171 — Human faces; feature extraction; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
Abstract
The invention belongs to the field of face recognition and relates to a face attribute recognition system and method based on multiple face regions. It aims to solve the problem that face attribute recognition from a single global face image ignores local detail information, resulting in low recognition accuracy. The system includes a preprocessing unit, a face feature extraction unit, and a prediction unit. The preprocessing unit performs facial landmark detection on the face in the input image, aligns and crops the face image, and outputs a first image together with its corresponding facial landmarks. The face feature extraction unit extracts a global face feature, a local face feature, and a fused face feature. The prediction unit concatenates the global face feature, the local face feature, and the fused face feature into a final classification feature and performs attribute recognition with a dedicated attribute classifier. By exploiting multiple face regions, the present invention extracts more effective face attribute features and obtains more accurate face attribute estimates.
Description
Technical field
The invention belongs to the field of face recognition, and in particular relates to a face attribute recognition system and method based on multiple face regions.
Background technique
Face attribute analysis usually takes a single global face image as input. This tends to ignore local detail information, which is vital for analyzing face attributes: for example, when estimating age, the wrinkles at the corners of the eyes and the beard carry rich age information, and for expression attributes, the corners of the mouth are often raised when a person smiles. Both global and local face region information are therefore highly important when analyzing face attributes, and how to comprehensively exploit global and local face region information to improve recognition accuracy is a problem to be solved in this field.
Summary of the invention
In order to solve the above problem in the prior art, namely that face attribute recognition from a single global face image ignores detail information and thus suffers low recognition accuracy, a first aspect of the present invention proposes a face attribute recognition system based on multiple face regions. The system includes a preprocessing unit, a face feature extraction unit, and a prediction unit.
The preprocessing unit is configured to perform facial landmark detection on the face in an input image and, based on the detected landmark positions, align and crop the face image to obtain a first image and its corresponding facial landmarks.
The face feature extraction unit is configured to extract, from the first image and its facial landmarks, a global face feature based on the global face region, a local face feature based on local face regions, and a fused face feature based on both global and local face regions.
The prediction unit is configured to concatenate the global face feature, the local face feature, and the fused face feature into a final classification feature and to perform attribute recognition with a dedicated attribute classifier.
In some preferred embodiments, the face feature extraction unit includes a low-level feature extraction module, a multi-region aligned-feature generation module, a multi-region feature extraction module, and a feature fusion module.
The low-level feature extraction module includes one or more convolutional layers and is configured to extract low-level neural network features from the first image.
The multi-region aligned-feature generation module includes multiple aligned region pooling layers and is configured to derive the global face regions and local face regions from the low-level neural network features extracted by the low-level feature extraction module.
The multi-region feature extraction module includes a global face feature extraction branch, a local face feature extraction branch, and a fused face feature extraction branch. The global branch is configured to extract global face features from the acquired global face regions; the local branch is configured to extract local face features from the acquired local face regions; the fused branch is configured to concatenate the intermediate features of the corresponding sub-networks of the global and local branches to obtain fused face features.
The feature fusion module is based on a long short-term memory network and fuses the multiple features produced by each branch of the multi-region feature extraction module, yielding the fused global face feature, local face feature, and fused face feature.
In some preferred embodiments, the feature fusion method used in the feature fusion module is: obtain the hidden states of the long short-term memory network in each branch, and sum the obtained hidden states.
In some preferred embodiments, the prediction unit includes a feature concatenation module and an attribute prediction module.
The feature concatenation module is configured to concatenate the fused global face feature, local face feature, and fused face feature output by the face feature extraction unit into the final classification feature.
The attribute prediction module is configured to perform attribute recognition on the final classification feature with a dedicated attribute classifier B.
In some preferred embodiments, the attribute classifier B is an age classifier, and its prediction is

y'_i = Σ_j j · p_j(x_i)

where y'_i is the predicted attribute value for the i-th sample, and p_j(x_i) is the probability that the final classification feature x_i of the i-th sample belongs to the j-th class of the corresponding attribute.
In some preferred embodiments, the attribute classifier B is a gender classifier or an expression classifier, and its prediction is

y'_i = argmax_j p_j(x_i)

where y'_i is the predicted attribute class of the i-th sample, and p_j(x_i) is the probability that the final classification feature x_i of the i-th sample belongs to the j-th class of the corresponding attribute.
In some preferred embodiments, the parameters of the face feature extraction unit and the prediction unit are optimized by training. During training, attribute classifiers A corresponding to attribute classifier B are attached between the multi-region feature extraction module and the feature fusion module of the face feature extraction unit, in order to obtain the class probabilities of the set attribute for each face feature output by the multi-region feature extraction module.
In some preferred embodiments, the training loss function L of the face feature extraction unit and the prediction unit is

L = -(1/n) Σ_{i=1}^{n} [ log p_{y_i}(x_i) + Σ_{ζ∈φ} Σ_{k=1}^{K} log p_{y_i}(h_i^{ζ_k}) ]

where n is the number of training samples, y_i the true attribute label of the i-th sample, φ = {g, l, c} the set of global, local, and fused face features produced by the multi-region feature extraction module, K the number of sub-networks per branch of the multi-region feature extraction module, x_i the final classification feature of the i-th sample, h_i^{ζ_k} the feature of the i-th sample in sub-network ζ_k (where ζ may be any member of the set φ = {g, l, c}), p_{y_i}(x_i) the probability that the i-th sample is finally assigned to class y_i, and p_{y_i}(h_i^{ζ_k}) the probability that the i-th sample is assigned to class y_i in sub-network ζ_k.
In some preferred embodiments, the network parameters are updated with the stochastic gradient descent algorithm during the training of the face feature extraction unit and the prediction unit.
A second aspect of the present invention proposes a face attribute recognition method based on multiple face regions, built on the above face attribute recognition system based on multiple face regions. The method includes the following steps:
Step S10: process the input image with the preprocessing unit to obtain the first image and its corresponding facial landmarks.
Step S20: from the first image and its facial landmarks, extract with the face feature extraction unit a global face feature based on the global face region, a local face feature based on local face regions, and a fused face feature based on both global and local face regions.
Step S30: input the global face feature, the local face feature, and the fused face feature into the prediction unit, concatenate them into the final classification feature, and perform attribute recognition with a dedicated attribute classifier.
Beneficial effects of the present invention:
The present invention jointly considers global, local, and fused global-local face information. By extracting more effective face attribute features from multiple face regions, it obtains more accurate face attribute estimates.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-restrictive embodiments, read in conjunction with the accompanying drawings:
Fig. 1 is a schematic framework diagram of the face attribute recognition system based on multiple face regions according to an embodiment of the present invention;
Fig. 2 is a schematic network structure diagram of the face attribute recognition system based on multiple face regions according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the aligned region pooling (ARP) layer in an embodiment of the present invention;
Fig. 4 is a schematic diagram of ARP aligned-region feature generation in an embodiment of the present invention.
Specific embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
The face attribute recognition method based on multiple face regions proposed by the present invention extracts features of global, local, and global-local face regions simultaneously, capturing global and local face information at the same time and thereby improving face attribute analysis performance.
A face attribute recognition system based on multiple face regions according to the present invention includes a preprocessing unit, a face feature extraction unit, and a prediction unit.
The preprocessing unit is configured to perform facial landmark detection on the face in an input image and, based on the detected landmark positions, align and crop the face image to obtain a first image and its corresponding facial landmarks.
The face feature extraction unit is configured to extract, from the first image and its facial landmarks, a global face feature based on the global face region, a local face feature based on local face regions, and a fused face feature based on both global and local face regions.
The prediction unit is configured to concatenate the global face feature, the local face feature, and the fused face feature into a final classification feature and to perform attribute recognition with a dedicated attribute classifier.
A face attribute recognition method based on multiple face regions according to the present invention, built on the above face attribute recognition system based on multiple face regions, includes the following steps:
Step S10: process the input image with the preprocessing unit to obtain the first image and its corresponding facial landmarks.
Step S20: from the first image and its facial landmarks, extract with the face feature extraction unit a global face feature based on the global face region, a local face feature based on local face regions, and a fused face feature based on both global and local face regions.
Step S30: input the global face feature, the local face feature, and the fused face feature into the prediction unit, concatenate them into the final classification feature, and perform attribute recognition with a dedicated attribute classifier.
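As a rough orientation, the S10-S30 pipeline can be sketched as follows. This is a minimal sketch, not the patented implementation: the function names, feature sizes, and the placeholder stubs standing in for the trained networks are all illustrative assumptions.

```python
import numpy as np

def preprocess(image):
    """S10: stand-in for face detection, landmark detection, and alignment;
    returns a 224x224 'first image' and 5 facial landmarks (x, y)."""
    first_image = np.zeros((224, 224, 3))
    landmarks = np.array([[70, 90], [150, 90], [110, 130],
                          [85, 170], [140, 170]], dtype=float)
    return first_image, landmarks

def extract_features(first_image, landmarks):
    """S20: stand-in for the three extraction branches; returns the fused
    global, local, and global-local features (sizes arbitrary)."""
    f_g = np.ones(64)   # global face feature
    f_l = np.ones(64)   # local face feature
    f_c = np.ones(64)   # fused (global-local) face feature
    return f_g, f_l, f_c

def predict(f_g, f_l, f_c, W):
    """S30: concatenate into the final classification feature and apply a
    softmax attribute classifier with weight matrix W."""
    x = np.concatenate([f_g, f_l, f_c])   # final classification feature
    logits = W @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()                    # class probabilities

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 192))             # 8 attribute classes, untrained weights
probs = predict(*extract_features(*preprocess(None)), W)
```

With trained networks in place of the stubs, `probs` would be the attribute distribution that the dedicated classifier thresholds or aggregates into the final prediction.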
To describe the present invention more clearly, each part of the invention is expanded upon below with reference to the accompanying drawings.
The face attribute recognition system based on multiple face regions of an embodiment of the present invention, as shown in Fig. 1, includes a preprocessing unit, a face feature extraction unit, and a prediction unit.
1. Preprocessing unit
The unit is configured to perform facial landmark detection on the face in the input image and, based on the detected landmark positions, align and crop the face image to obtain the first image and its corresponding facial landmarks.
For an input image, the preprocessing steps are:
Step 101: first detect whether the image contains a face; if not, discard the image, otherwise go to the next step.
Step 102: perform facial landmark detection on the face detected in step 101.
Step 103: according to the landmark positions obtained in step 102, align the face image, crop it to a preset size (e.g., 224×224), and output the aligned image.
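A minimal sketch of the crop-to-preset-size part of step 103, under the assumption of a square crop around the landmark bounding box with nearest-neighbour resampling; the patent does not specify the cropping rule, so `crop_and_resize`, its margin, and the toy landmark values are illustrative.

```python
import numpy as np

def crop_and_resize(image, landmarks, out_size=224, margin=0.4):
    """Crop a square window around the landmark bounding box (with a margin),
    resample it to out_size x out_size by nearest neighbour, and map the
    landmarks into the cropped coordinate frame."""
    (x0, y0), (x1, y1) = landmarks.min(axis=0), landmarks.max(axis=0)
    side = max(x1 - x0, y1 - y0) * (1 + 2 * margin)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    left, top = cx - side / 2, cy - side / 2
    scale = out_size / side
    # Indices of the source pixels backing each output pixel (clipped to the image).
    ys = np.clip((np.arange(out_size) / scale + top).astype(int), 0, image.shape[0] - 1)
    xs = np.clip((np.arange(out_size) / scale + left).astype(int), 0, image.shape[1] - 1)
    cropped = image[np.ix_(ys, xs)]
    new_landmarks = (landmarks - [left, top]) * scale
    return cropped, new_landmarks

# Toy 480x640 grayscale image with five landmarks (eyes, nose, mouth corners).
img = np.arange(480 * 640, dtype=float).reshape(480, 640)
lms = np.array([[200., 150.], [300., 150.], [250., 200.],
                [220., 260.], [280., 260.]])
crop, new_lms = crop_and_resize(img, lms)
```

Because the crop window contains the landmark bounding box by construction, the rescaled landmarks always fall inside the 224×224 output frame.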
2. Face feature extraction unit
The unit is configured to extract, from the first image and its facial landmarks, the global face feature based on the global face region, the local face feature based on local face regions, and the fused face feature based on both global and local face regions.
In this embodiment, the face feature extraction unit, as shown in Fig. 2, includes a low-level feature extraction module, a multi-region aligned-feature generation module, a multi-region feature extraction module, and a feature fusion module.
2.1 Low-level feature extraction module
The module includes one or more convolutional layers (shown in Fig. 2) and is configured to extract low-level neural network features from the first image. Its input is the aligned image I_i output by the preprocessing unit.
2.2 Multi-region aligned-feature generation module
The module includes multiple aligned region pooling (Aligned Region Pooling, ARP) layers and is configured to generate multiple face regions, including global face regions and local face regions, from the low-level neural network features extracted by the low-level feature extraction module.
ARP selects and aligns a specific face region through a set of specific facial landmarks. The ARP operation is illustrated in Fig. 3; its concrete implementation is described below.
The ARP operation in the embodiment of the present invention aligns region features through an affine transformation. Assume the input landmarks are R_s = {p_1, …, p_m} and the target landmarks are R_t = {q_1, …, q_m}, where p_i denotes the i-th landmark and m the number of landmarks. Given R_s and R_t, the parameter matrix M_A of the affine transformation T_{M_A} can be obtained by minimizing the mean squared error, as in formula (1):

M_A = argmin_M Σ_{i=1}^{m} || T_M(p_i) - q_i ||²   (1)

After optimizing formula (1) to obtain the parameter matrix M_A, for an input feature map X_s the aligned region feature is X_t = T_{M_A}(X_s).
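A sketch of the affine alignment step: the 2×3 parameter matrix that best maps source landmarks to target landmarks in the least-squares sense can be found with a homogeneous-coordinate solve. The helper names and landmark values below are illustrative, not the patent's actual landmark template.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix M minimizing sum_i ||M [p_i; 1] - q_i||^2.
    src, dst: (m, 2) arrays of corresponding landmarks."""
    m = src.shape[0]
    P = np.hstack([src, np.ones((m, 1))])     # homogeneous source points, (m, 3)
    # Solve P @ M.T ~= dst in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(P, dst, rcond=None)
    return M_T.T                               # (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (m, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]

src = np.array([[30., 40.], [90., 38.], [60., 70.], [40., 95.], [85., 93.]])
# Target landmark layout (illustrative values only).
dst = np.array([[70., 90.], [150., 90.], [110., 130.], [85., 170.], [140., 170.]])
M_A = fit_affine(src, dst)
aligned = apply_affine(M_A, src)
```

In the ARP layer the same transformation is then applied to the feature map, i.e. each output location samples the input feature at its affinely mapped coordinates.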
In the multi-region aligned-feature generation module of this embodiment, 5 ARP operations are used in total to generate 2 global face region features and 3 local face region features (three global face regions are actually used in the embodiment, but since the face image has already been aligned in the preprocessing unit, the first global face region no longer needs ARP alignment), as shown in Fig. 4. The two ARP-generated global region features are aligned with five landmarks: the two eye centers, the nose tip, and the two mouth corners; the target landmark positions are also shown in Fig. 4 (the target landmarks of the two regions differ, one set being more compact). In addition, ARP generates three local region features for the left eye, the nose, and the mouth (as shown in Fig. 4, left-eye alignment uses four landmarks: the two left-eye corners, the eye center, and the left-eyebrow center; nose alignment uses four landmarks: the two eye centers, the point between the eyebrows, and the nose tip; mouth alignment uses four landmarks: the nose tip, the two mouth corners, and the mouth center).
2.3 Multi-region feature extraction module
The module includes a global face feature extraction branch, a local face feature extraction branch, and a fused face feature extraction branch. The global branch extracts global face features from the acquired global face regions; the local branch extracts local face features from the acquired local face regions; the fused branch concatenates the intermediate features of the corresponding sub-networks of the global and local branches to obtain fused face features.
The module is illustrated in Fig. 2. It contains three branches: a global branch, a local branch, and a global-local branch, where each branch contains multiple sub-networks (3 in this embodiment) that extract the features of the individual regions. Specifically, each sub-network in the global branch operates on one global region feature from the multi-region aligned-feature generation module and produces a global feature f_i^{g_k} through a series of convolutional, pooling, and fully connected layers, where g_k denotes the k-th global sub-network and i the i-th input image. Each sub-network in the local branch operates on one local region feature and likewise produces a local feature f_i^{l_k}, where l_k denotes the k-th local sub-network. The input of the global-local branch differs slightly from that of the other two branches: each of its inputs is the concatenation of the intermediate features of the corresponding sub-networks of the global and local branches (the two branches have the same number of sub-networks, so they can be paired one-to-one); see Fig. 2. The first sub-network of the global branch and the first sub-network of the local branch are concatenated to give the first concatenated feature of the global-local branch, and the second and third concatenated features are obtained in the same way. In Fig. 2, B_ν^{g_k}, B_ν^{l_k}, and B_ν^{c_k} denote the ν-th convolutional block of the k-th sub-network in the global, local, and global-local branches respectively; for example, B_2^{g_1} denotes the 2nd convolutional block of the 1st sub-network in the global branch, and the remaining symbols follow by analogy.
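The one-to-one pairing that feeds the global-local branch can be sketched as a per-index concatenation of intermediate features; the feature sizes below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Intermediate features of the 3 global and 3 local sub-networks (sizes illustrative).
g_inter = [rng.normal(size=64) for _ in range(3)]
l_inter = [rng.normal(size=64) for _ in range(3)]
# k-th global-local input = concat(k-th global intermediate, k-th local intermediate).
c_inputs = [np.concatenate([g, l]) for g, l in zip(g_inter, l_inter)]
```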
2.4 Feature fusion module
The module is based on a long short-term memory network and fuses the multiple features produced by each branch of the multi-region feature extraction module, yielding the fused global face feature, local face feature, and fused face feature.
Considering the correlation between face regions, this module uses a long short-term memory network (Long Short-Term Memory, LSTM) to mine the correlation between the features of different regions. One LSTM network is used for each branch of the multi-region feature extraction module. Taking the global branch as an example, the hidden states of the LSTM network are obtained as in formula (2):

h_k^g = LSTM(f_i^{g_k}, h_{k-1}^g)   (2)

That is, the k-th hidden state h_k^g is obtained by feeding the global region feature f_i^{g_k} and the previous hidden state h_{k-1}^g into the LSTM unit. After all hidden states have been obtained, the fused feature is obtained by summation, as in formula (3):

f_i^g = Σ_k h_k^g   (3)

Likewise, LSTMs are used in the local branch and the global-local branch to mine the relationships between different regions, yielding the local fusion feature f_i^l and the global-local fusion feature f_i^c.
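The run-then-sum fusion of formulas (2) and (3) can be sketched with a minimal numpy LSTM cell. The gate ordering, dimensions, and random parameters are illustrative assumptions, not the trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order assumed here: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def fuse_branch(region_feats, W, U, b, H):
    """Run the LSTM over the branch's region features in sequence (formula (2))
    and sum the hidden states to get the branch's fused feature (formula (3))."""
    h, c = np.zeros(H), np.zeros(H)
    hidden = []
    for x in region_feats:                  # one feature per sub-network, k = 1..K
        h, c = lstm_step(x, h, c, W, U, b)  # formula (2)
        hidden.append(h)
    return np.sum(hidden, axis=0)           # formula (3)

rng = np.random.default_rng(0)
D, H, K = 64, 32, 3                         # illustrative sizes
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
feats = [rng.normal(size=D) for _ in range(K)]  # e.g. the 3 global sub-network features
f_g = fuse_branch(feats, W, U, b, H)
```

Because each hidden state is conditioned on the previous ones, the summed feature mixes information across regions rather than treating them independently, which is the stated motivation for using an LSTM here.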
3. Prediction unit
The unit is configured to concatenate the global face feature, the local face feature, and the fused face feature into the final classification feature and to perform attribute recognition with a dedicated attribute classifier.
The prediction unit includes a feature concatenation module and an attribute prediction module.
The feature concatenation module is configured to concatenate the fused global face feature, local face feature, and fused face feature output by the face feature extraction unit into the final classification feature.
The attribute prediction module is configured to perform attribute recognition on the final classification feature with a dedicated attribute classifier B.
In the present embodiment, two attribute classifier modules are used: attribute classifier module A and attribute classifier module B. Attribute classifier module A is used only during training; its main function is to let each sub-network learn its own independent features, so that the final fused feature is richer. In the prediction stage, attribute classifier module A is not used, and the final attribute prediction is generated only by attribute classifier module B. That is, during training, attribute classifiers A corresponding to attribute classifier B are attached between the multi-region feature extraction module and the feature fusion module of the face feature extraction unit; after training, only attribute classifier B is retained in the face attribute recognition system based on multiple face regions.
Attribute classifier module A: each sub-network of each branch of the multi-region feature extraction module is followed by a softmax attribute classifier. Taking the k-th sub-network of the global branch as an example, for the j-th class of some attribute (such as age or expression), the probability p_j(h_i^{g_k}) is given by formula (4):

p_j(h_i^{g_k}) = exp(w_j^T h_i^{g_k}) / Σ_{j'} exp(w_{j'}^T h_i^{g_k})   (4)

where h_i^{g_k} denotes the face region feature extracted by the k-th sub-network of the global branch and w_j the parameters of the j-th class in the softmax classifier.
Attribute classifier module B: after the global feature f_i^g, the local feature f_i^l, and the global-local feature f_i^c have been obtained, the feature concatenation operation of the feature concatenation module splices them into the final classification feature x_i, as in formula (5):

x_i = [f_i^g; f_i^l; f_i^c]   (5)

Attribute recognition, such as age classification or expression recognition, is then performed by the softmax classifier of attribute classifier module B. For the j-th class of some attribute (such as age or expression), the probability is given by formula (6):

p_j(x_i) = exp(w_j^T x_i) / Σ_{j'} exp(w_{j'}^T x_i)   (6)

where w_j denotes the parameters of the j-th class in the softmax classifier.
In some embodiments, attribute classifier B is an age classifier, and the predicted age is obtained as the expectation of the class index under the output probabilities, as in formula (7):

y'_i = Σ_j j · p_j(x_i)   (7)

where y'_i is the predicted attribute value of the i-th sample and p_j(x_i) is the probability that the final classification feature x_i of the i-th sample belongs to the j-th class of the corresponding attribute.
In other embodiments, attribute classifier B is a gender classifier or an expression classifier, and the class with the maximum output probability is taken as the final prediction, as in formula (8):

y'_i = argmax_j p_j(x_i)   (8)

where y'_i is the predicted attribute class of the i-th sample and p_j(x_i) is the probability that the final classification feature x_i of the i-th sample belongs to the j-th class of the corresponding attribute.
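The two decision rules of formulas (7) and (8), sketched on an illustrative probability vector:

```python
import numpy as np

p = np.array([0.05, 0.10, 0.20, 0.40, 0.15, 0.10])  # p_j(x_i) for classes j = 0..5

# Formula (7): age predicted as the expectation of the class index.
age_pred = float(np.sum(np.arange(len(p)) * p))
# Formula (8): categorical attributes (gender, expression) by maximum probability.
class_pred = int(np.argmax(p))
```

On this vector the expectation lands at 2.8 while the argmax picks class 3, which illustrates why the expectation is the natural choice for ordinal attributes such as age: it interpolates between neighbouring classes instead of committing to one.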
In order to reach better recognition effect, face characteristic extraction unit, predicting unit in the embodiment of the present invention need
By training carry out parameter optimization, using increase attributive classification device modules A after face characteristic extraction unit, predicting unit as to
Training CNN network, training step include:
Step 201: the training image set is processed by the preprocessing unit to obtain face images and their face key points; the images are annotated with attribute class labels to construct the training dataset.
Step 202: N training samples are randomly selected from the training dataset and input into the CNN network to be trained.
Step 203: the outputs of attribute classifier modules A and attribute classifier module B are obtained from the CNN network to be trained.
Step 204: the loss function L is computed, as shown in formula (9):
L = -(1/n) ∑i [ log p_{yi}(xi) + ∑k log p_{yi}(xi^{ζk}) ]    (9)
where n is the number of training samples, yi is the true attribute label of the i-th sample, φ = {g, l, c} denotes the set of global face features, local face features and fusion face features obtained by the multi-region feature extraction module, k is the number of branches of the multi-region feature extraction module, xi is the final classification feature of the i-th sample, xi^{ζk} is the final classification feature of the i-th sample in branch ζk (where ζk can be any element of the set φ = {g, l, c}), p_{yi}(xi) is the probability that the i-th sample is finally classified into age yi, and p_{yi}(xi^{ζk}) is the probability that the i-th sample is classified into age yi in branch ζk. In this loss function, the first term is the softmax loss of the final classifier B, and the second term is the sum of the softmax losses of the attribute classifiers A in all sub-networks.
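The combined loss — the softmax loss of the final classifier B plus the summed softmax losses of the branch classifiers A — can be sketched as below. This is an illustrative NumPy stand-in that takes precomputed probabilities; the per-sample averaging is an assumption about formula (9):

```python
import numpy as np

def multi_branch_loss(p_final, p_branches, y):
    """Softmax (cross-entropy) loss of the final classifier B plus the
    summed losses of the branch classifiers A, averaged over n samples.

    p_final    : (n, J) class probabilities of the final classifier B.
    p_branches : list over branches zeta_k in {g, l, c}; each (n, J).
    y          : (n,) true attribute labels y_i.
    """
    n = len(y)
    idx = np.arange(n)
    loss = -np.log(p_final[idx, y]).sum()   # first term: classifier B
    for p_k in p_branches:                  # second term: classifiers A
        loss += -np.log(p_k[idx, y]).sum()
    return loss / n

# Perfect predictions give zero loss; uniform predictions do not.
n, J = 2, 3
y = np.array([0, 2])
perfect = np.eye(J)[y]
uniform = np.full((n, J), 1.0 / J)
```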
Step 205: whether the training loss has converged is judged; if so, training is terminated and the optimized face feature extraction unit and prediction unit are obtained; otherwise, the process continues to the next step.
Step 206: the gradients of the network parameters are computed, and the network parameters are updated using the Stochastic Gradient Descent (SGD) algorithm.
Step 207: return to step 202.
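Steps 202–207 form a standard mini-batch SGD loop. The sketch below trains a single softmax layer on synthetic data as a stand-in for the CNN of the embodiment; the dimensions, learning rate, and data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 samples, 16-dim features, 4 attribute classes.
n, d, J, lr = 200, 16, 4, 0.5
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(J, d))
y = (X @ W_true.T).argmax(axis=1)      # labels realizable by a linear model

W = np.zeros((J, d))                   # parameters to be trained

def batch_loss(W, X, y):
    """Softmax cross-entropy loss and class probabilities (steps 203-204)."""
    logits = X @ W.T
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean(), p

losses = []
for step in range(200):
    batch = rng.choice(n, size=32, replace=False)   # step 202: sample N examples
    Xb, yb = X[batch], y[batch]
    loss, p = batch_loss(W, Xb, yb)                 # steps 203-204
    losses.append(loss)
    grad = p.copy()
    grad[np.arange(len(yb)), yb] -= 1.0             # d(loss)/d(logits) = p - onehot
    W -= lr * (grad.T @ Xb) / len(yb)               # step 206: SGD update

final_loss, _ = batch_loss(W, X, y)
```

In practice step 205's convergence test would break the loop when the loss stops decreasing; here a fixed iteration count stands in for it.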
It should be noted that the division into functional modules of the face attribute recognition system based on face multi-regions provided by the above embodiments is only illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the modules or steps of the embodiments of the present invention may be decomposed or recombined. For example, the modules of the above embodiments may be merged into one module, or further split into multiple sub-modules, so as to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention serve only to distinguish them and are not to be construed as improper limitations of the present invention.
The face attribute recognition method based on face multi-regions of an embodiment of the present invention is based on the above face attribute recognition system based on face multi-regions, and includes the following steps:
Step S10: the input image is processed by the preprocessing unit to obtain the first image and its corresponding face key points;
Step S20: based on the first image and the corresponding face key points, the face feature extraction unit extracts the global face feature based on the global face region, the local face feature based on the local face region, and the fusion face feature based on the global and local face regions;
Step S30: the global face feature, the local face feature and the fusion face feature are input into the prediction unit, where they are spliced to obtain the final classification feature, and attribute recognition is performed by a specific attribute classifier.
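Steps S20–S30 can be sketched end to end as follows. The three branch extractors are stubs standing in for the sub-networks of the embodiment, and the feature size and number of attribute classes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stubs for the three branches of step S20; in the embodiment each is a
# trained sub-network, here each simply returns a 64-dim feature vector.
def global_branch(img):
    return rng.normal(size=64)

def local_branch(img):
    return rng.normal(size=64)

def fusion_branch(img):
    return rng.normal(size=64)

def predict_attribute(img, W):
    """Steps S20-S30: extract the three features, splice them into the
    final classification feature, and classify with softmax."""
    x = np.concatenate([global_branch(img),
                        local_branch(img),
                        fusion_branch(img)])   # step S30: splicing
    logits = W @ x
    logits = logits - logits.max()
    p = np.exp(logits)
    p /= p.sum()
    return int(p.argmax()), p

W = rng.normal(size=(5, 192))   # 5 hypothetical attribute classes, 3*64 dims
label, p = predict_attribute(None, W)
```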
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working process of the method described above and the related explanations may refer to the corresponding description in the foregoing system embodiments, and details are not repeated here.
A storage device of an embodiment of the present invention stores a plurality of programs, the programs being adapted to be loaded and executed by a processor to implement the above face attribute recognition method based on face multi-regions.
A processing device of an embodiment of the present invention includes a processor and a storage device; the processor is adapted to execute each program; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above face attribute recognition method based on face multi-regions.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the storage device and the processing device may refer to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
Those skilled in the art should recognize that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. Programs corresponding to software modules and method steps may be stored in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium well known in the technical field. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in electronic hardware or in software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The terms "first", "second", etc. are used to distinguish similar objects, not to describe or indicate a particular order or precedence.
The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, such that a process, method, article, or device/apparatus comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device/apparatus.
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily appreciate that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the protection scope of the present invention.
Claims (10)
1. A face attribute recognition system based on face multi-regions, characterized in that the system comprises a preprocessing unit, a face feature extraction unit, and a prediction unit;
the preprocessing unit is configured to perform face key point detection on an input image, and, based on the obtained face key point positions, to align and crop the face image, obtaining a first image and its corresponding face key points;
the face feature extraction unit is configured to extract, based on the first image and the corresponding face key points, a global face feature based on the global face region, a local face feature based on the local face region, and a fusion face feature based on the global and local face regions;
the prediction unit is configured to splice the global face feature, the local face feature, and the fusion face feature to obtain a final classification feature, and to perform attribute recognition by a specific attribute classifier.
2. The face attribute recognition system based on face multi-regions according to claim 1, characterized in that the face feature extraction unit comprises a low-level feature extraction module, a multi-region aligned-feature generation module, a multi-region feature extraction module, and a feature fusion module;
the low-level feature extraction module comprises one or more convolutional layers and is configured to extract low-level neural network features from the first image;
the multi-region aligned-feature generation module comprises a plurality of aligned-region pooling layers and is configured to obtain the global face region and the local face regions from the low-level neural network features extracted by the low-level feature extraction module;
the multi-region feature extraction module comprises a global face feature extraction branch, a local face feature extraction branch, and a fusion face feature extraction branch; the global face feature extraction branch is configured to extract the global face feature from the obtained global face region; the local face feature extraction branch is configured to extract the local face feature from the obtained local face regions; the fusion face feature extraction branch is configured to splice the intermediate sub-network features of the global face feature extraction branch and the local face feature extraction branch to obtain the fusion face feature;
the feature fusion module is configured, based on long short-term memory networks, to perform feature fusion separately on the multiple features obtained by each branch of the multi-region feature extraction module, obtaining the fused global face feature, local face feature, and fusion face feature.
3. The face attribute recognition system based on face multi-regions according to claim 2, characterized in that the feature fusion method in the feature fusion module is:
obtaining the hidden states of the long short-term memory network in each branch, and summing the obtained hidden states.
4. The face attribute recognition system based on face multi-regions according to claim 1, characterized in that the prediction unit comprises a feature splicing module and an attribute prediction module;
the feature splicing module is configured to splice the fused global face feature, local face feature, and fusion face feature output by the face feature extraction unit to obtain the final classification feature;
the attribute prediction module is configured to perform attribute recognition on the final classification feature by a specific attribute classifier B.
5. The face attribute recognition system based on face multi-regions according to claim 4, characterized in that the attribute classifier B is an age classifier, and the prediction result of the classifier is
y′i = ∑j j·pj(xi)
where y′i is the predicted attribute class of the i-th sample, and pj(xi) is the probability that the final classification feature xi of the i-th sample belongs to the j-th class of the corresponding attribute.
6. The face attribute recognition system based on face multi-regions according to claim 4, characterized in that the attribute classifier B is a gender classifier or an expression classifier, and the prediction result of the classifier is
y′i = argmaxj pj(xi)
where y′i is the predicted attribute class of the i-th sample, and pj(xi) is the probability that the final classification feature xi of the i-th sample belongs to the j-th class of the corresponding attribute.
7. The face attribute recognition system based on face multi-regions according to any one of claims 1-6, characterized in that the face feature extraction unit and the prediction unit are optimized through training; during training, attribute classifiers A corresponding to the attribute classifier B are added between the multi-region feature extraction module and the feature fusion module of the face feature extraction unit, in order to obtain the class probabilities of the set attribute for each face feature output by the multi-region feature extraction module.
8. The face attribute recognition system based on face multi-regions according to claim 7, characterized in that the loss function L used in training the face feature extraction unit and the prediction unit is
L = -(1/n) ∑i [ log p_{yi}(xi) + ∑k log p_{yi}(xi^{ζk}) ]
where n is the number of training samples, yi is the true attribute label of the i-th sample, φ = {g, l, c} denotes the set of global face features, local face features and fusion face features obtained by the multi-region feature extraction module, k is the number of branches of the multi-region feature extraction module, xi is the final classification feature of the i-th sample, xi^{ζk} is the final classification feature of the i-th sample in branch ζk, p_{yi}(xi) is the probability that the i-th sample is finally classified into age yi, and p_{yi}(xi^{ζk}) is the probability that the i-th sample is classified into age yi in branch ζk.
9. The face attribute recognition system based on face multi-regions according to claim 8, characterized in that the network parameters are updated using the stochastic gradient descent (SGD) algorithm during the training of the face feature extraction unit and the prediction unit.
10. A face attribute recognition method based on face multi-regions, characterized in that, based on the face attribute recognition system based on face multi-regions according to any one of claims 1-9, the method comprises the following steps:
Step S10: the input image is processed by the preprocessing unit to obtain the first image and its corresponding face key points;
Step S20: based on the first image and the corresponding face key points, the face feature extraction unit extracts the global face feature based on the global face region, the local face feature based on the local face region, and the fusion face feature based on the global and local face regions;
Step S30: the global face feature, the local face feature, and the fusion face feature are input into the prediction unit, where they are spliced to obtain the final classification feature, and attribute recognition is performed by a specific attribute classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910210915.8A CN110069994B (en) | 2019-03-18 | 2019-03-18 | Face attribute recognition system and method based on face multiple regions |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910210915.8A CN110069994B (en) | 2019-03-18 | 2019-03-18 | Face attribute recognition system and method based on face multiple regions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110069994A true CN110069994A (en) | 2019-07-30 |
CN110069994B CN110069994B (en) | 2021-03-23 |
Family
ID=67366387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910210915.8A Active CN110069994B (en) | 2019-03-18 | 2019-03-18 | Face attribute recognition system and method based on face multiple regions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110069994B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110443323A (en) * | 2019-08-19 | 2019-11-12 | 电子科技大学 | Appearance appraisal procedure based on shot and long term memory network and face key point |
CN110532965A (en) * | 2019-08-30 | 2019-12-03 | 京东方科技集团股份有限公司 | Age recognition methods, storage medium and electronic equipment |
CN110569779A (en) * | 2019-08-28 | 2019-12-13 | 西北工业大学 | Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning |
CN110738102A (en) * | 2019-09-04 | 2020-01-31 | 暗物质(香港)智能科技有限公司 | face recognition method and system |
CN111191569A (en) * | 2019-12-26 | 2020-05-22 | 深圳市优必选科技股份有限公司 | Face attribute recognition method and related device thereof |
CN111339827A (en) * | 2020-01-18 | 2020-06-26 | 中国海洋大学 | SAR image change detection method based on multi-region convolutional neural network |
CN111814567A (en) * | 2020-06-11 | 2020-10-23 | 上海果通通信科技股份有限公司 | Method, device and equipment for detecting living human face and storage medium |
CN112364827A (en) * | 2020-11-30 | 2021-02-12 | 腾讯科技(深圳)有限公司 | Face recognition method and device, computer equipment and storage medium |
CN112651301A (en) * | 2020-12-08 | 2021-04-13 | 浙江工业大学 | Expression recognition method integrating global and local features of human face |
WO2021127841A1 (en) * | 2019-12-23 | 2021-07-01 | 深圳市欢太科技有限公司 | Property identification method and apparatus, storage medium, and electronic device |
CN113486867A (en) * | 2021-09-07 | 2021-10-08 | 北京世纪好未来教育科技有限公司 | Face micro-expression recognition method and device, electronic equipment and storage medium |
CN113536845A (en) * | 2020-04-16 | 2021-10-22 | 深圳市优必选科技股份有限公司 | Face attribute recognition method and device, storage medium and intelligent equipment |
CN113628183A (en) * | 2021-08-06 | 2021-11-09 | 青岛海信医疗设备股份有限公司 | Volume determination method for ultrasonic detection object and ultrasonic equipment |
CN113963417A (en) * | 2021-11-08 | 2022-01-21 | 盛视科技股份有限公司 | Face attribute recognition method, terminal and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404877A (en) * | 2015-12-08 | 2016-03-16 | 商汤集团有限公司 | Human face attribute prediction method and apparatus based on deep study and multi-task study |
CN106203395A (en) * | 2016-07-26 | 2016-12-07 | 厦门大学 | Face character recognition methods based on the study of the multitask degree of depth |
CN106529402A (en) * | 2016-09-27 | 2017-03-22 | 中国科学院自动化研究所 | Multi-task learning convolutional neural network-based face attribute analysis method |
CN107729835A (en) * | 2017-10-10 | 2018-02-23 | 浙江大学 | A kind of expression recognition method based on face key point region traditional characteristic and face global depth Fusion Features |
CN107766850A (en) * | 2017-11-30 | 2018-03-06 | 电子科技大学 | Based on the face identification method for combining face character information |
CN108229296A (en) * | 2017-09-30 | 2018-06-29 | 深圳市商汤科技有限公司 | The recognition methods of face skin attribute and device, electronic equipment, storage medium |
CN108268814A (en) * | 2016-12-30 | 2018-07-10 | 广东精点数据科技股份有限公司 | A kind of face identification method and device based on the fusion of global and local feature Fuzzy |
CN108596011A (en) * | 2017-12-29 | 2018-09-28 | 中国电子科技集团公司信息科学研究院 | A kind of face character recognition methods and device based on combined depth network |
CN108665506A (en) * | 2018-05-10 | 2018-10-16 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer storage media and server |
CN108921042A (en) * | 2018-06-06 | 2018-11-30 | 四川大学 | A kind of face sequence expression recognition method based on deep learning |
CN109002755A (en) * | 2018-06-04 | 2018-12-14 | 西北大学 | Age estimation model building method and estimation method based on facial image |
CN109190514A (en) * | 2018-08-14 | 2019-01-11 | 电子科技大学 | Face character recognition methods and system based on two-way shot and long term memory network |
US20190034709A1 (en) * | 2017-07-25 | 2019-01-31 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Method and apparatus for expression recognition |
CN109344693A (en) * | 2018-08-13 | 2019-02-15 | 华南理工大学 | A kind of face multizone fusion expression recognition method based on deep learning |
2019-03-18: CN application CN201910210915.8A filed; patent CN110069994B granted (status: active).
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404877A (en) * | 2015-12-08 | 2016-03-16 | 商汤集团有限公司 | Human face attribute prediction method and apparatus based on deep study and multi-task study |
CN106203395A (en) * | 2016-07-26 | 2016-12-07 | 厦门大学 | Face character recognition methods based on the study of the multitask degree of depth |
CN106529402A (en) * | 2016-09-27 | 2017-03-22 | 中国科学院自动化研究所 | Multi-task learning convolutional neural network-based face attribute analysis method |
CN108268814A (en) * | 2016-12-30 | 2018-07-10 | 广东精点数据科技股份有限公司 | A kind of face identification method and device based on the fusion of global and local feature Fuzzy |
US20190034709A1 (en) * | 2017-07-25 | 2019-01-31 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Method and apparatus for expression recognition |
CN108229296A (en) * | 2017-09-30 | 2018-06-29 | 深圳市商汤科技有限公司 | The recognition methods of face skin attribute and device, electronic equipment, storage medium |
CN107729835A (en) * | 2017-10-10 | 2018-02-23 | 浙江大学 | A kind of expression recognition method based on face key point region traditional characteristic and face global depth Fusion Features |
CN107766850A (en) * | 2017-11-30 | 2018-03-06 | 电子科技大学 | Based on the face identification method for combining face character information |
CN108596011A (en) * | 2017-12-29 | 2018-09-28 | 中国电子科技集团公司信息科学研究院 | A kind of face character recognition methods and device based on combined depth network |
CN108665506A (en) * | 2018-05-10 | 2018-10-16 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer storage media and server |
CN109002755A (en) * | 2018-06-04 | 2018-12-14 | 西北大学 | Age estimation model building method and estimation method based on facial image |
CN108921042A (en) * | 2018-06-06 | 2018-11-30 | 四川大学 | A kind of face sequence expression recognition method based on deep learning |
CN109344693A (en) * | 2018-08-13 | 2019-02-15 | 华南理工大学 | A kind of face multizone fusion expression recognition method based on deep learning |
CN109190514A (en) * | 2018-08-14 | 2019-01-11 | 电子科技大学 | Face character recognition methods and system based on two-way shot and long term memory network |
Non-Patent Citations (3)
Title |
---|
HU HAN et al.: "Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach", IEEE Transactions on Pattern Analysis and Machine Intelligence *
KANG YUNFENG et al.: "Research Progress and Application Exploration of Key Technologies for Portrait Attribute Recognition", Police Technology *
LI JIANG: "Research and Implementation of Facial Expression Recognition Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110443323A (en) * | 2019-08-19 | 2019-11-12 | 电子科技大学 | Appearance appraisal procedure based on shot and long term memory network and face key point |
CN110569779A (en) * | 2019-08-28 | 2019-12-13 | 西北工业大学 | Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning |
CN110569779B (en) * | 2019-08-28 | 2022-10-04 | 西北工业大学 | Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning |
US11361587B2 (en) | 2019-08-30 | 2022-06-14 | Boe Technology Group Co., Ltd. | Age recognition method, storage medium and electronic device |
CN110532965A (en) * | 2019-08-30 | 2019-12-03 | 京东方科技集团股份有限公司 | Age recognition methods, storage medium and electronic equipment |
CN110532965B (en) * | 2019-08-30 | 2022-07-26 | 京东方科技集团股份有限公司 | Age identification method, storage medium and electronic device |
CN110738102A (en) * | 2019-09-04 | 2020-01-31 | 暗物质(香港)智能科技有限公司 | face recognition method and system |
CN110738102B (en) * | 2019-09-04 | 2023-05-12 | 暗物智能科技(广州)有限公司 | Facial expression recognition method and system |
WO2021127841A1 (en) * | 2019-12-23 | 2021-07-01 | 深圳市欢太科技有限公司 | Property identification method and apparatus, storage medium, and electronic device |
CN111191569A (en) * | 2019-12-26 | 2020-05-22 | 深圳市优必选科技股份有限公司 | Face attribute recognition method and related device thereof |
CN111339827A (en) * | 2020-01-18 | 2020-06-26 | 中国海洋大学 | SAR image change detection method based on multi-region convolutional neural network |
CN113536845A (en) * | 2020-04-16 | 2021-10-22 | 深圳市优必选科技股份有限公司 | Face attribute recognition method and device, storage medium and intelligent equipment |
CN113536845B (en) * | 2020-04-16 | 2023-12-01 | 深圳市优必选科技股份有限公司 | Face attribute identification method and device, storage medium and intelligent equipment |
CN111814567A (en) * | 2020-06-11 | 2020-10-23 | 上海果通通信科技股份有限公司 | Method, device and equipment for detecting living human face and storage medium |
CN112364827A (en) * | 2020-11-30 | 2021-02-12 | 腾讯科技(深圳)有限公司 | Face recognition method and device, computer equipment and storage medium |
CN112364827B (en) * | 2020-11-30 | 2023-11-10 | 腾讯科技(深圳)有限公司 | Face recognition method, device, computer equipment and storage medium |
CN112651301A (en) * | 2020-12-08 | 2021-04-13 | 浙江工业大学 | Expression recognition method integrating global and local features of human face |
CN113628183A (en) * | 2021-08-06 | 2021-11-09 | 青岛海信医疗设备股份有限公司 | Volume determination method for ultrasonic detection object and ultrasonic equipment |
CN113486867B (en) * | 2021-09-07 | 2021-12-14 | 北京世纪好未来教育科技有限公司 | Face micro-expression recognition method and device, electronic equipment and storage medium |
CN113486867A (en) * | 2021-09-07 | 2021-10-08 | 北京世纪好未来教育科技有限公司 | Face micro-expression recognition method and device, electronic equipment and storage medium |
CN113963417A (en) * | 2021-11-08 | 2022-01-21 | 盛视科技股份有限公司 | Face attribute recognition method, terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110069994B (en) | 2021-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110069994A (en) | Face character identifying system, method based on face multizone | |
Jain et al. | Hybrid deep neural networks for face emotion recognition | |
Tasar et al. | ColorMapGAN: Unsupervised domain adaptation for semantic segmentation using color mapping generative adversarial networks | |
Li et al. | Person search with natural language description | |
CN109033938A (en) | A kind of face identification method based on ga s safety degree Fusion Features | |
CN109614921B (en) | Cell segmentation method based on semi-supervised learning of confrontation generation network | |
CN109934293A (en) | Image-recognizing method, device, medium and obscure perception convolutional neural networks | |
CN112949535B (en) | Face data identity de-identification method based on generative confrontation network | |
CN109344285A (en) | A kind of video map construction and method for digging, equipment towards monitoring | |
CN106909905A (en) | A kind of multi-modal face identification method based on deep learning | |
CN111832511A (en) | Unsupervised pedestrian re-identification method for enhancing sample data | |
CN110147699B (en) | Image recognition method and device and related equipment | |
CN112819065B (en) | Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information | |
CN107735795A (en) | Method and system for social relationships identification | |
CN109033107A (en) | Image search method and device, computer equipment and storage medium | |
Verma et al. | Unsupervised domain adaptation for person re-identification via individual-preserving and environmental-switching cyclic generation | |
CN110457984A (en) | Pedestrian's attribute recognition approach under monitoring scene based on ResNet-50 | |
CN106803084B (en) | Facial feature point positioning method based on end-to-end circulation network | |
CN109886356A (en) | A kind of target tracking method based on three branch's neural networks | |
CN109784196A (en) | Visual information, which is sentenced, knows method, apparatus, equipment and storage medium | |
CN110263822A (en) | A kind of Image emotional semantic analysis method based on multi-task learning mode | |
Liu et al. | Compact feature learning for multi-domain image classification | |
CN110245611A (en) | Image-recognizing method, device, computer equipment and storage medium | |
CN108073851A (en) | A kind of method, apparatus and electronic equipment for capturing gesture identification | |
CN110472495A (en) | A kind of deep learning face identification method based on graphical inference global characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |