CN107066943B - Face detection method and device - Google Patents
Face detection method and device
- Publication number: CN107066943B (application CN201710127367.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature
- candidate
- masked
- feature dictionary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
Abstract
The invention discloses a face detection method and device. The method comprises the steps of: 1) detecting candidate faces in an image to be processed and extracting a candidate feature for each candidate face; 2) projecting each candidate feature into a pre-constructed conventional external feature space or approximate external feature space to obtain a corresponding conventional or approximate embedded feature, where the approximate external feature space is a dictionary of representative features selected from a reference face feature dictionary and a reference non-face feature dictionary; 3) verifying each embedded feature to determine whether the candidate face corresponding to that embedded feature is a real face. The face detection device of the invention comprises a candidate module, an embedding module and a verification module. The invention achieves face detection of higher precision and retains good detection ability under occlusion.
Description
Technical field
The invention belongs to the fields of computer vision and deep learning, and in particular relates to a face detection method and device under occlusion conditions.
Background art
Face detection technology is applied in numerous areas such as camera auto-focusing, human-computer interaction, photo management, city safety monitoring and intelligent driving. At present, because occlusion is ubiquitous in practical open-environment applications of face detection (for example, in dense crowds), detection performance faces serious challenges, so face detection under occlusion conditions remains a problem to be solved. In addition, studying face detection under masked occlusion has important practical significance, for example: detecting masked faces in video surveillance to provide warnings about suspects, or predicting weather conditions from the distribution pattern of detected masked faces.
Traditional face detection methods suffer severe performance degradation under occlusion, because the facial cues of the occluded parts are invalid during detection, so noise is inevitably introduced in the feature extraction process. In short, incomplete and inaccurate features make face detection under masked occlusion a great challenge.
In recent years, some methods in this field have been studied; the prior art first detects face candidates and then performs classification and verification of the candidates. One method trains multiple neural networks to obtain the responses of multiple facial parts in order to detect face candidates, and then trains a further neural network to verify the candidates (see S. Yang, P. Luo, C. C. Loy, and X. Tang. From facial parts responses to face detection: A deep learning approach. In: IEEE ICCV, 2015). Another method confirms face candidates by selecting partial features and computing a comparative loss (see M. Opitz, G. Waltner, G. Poier, H. Possegger, and H. Bischof. Grid loss: Detecting occluded faces. In ECCV, 2016); this method handles face detection under partial occlusion well. The above methods alleviate to a certain extent the face detection problem under severe occlusion (such as masked occlusion), but still fail to solve it fully. When facial parts are occluded, methods that detect face candidates from multiple part responses introduce noise or errors in the parts within the occluded region, which can cause classification and verification errors; under severe occlusion, methods that confirm candidates by computing a loss over selected partial features yield large loss errors, which can cause detection failure.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention provides a face detection method and device. The method detects candidate faces and extracts high-dimensional deep features (i.e., candidate features) with convolutional neural networks, performs feature projection by locally linear embedding to eliminate the feature incompleteness and inaccuracy brought by masked occlusion, and then verifies the candidate faces with a multi-task convolutional neural network (CNN-V), thereby obtaining more accurate face detection performance. The invention also provides a method for constructing an approximate external feature space: the most similar reference faces and the most different reference non-faces are found in an external database and used to construct the approximate external feature space, and the candidate features are corrected by an embedding transformation in this space. The present invention is achieved through the following technical solutions.
A face detection method of the invention comprises the steps of:
1) performing candidate face detection on an image to be detected to obtain candidate face images;
2) performing candidate feature extraction on the candidate face images to obtain candidate features;
3) performing an embedding transformation on the candidate features to obtain conventional embedded features or approximate embedded features; the embedded features can restore facial cues and remove the noise brought by occlusion;
4) verifying the conventional or approximate embedded features by classification and regression to obtain the detection result.
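For illustration, the four claimed steps can be sketched as a single pipeline. The sketch below is not the patent's implementation: the detector, feature extractor, embedding and verifier are stubbed placeholders (all function names are ours), standing in for the CNN-P, CNN-F and CNN-V networks described later.

```python
import numpy as np

def detect_candidates(image):
    """Stand-in for CNN-P (step 1): return candidate boxes (x1, y1, x2, y2)
    with scores. A real implementation would run a small detection network."""
    return [((10, 10, 60, 60), 0.9), ((5, 5, 20, 20), 0.4)]

def extract_feature(image, box):
    """Stand-in for CNN-F (step 2): crop, normalize and return a
    high-dimensional deep feature. Here: a deterministic random vector."""
    x1, y1, x2, y2 = box
    rng = np.random.default_rng((x2 - x1) * (y2 - y1))
    return rng.standard_normal(128)

def embed(feature, dictionary):
    """Stand-in for the embedding transform (step 3): reconstruct the
    feature in the span of the dictionary atoms (one atom per row)."""
    coeffs, *_ = np.linalg.lstsq(dictionary.T, feature, rcond=None)
    return dictionary.T @ coeffs

def verify(embedded):
    """Stand-in for CNN-V (step 4): a toy acceptance rule replacing the
    real classification-and-regression verification."""
    return float(np.linalg.norm(embedded)) > 0.0

def detect_faces(image, dictionary):
    results = []
    for box, score in detect_candidates(image):
        f = extract_feature(image, box)   # step 2
        v = embed(f, dictionary)          # step 3
        if verify(v):                     # step 4
            results.append(box)
    return results
```

The structure is the point: detection feeds extraction, extraction feeds embedding, and only embedded features reach verification.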
Further, the conventional or approximate embedded features are obtained by passing the candidate features through an embedding transformation in a pre-constructed external feature space; the external feature space is either the conventional external feature space or the approximate external feature space.
Further, the embedding transformation is realized by the traditional locally linear embedding (LLE) method or by a fast approximate LLE method. The traditional LLE method uses the conventional external feature space to transform the noisy candidate features into conventional embedded features; the fast approximate LLE method uses the approximate external feature space to transform the noisy candidate features into approximate embedded features.
Further, the approximate external feature space used by the fast approximate LLE method is constructed by the following steps:
a) performing candidate face detection and candidate feature extraction on an annotated reference face data set, judging whether each candidate feature belongs to a face or a non-face, and storing the candidate features in a reference face feature dictionary and a reference non-face feature dictionary respectively;
b) performing candidate face detection and candidate feature extraction on an annotated masked face data set, judging whether each candidate feature belongs to a masked face or a masked non-face, and storing the candidate features in a masked face feature dictionary and a masked non-face feature dictionary respectively;
c) selecting from the reference face feature dictionary a representative reference face feature dictionary that represents the masked face feature dictionary;
d) selecting from the reference non-face feature dictionary a representative reference non-face feature dictionary that represents the masked non-face feature dictionary;
e) merging the representative reference face feature dictionary and the representative reference non-face feature dictionary to obtain the approximate external feature space.
Further, in step a) the judgment is made by computing the overlap between the candidate face position corresponding to the candidate feature and the annotated face position, measured as intersection-over-union (IoU): a candidate feature whose IoU is greater than 0.7 is judged a reference face feature, and one whose IoU is less than 0.3 is judged a reference non-face feature.
Further, in step b) the judgment is likewise made by computing the IoU between the candidate face position corresponding to the candidate feature and the annotated face position: a candidate feature whose IoU is greater than 0.6 is judged a masked face feature, and one whose IoU is less than 0.4 is judged a masked non-face feature.
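The IoU tests in steps a) and b) differ only in their thresholds, so a single helper can serve both. A minimal sketch (box format and function names are our own, not the patent's):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def label_reference(candidate, annotated, hi=0.7, lo=0.3):
    """Label a candidate box against an annotated face box with the
    thresholds of step a). Candidates whose IoU falls between lo and hi
    are ignored (None)."""
    o = iou(candidate, annotated)
    if o > hi:
        return "face"
    if o < lo:
        return "non-face"
    return None
```

For step b), the same helper is called with hi=0.6 and lo=0.4.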
Further, in step c) a greedy algorithm selects the representative reference face feature dictionary from the reference face feature dictionary. The greedy algorithm computes a loss for each reference face feature in the reference face feature dictionary, sorts the reference face features in ascending order of loss, and takes the features at the front of the list to represent the masked face features; the loss of a reference face feature is its distance to the nearest-neighbor feature in the masked face feature dictionary minus its distance to the nearest-neighbor feature in the masked non-face feature dictionary.
Further, in step d) a greedy algorithm selects the representative reference non-face feature dictionary from the reference non-face feature dictionary. The greedy algorithm computes a loss for each reference non-face feature in the reference non-face feature dictionary, sorts them in ascending order of loss, and takes the features at the front of the list to represent the masked non-face features; the loss of a reference non-face feature is its distance to the nearest-neighbor feature in the masked non-face feature dictionary minus its distance to the nearest-neighbor feature in the masked face feature dictionary.
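The losses in steps c) and d) are both differences of nearest-neighbor distances, with the roles of the two masked dictionaries swapped. A sketch under the assumption of Euclidean distance (array layout and names are ours; each dictionary holds one feature per row):

```python
import numpy as np

def nn_distance(x, dictionary):
    """Distance from feature x to its nearest neighbor among the rows
    of the dictionary."""
    return np.min(np.linalg.norm(dictionary - x, axis=1))

def greedy_losses(reference, masked_pos, masked_neg):
    """Loss of each reference feature: distance to the nearest feature
    of the set being represented (masked_pos) minus distance to the
    nearest feature of the opposite set (masked_neg). Small loss means
    close to the represented set and far from the opposite one."""
    return np.array([nn_distance(r, masked_pos) - nn_distance(r, masked_neg)
                     for r in reference])

def select_representative(reference, masked_pos, masked_neg, m):
    """Take the m reference features with the smallest loss
    (ascending-order list, front of the list first)."""
    order = np.argsort(greedy_losses(reference, masked_pos, masked_neg))
    return reference[order[:m]]
```

For step c), masked_pos is the masked face feature dictionary and masked_neg the masked non-face one; for step d) the two are swapped.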
The invention further relates to a face detection device comprising a candidate module, an embedding module and a verification module. The candidate module performs candidate face detection on the image to be detected and extracts candidate features. The embedding module performs the embedding transformation on the candidate features to obtain conventional or approximate embedded features, which can restore facial cues and remove the noise brought by occlusion. The verification module verifies the conventional or approximate embedded features by classification and regression to obtain the final detection result. The candidate module yields multiple candidate features; in the embedding module these undergo an embedding transformation in a pre-constructed external feature space (the conventional external feature space or the approximate external feature space), producing conventional or approximate embedded features; the embedding transformation is realized by the traditional LLE method or the fast approximate LLE method.
The beneficial effects of the present invention are:
For face detection under occlusion conditions, and especially under severe masked occlusion, the detection method and device of the invention perform well; for faces without occlusion, the face detection method and device of the invention also retain good processing ability.
Description of the drawings
Fig. 1 is a flow chart of a face detection method of the present invention;
Fig. 2 is a schematic flow diagram of the candidate module of the device of the invention;
Fig. 3 is a schematic flow diagram of the embedding module of the device of the invention;
Fig. 4 is a schematic flow diagram of the verification module of the device of the invention;
Fig. 5 is a schematic flow diagram of the construction of the approximate external feature space of the invention.
Specific embodiments
To make the above scheme and the beneficial effects of the invention clearer and more comprehensible, they are described in detail below by way of embodiments and with reference to the accompanying drawings.
The present invention provides a face detection method and device; the device comprises a candidate module, an embedding module and a verification module. The flow chart of the method is shown in Fig. 1, and its steps are:
1) An image is received. The image may be a face image under occlusion, a face image under severe masked occlusion, a face image without occlusion, or an image containing no face at all.
2) The candidate module detects candidate faces and extracts their high-dimensional deep features, i.e., the candidate features. The candidate module first performs candidate face detection and then judges whether a candidate face was detected; if not, the process ends; if so, candidate feature extraction is performed to obtain the candidate features.
Referring to Fig. 2, the candidate module mainly comprises two convolutional neural networks: a small one (the candidate convolutional neural network, abbreviated CNN-P) that realizes candidate face detection, and a large one (the feature convolutional neural network, abbreviated CNN-F) that realizes candidate feature extraction. The received image first passes through the candidate convolutional neural network for candidate face detection; if no candidate face is detected, the process ends. If candidate faces are detected, they are first normalized and then passed through the feature convolutional neural network for candidate feature extraction, yielding the candidate features.
3) The embedding module performs the embedding of each candidate feature to obtain the transformed feature, i.e., the conventional embedded feature or the approximate embedded feature (collectively, the embedded feature).
Since masked occlusion causes missing facial cues and feature noise, the resulting features are incomplete and inaccurate. To address this problem, the embedding module of the present technical solution restores facial cues from the candidate features and removes the noise. The advantage of the embedding processing is that the obtained embedded features characterize masked, occluded faces well, which improves detection precision.
Referring to Fig. 3, in the embedding module the candidate features undergo an embedding transformation in a pre-constructed external feature space, producing conventional or approximate embedded features. The embedding transformation is mainly realized by LLE (locally linear embedding). LLE is a dimensionality-reduction method for nonlinear data; the resulting low-dimensional data preserves the original topological relations, and LLE has been widely used in image classification and clustering, multidimensional data visualization, bioinformatics and other fields. The present invention realizes the embedding transformation with both the traditional LLE method and a fast approximate LLE method.
4) The verification module verifies the conventional or approximate embedded features, judging whether the candidate face corresponding to each embedded feature is a real face. If the candidate face corresponding to an embedded feature is a real face, the face information is recorded; if not, the process ends for that candidate.
Referring to Fig. 4, the verification module consists of a four-layer fully connected convolutional neural network (the verification convolutional neural network, abbreviated CNN-V) that performs feature verification, i.e., it discriminates whether the candidate face corresponding to a conventional or approximate embedded feature is a real face and corrects the position and scale of the candidate face. If the candidate is not a real face, the embedded feature and its candidate face are ignored; if it is a real face, the corrected candidate face position and scale corresponding to the embedded feature are added to the detection result.
The verification module thus classifies and regresses the conventional or approximate embedded features to decide whether each candidate is a real face or a non-face, and refines the face positions and scales, thereby achieving face detection performance of higher precision.
Therefore, the face detection method and device proposed by the present invention combine the candidate convolutional neural network CNN-P and the feature convolutional neural network CNN-F of the candidate module, the embedding module, and the verification convolutional neural network CNN-V of the verification module, so as to achieve the purpose of the invention.
The methods used by the embedding transformation of the embedding module are detailed below.
1. The traditional LLE method.
Referring to Fig. 3, the traditional LLE method projects each masked, occluded candidate feature x_i in the pre-constructed conventional external feature space to obtain an embedded feature v_i. The embedded feature v_i effectively eliminates the feature incompleteness and inaccuracy brought by masked occlusion, and therefore resists occlusion well. The subscript i marks the different candidate features x_i and their embedded features v_i; v_i is called the conventional embedded feature.
The conventional external feature space is composed of reference face features and reference non-face features and is expressed as a feature dictionary D = [D+, D-], where D+ is the reference face feature dictionary and D- is the reference non-face feature dictionary; usually D+ and D- each contain features on the scale of millions.
The reference face features and reference non-face features are obtained by building a reference candidate feature set. Specifically, the candidate module performs candidate face detection and candidate feature extraction on a large annotated unoccluded reference face data set S_n. Each candidate feature is judged to be a face feature or a non-face feature, and the candidate features are accordingly divided into reference face features and reference non-face features and stored in the reference face feature dictionary D+ and the reference non-face feature dictionary D-. The judgment is made by computing the overlap between the candidate face position corresponding to the candidate feature and the annotated face position, measured as intersection-over-union (IoU). Generally, conventional methods judge a candidate a face when the IoU is greater than 0.5 and a non-face when it is less than 0.5. In contrast, the present invention judges a candidate a reference face when the IoU is greater than 0.7 and a reference non-face when it is less than 0.3, so that the obtained reference faces and reference non-faces are better separated, ensuring that the reference candidate features are more discriminative.
For each noisy candidate feature x_i, the features closest to x_i are selected from D+ and D- to form a feature sub-dictionary D_i (the subscript i marks the feature sub-dictionary corresponding to candidate feature x_i), and the LLE algorithm then performs the projection, yielding a new feature representation, the conventional embedded feature v_i. The process is solved by the following formula (1):

v_i = argmin_{v >= 0} ||x_i - D_i v||^2   (1)
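Formula (1) is a nonnegative least-squares problem and can be solved, for example, by projected gradient descent. The sketch below is one possible solver, not the patent's implementation; the step size and iteration count are our own choices:

```python
import numpy as np

def embed_nonneg(x, D, n_iter=500):
    """Solve v = argmin_{v >= 0} ||x - D v||^2 by projected gradient
    descent. D holds one dictionary atom per column."""
    v = np.zeros(D.shape[1])
    # Step size 1/L, where L = ||D||_2^2 is the gradient's Lipschitz constant.
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        grad = D.T @ (D @ v - x)
        v = np.maximum(v - step * grad, 0.0)  # gradient step, then project onto v >= 0
    return v
```

The approximate embedding of formula (2) can reuse the same solver, simply passing the fixed approximate dictionary in place of the per-candidate sub-dictionary D_i.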
2. The fast approximate LLE method.
The present invention proposes a fast approximate LLE method. Each noisy candidate feature x_i is projected with the fast approximate LLE method to obtain an approximate embedded feature ṽ_i. The method is solved by the following formula (2):

ṽ_i = argmin_{v >= 0} ||x_i - D̃ v||^2   (2)

In formula (2), D̃ is the approximate external feature space, a dictionary composed of representative features selected from the reference face feature dictionary D+ and the reference non-face feature dictionary D-. A feature sub-dictionary D_i no longer needs to be constructed for each candidate feature x_i; every candidate feature x_i is projected with the same fixed approximate external feature space D̃, yielding its approximate embedded feature ṽ_i.
The construction of the approximate external feature space used by the fast approximate LLE method is detailed below.
The approximate external feature space is constructed by finding, in an external database, the most similar reference faces and the most different reference non-faces.
Referring to Fig. 5, which shows the flow of the construction of the approximate external feature space D̃: D̃ is composed of the most representative features selected from D+ and D-, namely a representative reference face feature dictionary D̃+ and a representative reference non-face feature dictionary D̃-, expressed as D̃ = [D̃+, D̃-]. The construction method of the approximate external feature space D̃ proposed by the present invention comprises the following steps:
1) Building the reference face and reference non-face feature dictionaries. As in the traditional LLE method above, the candidate module performs candidate face detection and candidate feature extraction on the large annotated unoccluded reference face data set S_n. According to whether each candidate feature belongs to a face or a non-face, the candidate features are stored in the reference face feature dictionary D+ or the reference non-face feature dictionary D-. The judgment is made by computing the IoU between the candidate face position corresponding to the candidate feature and the annotated face position. Generally, conventional methods judge a candidate a face when the IoU is greater than 0.5 and a non-face when it is less than 0.5; in contrast, the present invention judges a candidate a reference face when the IoU is greater than 0.7 and a reference non-face when it is less than 0.3, so that the reference faces and reference non-faces are better separated, ensuring that the reference candidate features are more discriminative.
2) Building the masked face and masked non-face feature dictionaries. Similarly to step 1), the candidate module performs candidate face detection and candidate feature extraction on the large annotated masked face data set S_m. According to whether each candidate feature belongs to a masked face or a masked non-face, the candidate features are divided into the masked face feature dictionary D_m+ and the masked non-face feature dictionary D_m-. Since the localization accuracy of masked face detection is generally lower than that of unoccluded face detection, the present invention judges a candidate a masked face when the IoU is greater than 0.6 and a masked non-face when it is less than 0.4, so as to select masked face candidate features of better quality.
3) Selecting the representative reference face feature dictionary D̃+. D̃+ is selected from the reference face feature dictionary D+ and is a subset of D+. Representativeness means that D̃+ has good characterization ability when representing masked faces and, at the same time, discriminative ability when representing masked non-faces. Thus, when D̃+ sparsely represents the masked face feature dictionary D_m+ the error should be smallest, while when it sparsely represents the masked non-face feature dictionary D_m- the error should be largest. Therefore D̃+ is obtained by solving the following formula (3):

D̃+ = argmin_{D ⊆ D+} Σ_{x1 ∈ D_m+} min_{α1} ||x1 - D α1||^2 - Σ_{x2 ∈ D_m-} min_{α2} ||x2 - D α2||^2   (3)

subject to each of α1 and α2 having exactly one element equal to 1 and all other elements equal to 0.

Formula (3) is a sparse coding problem: α1 and α2 are the sparse coefficient vectors needed when D̃+ represents a masked face feature x1 and a masked non-face feature x2 respectively. Exactly one element of each sparse coefficient vector is 1 and the other elements are 0; under this constraint, the sparse coding reduces to nearest-neighbor search in D̃+. Because every feature in D̃+ comes from the reference face feature dictionary D+, the optimization problem of formula (3) differs from classical sparse coding and is difficult to solve with classical optimization algorithms. The present invention therefore proposes a greedy method to build D̃+ from the reference face feature dictionary D+ effectively. In the proposed greedy method, a loss L(d_j+) is first computed for each reference face feature d_j+ in D+: the loss is the distance from d_j+ to its nearest-neighbor feature in the masked face feature dictionary D_m+ minus the distance from d_j+ to its nearest-neighbor feature in the masked non-face feature dictionary D_m-, realized by the following formula (4):

L(d_j+) = ρ1 ||d_j+ - NN_{D_m+}(d_j+)|| - ρ2 ||d_j+ - NN_{D_m-}(d_j+)||   (4)

where NN_D(·) denotes the nearest-neighbor feature in dictionary D. In formula (4), ρ1 and ρ2 are two balance coefficients for balancing the distances between features; in practice both are usually set to 1 to speed up the computation, and each reference face feature d_j+ is sparsely used to represent the masked face features in D_m+ and the masked non-face features in D_m-. By computing the losses L(d_j+), a list of reference face features sorted in ascending order of loss is obtained; the features at the front of the list are strongest at representing masked face features and weakest at representing masked non-face features. In this way, the first M features of the list can be iteratively added to a feature pool P+, building the final D̃+. Preferably, M is greater than or equal to 1 and less than or equal to 50. Specifically, the initial feature pool is empty, P+(0) = ∅; at step t the current D̃+ is used to select the top M candidates, obtaining P+(t); the features in P+(t) are then used to update D̃+, which is subsequently used in solving the objective function of formula (3).
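The iterative pool construction can be sketched as follows. This is a simplified reading of the greedy procedure with ρ1 = ρ2 = 1 and Euclidean distance; all names are ours, and the re-solving of formula (3) between rounds is omitted:

```python
import numpy as np

def loss(d, masked_face, masked_nonface, rho1=1.0, rho2=1.0):
    """Formula (4): nearest-neighbor distance to the masked face
    dictionary minus nearest-neighbor distance to the masked non-face
    dictionary (one feature per row in each dictionary)."""
    return (rho1 * np.min(np.linalg.norm(masked_face - d, axis=1))
            - rho2 * np.min(np.linalg.norm(masked_nonface - d, axis=1)))

def build_pool(ref_face, masked_face, masked_nonface, m=2, n_rounds=3):
    """Greedily grow the feature pool P+: at each round, add the m
    not-yet-pooled reference face features with the smallest loss."""
    pool_idx = []
    for _ in range(n_rounds):
        remaining = [j for j in range(len(ref_face)) if j not in pool_idx]
        remaining.sort(key=lambda j: loss(ref_face[j], masked_face, masked_nonface))
        pool_idx.extend(remaining[:m])
    return ref_face[pool_idx]
```

The symmetric construction of step 4) follows by swapping the two masked dictionaries.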
4) it selects representative with reference to non-face characteristics dictionary From with reference to non-face characteristics dictionary D-Middle choosing
It selects, is D-A subset beRepresentativeness show it when representing masked non-face with good characterization energy
Power has separating capacity when representing masked face simultaneously.To,Sparsely representing masked non-face characteristics dictionaryWhen
There should be the smallest mistake, while sparsely represent masked face characteristics dictionaryThe maximum mistake of Shi Yingyou.Therefore,Energy
It is enough to be obtained by solving following equation (5):
Meet
The above formula (5) is a sparse coding problem, in which α1 and α2 are the sparse coefficient vectors needed to represent a masked face feature x1 and a masked non-face feature x2, respectively, with the representative dictionary. Only one element of each sparse coefficient vector is 1, and all other elements are 0. With this constraint on the sparse coefficient vectors, the sparse coding reduces to a nearest-neighbor search in the representative dictionary. Since every feature in the representative dictionary comes from the reference non-face feature dictionary D-, the optimization problem of formula (5) differs from classical sparse coding and is difficult to solve with classical optimization algorithms. The present invention therefore proposes a greedy method to efficiently build the representative dictionary from the reference non-face feature dictionary D-. In the proposed greedy method, a loss is first computed for each reference non-face feature in D-. The loss is expressed as the difference between the distance from the feature to its nearest neighbor in the masked non-face feature dictionary and the distance from the feature to its nearest neighbor in the masked face feature dictionary, realized by formula (6), which is subject to its own sparse-coefficient constraints.
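As an illustrative sketch (function names are placeholders; features are assumed to be rows of numpy arrays), the loss of formula (6) reduces, under the one-hot sparse-coefficient constraint, to a difference of nearest-neighbour distances:

```python
import numpy as np

def nn_distance(f, D):
    # Distance from feature f to its nearest neighbour among the rows of D
    return np.min(np.linalg.norm(D - f, axis=1))

def selection_loss(f, masked_nonface_dict, masked_face_dict, rho1=1.0, rho2=1.0):
    """Formula (6) as prose describes it: the loss is small when f lies close to
    some masked non-face feature (good at representing them) and far from every
    masked face feature (good at discriminating them).  rho1 and rho2 balance
    the two distances and are usually set to 1 in practice."""
    return rho1 * nn_distance(f, masked_nonface_dict) - rho2 * nn_distance(f, masked_face_dict)
```

A feature near the masked non-face cluster thus scores a lower (better) loss than one near the masked face cluster, which is exactly the ranking the greedy method sorts by.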
In formula (6), ρ1 and ρ2 are two balance coefficients used to balance the distances between features; in actual processing they are usually set to 1 to speed up computation. Each reference non-face feature is sparsely used to represent the masked face features and the masked non-face features of the respective dictionaries. By computing the losses, a list of reference non-face features sorted in ascending order of loss is obtained; the reference non-face features at the front of the list are strongest at representing masked non-face features and weakest at representing masked face features. In this way, the top M reference non-face features in the list can be iteratively added to a feature pool P-, constructing the final representative reference non-face feature dictionary. Preferably, M is greater than or equal to 1 and less than or equal to 50. Specifically, the initial feature pool is set to be empty; then, at step t, the dictionary obtained from the previous step is used to select the top M candidates and add them to the pool, and the pooled features are used to update the dictionary, which is subsequently used to solve the objective function in formula (5).
5) Merging the dictionaries to obtain the approximate external feature space.
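Step 5 is a plain union of the two representative dictionaries. Assuming features are stored as rows of numpy arrays, a minimal sketch:

```python
import numpy as np

def merge_dictionaries(rep_face, rep_nonface):
    """Step 5: form the approximate external feature space as the row-wise
    union of the representative reference face features and the representative
    reference non-face features."""
    assert rep_face.shape[1] == rep_nonface.shape[1], "feature dimensions must match"
    return np.vstack([rep_face, rep_nonface])
```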
Among the above steps, steps 1) and 2) have no strict order and can be performed sequentially or in parallel; likewise, steps 3) and 4) have no strict order and can be performed sequentially or in parallel. Through the above steps, the approximate external feature space is constructed. It is composed of the most representative features selected from a large number of reference face features and reference non-face features, where the selection strategy compares them against a large number of masked face features and masked non-face features. The features it contains can represent masked face features well while also discriminating masked non-face features, so the embedding features obtained by embedding candidate features into the approximate external feature space characterize masked faces well. On the other hand, compared with the traditional LLE method, the approximate external feature space constructed by the proposed fast approximate LLE method is larger than the local feature space Di corresponding to each candidate feature xi. Hence, after projective transformation of each candidate feature xi, the resulting approximate embedding feature has a higher dimension than the traditional embedding feature vi obtained by the traditional LLE method, which compensates to a certain extent for the feature-representation loss introduced by the fast approximation. Using the approximate external feature space constructed by the proposed fast approximate LLE method therefore has little effect on detection accuracy in masked face detection.
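The embedding itself can be sketched with the standard LLE local weight solve. This is an illustrative reconstruction, not the patent's exact procedure: the neighbour count k, the regularizer, and laying the weight vector out over the whole external space are assumptions.

```python
import numpy as np

def embed(x, ext_space, k=5, reg=1e-3):
    """Reconstruct candidate feature x from its k nearest neighbours in the
    external feature space with weights constrained to sum to 1 (the standard
    LLE local fit).  The embedding is the weight vector laid out over the whole
    space, so a larger external space yields a higher-dimensional embedding."""
    dists = np.linalg.norm(ext_space - x, axis=1)
    nbrs = np.argsort(dists)[:k]                  # indices of the k nearest rows
    Z = ext_space[nbrs] - x                       # centre the neighbours at x
    C = Z @ Z.T                                   # local Gram matrix
    C += reg * np.trace(C) * np.eye(k)            # regularise for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                  # enforce the sum-to-one constraint
    v = np.zeros(ext_space.shape[0])
    v[nbrs] = w                                   # embed over the full external space
    return v
```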
By examining examples of the representative reference face images and representative reference non-face images corresponding to the approximate external feature space, it can be seen that the selected representative reference face images cover different appearances, accessories, skin tones, expressions, and so on, and can therefore represent masked faces well while also discriminating masked non-faces. The selected representative reference non-face images are mostly texture regions, incomplete faces, or faces containing much background, and can therefore represent masked non-faces well while also discriminating masked faces.
The above embodiments are merely illustrative of the technical solution of the present invention rather than limiting. Those of ordinary skill in the art may modify the technical solution of the present invention or make equivalent replacements without departing from the spirit and scope of the present invention, and the protection scope of the present invention shall be subject to the claims.
Claims (7)
1. A face detection method, the steps comprising:
1) performing candidate face detection on an image to be detected to obtain candidate face images;
2) performing candidate feature extraction on the candidate face images to obtain candidate features;
3) performing an embedding transformation on the candidate features to obtain conventional embedding features or approximate embedding features;
4) verifying the conventional embedding features or approximate embedding features by classification and regression algorithms to obtain a detection result;
wherein, in step 3), the candidate features undergo the embedding transformation through a pre-built external feature space to obtain the conventional embedding features or approximate embedding features; the external feature space is a conventional external feature space or an approximate external feature space; the embedding transformation is realized using a traditional locally linear embedding method or a fast approximate locally linear embedding method; the traditional locally linear embedding method performs the embedding transformation on the noisy candidate features using the conventional external feature space to obtain conventional embedding features; the fast approximate locally linear embedding method performs the embedding transformation on the noisy candidate features using the approximate external feature space to obtain approximate embedding features;
the method for constructing the approximate external feature space in the fast approximate locally linear embedding method comprises the following steps:
a) performing candidate face detection and candidate feature extraction on a labeled reference face data set, judging whether each candidate feature is a face feature or a non-face feature, and storing these candidate features in a reference face feature dictionary and a reference non-face feature dictionary, respectively;
b) performing candidate face detection and candidate feature extraction on a labeled masked face data set, judging whether each candidate feature is a masked face feature or a masked non-face feature, and storing these candidate features in a masked face feature dictionary and a masked non-face feature dictionary, respectively;
c) selecting, from the above reference face feature dictionary, a representative reference face feature dictionary capable of representing the above masked face feature dictionary;
d) selecting, from the above reference non-face feature dictionary, a representative reference non-face feature dictionary capable of representing the above masked non-face feature dictionary;
e) merging the above representative reference face feature dictionary and the representative reference non-face feature dictionary to obtain the approximate external feature space.
2. The method according to claim 1, wherein in step a) the candidate feature is judged by calculating the overlap between the face position corresponding to the candidate feature and the labeled face position, the overlap being measured by intersection over union (IoU); a candidate feature with an IoU greater than 0.7 is judged to be a reference face feature, and a candidate feature with an IoU less than 0.3 is judged to be a reference non-face feature.
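The IoU thresholding of this claim can be sketched as follows (boxes given as (x1, y1, x2, y2); function names are illustrative, and the treatment of candidates falling between the two thresholds as unlabeled is an assumption):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def label_candidate(candidate_box, labeled_box, hi=0.7, lo=0.3):
    """Claim 2 thresholds: IoU > hi -> reference face, IoU < lo -> reference
    non-face; candidates in between are treated here as ambiguous."""
    v = iou(candidate_box, labeled_box)
    if v > hi:
        return "face"
    if v < lo:
        return "non-face"
    return None
```

With hi=0.6 and lo=0.4 the same functions cover the masked-face thresholds of claim 3.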
3. The method according to claim 1, wherein in step b) the candidate feature is judged by calculating the overlap between the face position corresponding to the candidate feature and the labeled face position, the overlap being measured by intersection over union (IoU); a candidate feature with an IoU greater than 0.6 is judged to be a masked face feature, and a candidate feature with an IoU less than 0.4 is judged to be a masked non-face feature.
4. The method according to claim 1, wherein in step c) a greedy algorithm is used to select the representative reference face feature dictionary from the reference face feature dictionary; the greedy algorithm computes a loss for each reference face feature in the reference face feature dictionary, obtains a list of reference face features sorted in ascending order of loss, and takes the reference face features at the front of the list to represent the masked face features; wherein the loss is the difference between the distance from each reference face feature to its nearest neighbor in the masked face feature dictionary and the distance from that reference face feature to its nearest neighbor in the masked non-face feature dictionary.
5. The method according to claim 1, wherein in step d) a greedy algorithm is used to select the representative reference non-face feature dictionary from the reference non-face feature dictionary; the greedy algorithm computes a loss for each reference non-face feature in the reference non-face feature dictionary, obtains a list of reference non-face features sorted in ascending order of loss, and takes the reference non-face features at the front of the list to represent the masked non-face features; wherein the loss is the difference between the distance from each reference non-face feature to its nearest neighbor in the masked non-face feature dictionary and the distance from that reference non-face feature to its nearest neighbor in the masked face feature dictionary.
6. A face detection device, comprising a candidate module, an embedding module, and a verification module;
the candidate module is used to perform candidate face detection on an image to be detected and to extract candidate features;
the embedding module is used to perform an embedding transformation on the candidate features to obtain conventional embedding features or approximate embedding features;
the verification module is used to verify the above conventional embedding features or approximate embedding features by classification and regression algorithms, so as to obtain the final detection result;
wherein the embedding module performs the embedding transformation on the candidate features through a pre-built external feature space to obtain the conventional embedding features or approximate embedding features; the external feature space is a conventional external feature space or an approximate external feature space; the embedding transformation is realized using a traditional locally linear embedding method or a fast approximate locally linear embedding method; the traditional locally linear embedding method performs the embedding transformation on the noisy candidate features using the conventional external feature space to obtain conventional embedding features; the fast approximate locally linear embedding method performs the embedding transformation on the noisy candidate features using the approximate external feature space to obtain approximate embedding features;
the method for constructing the approximate external feature space in the fast approximate locally linear embedding method comprises the following steps:
a) performing candidate face detection and candidate feature extraction on a labeled reference face data set, judging whether each candidate feature is a face feature or a non-face feature, and storing these candidate features in a reference face feature dictionary and a reference non-face feature dictionary, respectively;
b) performing candidate face detection and candidate feature extraction on a labeled masked face data set, judging whether each candidate feature is a masked face feature or a masked non-face feature, and storing these candidate features in a masked face feature dictionary and a masked non-face feature dictionary, respectively;
c) selecting, from the above reference face feature dictionary, a representative reference face feature dictionary capable of representing the above masked face feature dictionary;
d) selecting, from the above reference non-face feature dictionary, a representative reference non-face feature dictionary capable of representing the above masked non-face feature dictionary;
e) merging the above representative reference face feature dictionary and the representative reference non-face feature dictionary to obtain the approximate external feature space.
7. The device according to claim 6, wherein the candidate module obtains a plurality of candidate features, which then undergo the embedding transformation in the embedding module through a pre-built external feature space to obtain conventional embedding features or approximate embedding features; the external feature space is a conventional external feature space or an approximate external feature space; the embedding transformation is realized using a traditional locally linear embedding method or a fast approximate locally linear embedding method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710127367.3A CN107066943B (en) | 2017-03-06 | 2017-03-06 | A kind of method for detecting human face and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107066943A CN107066943A (en) | 2017-08-18 |
CN107066943B true CN107066943B (en) | 2019-10-25 |
Family
ID=59622056
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108563982B (en) * | 2018-01-05 | 2020-01-17 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting image |
CN110363126A (en) * | 2019-07-04 | 2019-10-22 | 杭州视洞科技有限公司 | A kind of plurality of human faces real-time tracking and out of kilter method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101202845B (en) * | 2007-11-14 | 2011-05-18 | 北京大学 | Method for changing infrared image into visible light image and device |
JP4791598B2 (en) * | 2008-09-17 | 2011-10-12 | 富士通株式会社 | Image processing apparatus and image processing method |
CN101393608A (en) * | 2008-11-04 | 2009-03-25 | 清华大学 | Visual object recognition method and apparatus based on manifold distance analysis |
CN101493885B (en) * | 2009-02-27 | 2012-01-04 | 中国人民解放军空军工程大学 | Embedded human face characteristic extracting method based on nuclear neighborhood protection |
CN104978549B (en) * | 2014-04-03 | 2019-04-02 | 北京邮电大学 | Three-dimensional face images feature extracting method and system |
US10410749B2 (en) * | 2014-10-21 | 2019-09-10 | uBiome, Inc. | Method and system for microbiome-derived characterization, diagnostics and therapeutics for cutaneous conditions |
CN105469063B (en) * | 2015-12-04 | 2019-03-05 | 苏州大学 | The facial image principal component feature extracting method and identification device of robust |
CN106056553B (en) * | 2016-05-31 | 2021-02-26 | 李炎然 | Image restoration method based on tight frame feature dictionary |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||