CN106372603A - Shielding face identification method and shielding face identification device - Google Patents
- Publication number
- CN106372603A (application number CN201610793013.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- training sample
- expression
- sample set
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to an occluded face recognition method and device. The method comprises: acquiring a target occluded face image; preprocessing the target occluded face image to obtain its sample data, which serves as the test sample data; determining an estimate of a first representation coefficient of the test sample data over a training sample set; determining an occlusion mask and a recovery mask from the training sample set, the test sample data, and the estimate of the first representation coefficient; determining unoccluded estimated data for the test sample data from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask; determining an estimate of a second representation coefficient of the unoccluded estimated data over the training sample set; constructing an identity recognition model from the training sample set and the estimate of the second representation coefficient; and inputting the unoccluded estimated data into the identity recognition model to obtain the identity information of the target occluded face image. With this method, an occluded face can be recognized simply, efficiently, and accurately.
Description
Technical field
The present disclosure relates to the field of face recognition, and in particular to an occluded face recognition method and device.
Background
Face recognition is an important component of the biometric recognition field. It has certain advantages in universality, uniqueness, and ease of collection, has high research value and broad market prospects, and has gradually become one of the most representative and challenging research topics in pattern recognition. After years of research, face recognition technology has achieved rich results, but these results were mainly obtained under strictly controlled laboratory conditions. With the application and popularization of digital cameras, smartphones, and intelligent monitoring systems, face recognition in real-life, unconstrained scenes is increasingly becoming a research focus.
In the published literature, most research on face recognition focuses on illumination, expression, and pose variation, while the occlusion problem has been studied less. Yet partial occlusion of the face is ubiquitous in real-life scenes and is an important problem in unconstrained face recognition, so handling occlusion in face recognition is drawing increasing attention from researchers. Occlusions present in face images collected in practice, such as glasses, scarves, or other large-area interfering noise, make the facial information incomplete and increase the difficulty of recognition; moreover, occlusions vary in type, position, and size, which makes it hard to model the occluded region effectively. Published face recognition methods for occlusion conditions can be roughly divided into two classes: methods based on local analysis and methods based on statistical analysis. The basic idea of local analysis is to detect the occluded region of the face image, reduce its weight in the classification decision, and increase the weight of the unoccluded region; for example, the occluded face may be divided into different regions and the matching results of the parts merged by a preset voting rule to reach the final recognition decision. Such methods depend on how well the occluding object is detected in the face image, and the partitioning strategy ignores the internal relations between regions, so they have limitations. Methods based on statistical analysis, in contrast, define similarity measures that capture significant local similarity and exclude unreliable or occluded features as far as possible. Another statistical idea is to exploit the statistical information between face samples and, through a learning mechanism, reconstruct the occluded face sample from existing unoccluded samples.
Summary of the invention
The purpose of the disclosure is to address the difficulty of recognizing faces under occlusion conditions by providing an occluded face recognition method and device.
To achieve this goal, the disclosure provides an occluded face recognition method, comprising:
acquiring a target occluded face image;
preprocessing the target occluded face image to obtain sample data of the target occluded face image, this sample data serving as test sample data;
determining an estimate of a first representation coefficient of the test sample data over a training sample set, wherein the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images;
determining an occlusion mask and a recovery mask from the training sample set, the test sample data, and the estimate of the first representation coefficient;
determining unoccluded estimated data for the test sample data from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask;
determining an estimate of a second representation coefficient of the unoccluded estimated data over the training sample set;
constructing an identity recognition model from the training sample set and the estimate of the second representation coefficient;
inputting the unoccluded estimated data into the identity recognition model to obtain the identity information of the person represented by the target occluded face image.
Optionally, determining the estimate of the first representation coefficient of the test sample data over the training sample set comprises:
building a first representation model of the test sample data over the training sample set;
solving the first representation model by least squares to obtain the estimate of the first representation coefficient of the test sample data over the training sample set.
Optionally, the first representation model is:

x̂ = argmin_x ||y − Dx||_2^2 + μ||x||_2^2

wherein y represents the test sample data; D represents the training sample set; x represents the first representation coefficient of the test sample data over the training sample set; x̂ represents the estimate of the first representation coefficient; and μ is a first preset constant with μ > 0.
Optionally, determining the occlusion mask and the recovery mask from the training sample set, the test sample data, and the estimate of the first representation coefficient comprises:
building a reconstruction residual vector, wherein

residual = y − Dx̂

residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and x̂ represents the estimate of the first representation coefficient;
determining the occlusion mask and the recovery mask from the reconstruction residual vector and a preset threshold.
Optionally, determining the occlusion mask and the recovery mask from the reconstruction residual vector and the preset threshold comprises:

m1(j) = 0 if |residual(j)| > σ, and m1(j) = 1 otherwise
m2(j) = 1 − m1(j)

wherein m1 represents the occlusion mask; m2 represents the recovery mask; j represents the index of a pixel in m1, m2, and residual; and σ represents the preset threshold.
Optionally, determining the unoccluded estimated data of the test sample data from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask comprises:

ŷ = diag(m1)·y + diag(m2)·Dx̂

wherein m1 represents the occlusion mask; m2 represents the recovery mask; y represents the test sample data; D represents the training sample set; and ŷ represents the unoccluded estimated data of the test sample data.
Optionally, determining the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set comprises:
building a second representation model of the unoccluded estimated data over the training sample set;
solving the second representation model by least squares to obtain the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set.
Optionally, the second representation model is:

β̂ = argmin_β ||ŷ − Dβ||_2^2 + η||β||_2^2

wherein ŷ represents the unoccluded estimated data of the test sample data; D represents the training sample set; β represents the second representation coefficient of the unoccluded estimated data over the training sample set; β̂ represents the estimate of the second representation coefficient; and η is a second preset constant with η > 0.
Optionally, the identity recognition model is:

identity(ŷ) = argmin_i ||ŷ − d_i·β̂_i||_2

wherein ŷ represents the unoccluded estimated data of the test sample data; d_i and β̂_i are, respectively, the training sample subset of class i in the training sample set D and the corresponding sub-vector of the second representation coefficient estimate; and identity(ŷ) represents the identity information of the person represented by the target occluded face image.
The disclosure also provides an occluded face recognition device, comprising:
an acquisition module configured to acquire a target occluded face image;
a preprocessing module configured to preprocess the target occluded face image to obtain sample data of the target occluded face image, this sample data serving as test sample data;
a first determining module configured to determine an estimate of a first representation coefficient of the test sample data over a training sample set, wherein the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images;
a second determining module configured to determine an occlusion mask and a recovery mask from the training sample set, the test sample data, and the estimate of the first representation coefficient;
a third determining module configured to determine unoccluded estimated data of the test sample data from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask;
a fourth determining module configured to determine an estimate of a second representation coefficient of the unoccluded estimated data over the training sample set;
an identity recognition model construction module configured to construct an identity recognition model from the training sample set and the estimate of the second representation coefficient;
an identity information acquisition module configured to input the unoccluded estimated data into the identity recognition model and obtain the identity information of the person represented by the target occluded face image.
Optionally, the first determining module includes:
a first representation model construction submodule configured to build the first representation model of the test sample data over the training sample set;
a first representation coefficient determination submodule configured to solve the first representation model by least squares and obtain the estimate of the first representation coefficient of the test sample data over the training sample set.
Optionally, the second determining module includes:
a reconstruction residual vector construction submodule configured to build a reconstruction residual vector residual = y − Dx̂, wherein residual represents the reconstruction residual vector, y represents the test sample data, D represents the training sample set, and x̂ represents the estimate of the first representation coefficient;
an occlusion mask determination submodule configured to determine the occlusion mask from the reconstruction residual vector and a preset threshold;
a recovery mask determination submodule configured to determine the recovery mask from the reconstruction residual vector and the preset threshold.
Optionally, the fourth determining module includes:
a second representation model construction submodule configured to build the second representation model of the unoccluded estimated data over the training sample set;
a second representation coefficient determination submodule configured to solve the second representation model by least squares and obtain the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set.
In the technical scheme provided by the disclosure, a linear representation model is used to extract the occlusion mask and the recovery mask of the occluded face, and the two masks are used to estimate the occluded portion of the target occluded face image, so that the image content of the occluded portion can be recovered and the accuracy of occluded face recognition improved. The occluded face recognition method provided by the disclosure is computationally simple and efficient and can meet the real-time requirements of face recognition. In addition, unlike traditional methods for face recognition under occlusion conditions, this method requires no prior information about the occluded region (for example, its connectivity), so it has a wider scope of application. With the occluded face recognition method provided by the disclosure, an occluded face can be recognized simply, efficiently, and accurately.
Other features and advantages of the disclosure will be described in detail in the following detailed description.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of an occluded face recognition method according to an exemplary embodiment.
Fig. 2a to Fig. 2e are schematic diagrams of the process of recognizing a target occluded face image with the method shown in Fig. 1.
Fig. 3 is a block diagram of an occluded face recognition device according to an exemplary embodiment.
Fig. 4 is a block diagram of an occluded face recognition device according to an exemplary embodiment.
Detailed description
Specific embodiments of the disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are merely used to illustrate and explain the disclosure and are not intended to limit it.
Fig. 1 is a flowchart of an occluded face recognition method according to an exemplary embodiment; the method is applied to an electronic device. As shown in Fig. 1, the method may comprise the following steps.
In step 101, a target occluded face image is acquired. In the disclosure, the target occluded face image refers to the face image under occlusion conditions that is to be recognized. The electronic device can acquire the target occluded face image in several ways: for example, through a camera provided on the electronic device, from a local image gallery, or from another electronic device. The occlusion may include, but is not limited to, glasses, a scarf, a hat, a mask, or other objects covering the face.
In step 102, the target occluded face image is preprocessed to obtain the sample data of the target occluded face image, and this sample data serves as the test sample data.
Illustratively, the preprocessing of the target occluded face image is as follows: the image is first cropped and aligned with the eyes as the center, histogram equalization is applied, the data matrix of the equalized image is flattened into a column vector, and the vector is l2-normalized, yielding the sample data of the target occluded face image, which serves as the test sample data y.
In step 103, the estimate of the first representation coefficient of the test sample data over the training sample set is determined, where the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images.
Illustratively, the given unoccluded face images are preprocessed as follows: each image is cropped and aligned with the eyes as the center, histogram equalization is applied, the data matrix of each equalized image is flattened into a column vector, and each vector is l2-normalized. The resulting sample data form the training sample set D, in which every column represents one training sample.
To determine the estimate of the first representation coefficient of the test sample data over the training sample set, a first representation model of the test sample data over the training sample set can be built first, for example:

x̂ = argmin_x ||y − Dx||_2^2 + μ||x||_2^2   (1)

wherein y represents the test sample data; D represents the training sample set; x represents the first representation coefficient of the test sample data over the training sample set; x̂ represents the estimate of the first representation coefficient; and μ is a first preset constant with μ > 0, used to balance the weights of the reconstruction term ||y − Dx||_2^2 and the regularization term ||x||_2^2.
Next, the least-squares solution of the first representation model (that is, the solution of the l2-norm-regularized least-squares problem) is obtained, giving the estimate of the first representation coefficient of the test sample data over the training sample set:

x̂ = (DᵀD + μI)⁻¹Dᵀy   (2)

wherein I is an identity matrix whose size equals the number of columns of the training sample set D.
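The closed form in equation (2) is ordinary ridge-regularized least squares and can be sketched as follows. This is a minimal NumPy illustration; the function name `express_coefficients` and the default value of μ are assumptions for the example, not values from the patent.

```python
import numpy as np

def express_coefficients(D, y, mu=0.01):
    """Solve min_x ||y - D x||_2^2 + mu * ||x||_2^2 in closed form:
    x_hat = (D^T D + mu * I)^(-1) D^T y, i.e. equation (2)."""
    n = D.shape[1]  # I has the size of the number of training columns
    return np.linalg.solve(D.T @ D + mu * np.eye(n), D.T @ y)

# Sanity check: with D = I and mu -> 0, x_hat approaches y itself.
D = np.eye(3)
y = np.array([1.0, 2.0, 3.0])
x_hat = express_coefficients(D, y, mu=1e-8)
print(np.allclose(x_hat, y, atol=1e-6))  # -> True
```

Using `np.linalg.solve` rather than explicitly inverting DᵀD + μI is the standard numerically safer choice; the regularizer μ also guarantees the system is invertible even when D has fewer rows than columns.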
In step 104, the occlusion mask and the recovery mask are determined from the training sample set, the test sample data, and the estimate of the first representation coefficient.
Illustratively, a reconstruction residual vector can first be built from the training sample set, the test sample data, and the estimate of the first representation coefficient:

residual = y − Dx̂

wherein residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and x̂ represents the estimate of the first representation coefficient, i.e., the estimate given by equation (2).
Next, a thresholding operation is applied to the reconstruction residual vector; that is, the occlusion mask and the recovery mask are determined from the reconstruction residual vector and a preset threshold. Illustratively, they can be determined as follows:

m1(j) = 0 if |residual(j)| > σ, and m1(j) = 1 otherwise   (3)
m2(j) = 1 − m1(j)   (4)

wherein m1 represents the occlusion mask; m2 represents the recovery mask; j represents the index of a pixel in m1, m2, and residual; and σ represents the preset threshold, with σ > 0; illustratively, σ can take a value in [0.003, 0.006].
The occlusion mask m1 and the recovery mask m2 obtained with equations (3) and (4) are binary 0/1 mask vectors that are complementary to each other. In m1, a 0 value marks a pixel of the estimated occluded portion of the target occluded face image and a 1 value marks a pixel of the unoccluded portion; in m2, a 1 value marks a pixel of the estimated occluded portion and a 0 value marks a pixel of the unoccluded portion.
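The thresholding of equations (3) and (4) can be sketched as follows. The helper name `occlusion_masks` is hypothetical, and the toy data is not l2-normalized, so the residual magnitude and σ here are purely illustrative (on normalized data σ would sit in the stated [0.003, 0.006] range).

```python
import numpy as np

def occlusion_masks(y, D, x_hat, sigma):
    """Equations (3) and (4): threshold the reconstruction residual.
    m1[j] = 0 where |residual[j]| > sigma (estimated occluded pixel), else 1;
    m2 is the complement of m1."""
    residual = y - D @ x_hat
    m1 = (np.abs(residual) <= sigma).astype(float)  # 1 on unoccluded pixels
    m2 = 1.0 - m1                                   # 1 on occluded pixels
    return m1, m2

# Toy example: pixel 1 has a large residual, so it is flagged as occluded.
D = np.eye(3)
x_hat = np.array([0.1, 0.1, 0.1])
y = np.array([0.1, 0.9, 0.1])   # pixel 1 deviates strongly
m1, m2 = occlusion_masks(y, D, x_hat, sigma=0.004)
print(m1.tolist(), m2.tolist())  # -> [1.0, 0.0, 1.0] [0.0, 1.0, 0.0]
```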
Next, in step 105, the unoccluded estimated data of the test sample data is determined from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask. Step 105 completes the recovery of the image content of the occluded portion of the target occluded face image.
Illustratively, the solution x̂ obtained with equation (2) and the training sample set D can first be used to compute the estimated face vector Dx̂. Next, the recovery mask m2 is used to estimate the occluded portion of the test sample data y, while the occlusion mask m1 extracts the unoccluded portion of y, yielding the unoccluded estimated data ŷ of the test sample data:

ŷ = diag(m1)·y + diag(m2)·Dx̂   (5)

In equation (5), the operations diag(m1) and diag(m2) convert the vectors m1 and m2 into diagonal matrices: the diagonal elements of diag(m1) are set to the values of m1 and its remaining entries to 0, and likewise the diagonal elements of diag(m2) are set to the values of m2 with the remaining entries 0.
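Equation (5) can be sketched as follows; `recover_unoccluded` is a hypothetical helper name, and since m1 and m2 are 0/1 vectors the diagonal-matrix products amount to elementwise masking.

```python
import numpy as np

def recover_unoccluded(y, D, x_hat, m1, m2):
    """Equation (5): keep observed pixels where m1 = 1 and substitute the
    reconstruction D @ x_hat where m2 = 1:
    y_hat = diag(m1) @ y + diag(m2) @ (D @ x_hat)."""
    return np.diag(m1) @ y + np.diag(m2) @ (D @ x_hat)

# The occluded pixel 1 is replaced by its reconstructed value 0.1.
D = np.eye(3)
x_hat = np.array([0.1, 0.1, 0.1])
y = np.array([0.1, 0.9, 0.1])
m1 = np.array([1.0, 0.0, 1.0])
m2 = 1.0 - m1
y_hat = recover_unoccluded(y, D, x_hat, m1, m2)
print(y_hat.tolist())  # -> [0.1, 0.1, 0.1]
```

For large images, `m1 * y + m2 * (D @ x_hat)` computes the same result without materializing the diagonal matrices.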
Next, in step 106, the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set is determined.
Illustratively, a second representation model of the unoccluded estimated data over the training sample set can first be built:

β̂ = argmin_β ||ŷ − Dβ||_2^2 + η||β||_2^2   (6)

wherein ŷ represents the unoccluded estimated data of the test sample data; D represents the training sample set; β represents the second representation coefficient of the unoccluded estimated data over the training sample set; β̂ represents the estimate of the second representation coefficient; and η is a second preset constant with η > 0, used to balance the weights of the reconstruction term ||ŷ − Dβ||_2^2 and the regularization term ||β||_2^2.
Next, the least-squares solution of the second representation model (that is, the solution of the l2-norm-regularized least-squares problem) is obtained, giving the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set:

β̂ = (DᵀD + ηI)⁻¹Dᵀŷ   (7)

wherein I is an identity matrix whose size equals the number of columns of the training sample set D.
In step 107, the identity recognition model is built from the training sample set and the estimate of the second representation coefficient.
Illustratively, the identity recognition model is:

identity(ŷ) = argmin_i ||ŷ − d_i·β̂_i||_2   (8)

wherein ŷ represents the unoccluded estimated data of the test sample data; d_i and β̂_i are, respectively, the training sample subset of class i in the training sample set D and the corresponding sub-vector of the estimated second representation coefficient β̂; and identity(ŷ) represents the identity information of the person represented by the target occluded face image.
In step 108, the unoccluded estimated data is input into the identity recognition model (that is, into equation (8)) to obtain the identity information of the person represented by the target occluded face image.
Fig. 2 a to Fig. 2 e shows using said method come the process schematic that target occlusion facial image is identified.
First, target occlusion facial image is as shown in Figure 2 a.After step 102, step 103 and step 104, permissible
Obtain corresponding block mask and recover mask, wherein, block mask as shown in Figure 2 b, recover mask as shown in Figure 2 c.?
To after block mask and recover mask, the picture material of shield portions is recovered, i.e. execution step 105 and step
106, result as shown in Figure 2 d, obtains the face after blocking recovery.Finally, execution step 108, are identified to face, result
As shown in Figure 2 e, obtain the identity information of the personage represented by target occlusion facial image.
The above method provided by the disclosure applies to the situation in which the test face image contains occlusion while the training samples are unoccluded. This asymmetry between the occluded test sample and the unoccluded training samples is exploited to detect the occluded region and then recover its image content. In the occluded face recognition method provided by the disclosure, a linear representation model is used to extract the occlusion mask and the recovery mask of the occluded face, and the two masks are used to estimate the occluded portion of the target occluded face image, so that the image content of the occluded portion can be recovered and the accuracy of occluded face recognition improved. The method is computationally simple and efficient and can meet the real-time requirements of face recognition. In addition, unlike traditional methods for face recognition under occlusion conditions, it requires no prior information about the occluded region (for example, its connectivity), so it has a wider scope of application. With the occluded face recognition method provided by the disclosure, an occluded face can be recognized simply, efficiently, and accurately.
Fig. 3 is a block diagram of an occluded face recognition device 300 according to an exemplary embodiment. As shown in Fig. 3, the device 300 may comprise: an acquisition module 301 configured to acquire a target occluded face image; a preprocessing module 302 configured to preprocess the target occluded face image to obtain sample data of the target occluded face image, this sample data serving as test sample data; a first determining module 303 configured to determine an estimate of a first representation coefficient of the test sample data over a training sample set, wherein the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images; a second determining module 304 configured to determine an occlusion mask and a recovery mask from the training sample set, the test sample data, and the estimate of the first representation coefficient; a third determining module 305 configured to determine unoccluded estimated data of the test sample data from the estimate of the first representation coefficient, the occlusion mask, and the recovery mask; a fourth determining module 306 configured to determine an estimate of a second representation coefficient of the unoccluded estimated data over the training sample set; an identity recognition model construction module 307 configured to build an identity recognition model from the training sample set and the estimate of the second representation coefficient; and an identity information acquisition module 308 configured to input the unoccluded estimated data into the identity recognition model and obtain the identity information of the person represented by the target occluded face image.
Optionally, the first determining module 303 may include: a first representation model construction submodule configured to build the first representation model of the test sample data over the training sample set; and a first representation coefficient determination submodule configured to solve the first representation model by least squares and obtain the estimate of the first representation coefficient of the test sample data over the training sample set.
Optionally, the second determining module 304 may include: a reconstruction residual vector construction submodule configured to build a reconstruction residual vector; an occlusion mask determination submodule configured to determine the occlusion mask from the reconstruction residual vector and a preset threshold; and a recovery mask determination submodule configured to determine the recovery mask from the reconstruction residual vector and the preset threshold.
Optionally, the fourth determining module 306 may include: a second representation model construction submodule configured to build the second representation model of the unoccluded estimated data over the training sample set; and a second representation coefficient determination submodule configured to solve the second representation model by least squares and obtain the estimate of the second representation coefficient of the unoccluded estimated data over the training sample set.
As to the device in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 4 is a block diagram of an occluded face recognition device 400 according to an exemplary embodiment. The device 400 may be an electronic device, such as a mobile terminal, a personal computer, or a server. As shown in Fig. 4, the device 400 may comprise: a processor 401, a memory 402, a multimedia component 403, an input/output (I/O) interface 404, a communication component 405, and a video capture component 406.
The processor 401 is configured to control the overall operation of the device 400 to complete all or part of the steps of the occluded face recognition method described above. The memory 402 is configured to store various types of data to support operation on the device 400; these data may include, for example, instructions for any application program or method operated on the device 400, as well as application-related data such as contact data, transmitted and received messages, pictures, audio, and video. The memory 402 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The multimedia component 403 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 402 or sent through the communication component 405. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, such as a keyboard, a mouse, or buttons; the buttons may be virtual buttons or physical buttons.
The communication component 405 provides wired or wireless communication between the device 400 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 405 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The video capture component 406 may include modules such as a camera and signal processing, and is used to capture video images.
In an exemplary embodiment, the device 400 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above occluded face recognition method.
In a further exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 402 including instructions, which can be executed by the processor 401 of the device 400 to complete the above occluded face recognition method. Illustratively, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Any process or method described in a flowchart or otherwise in the embodiments of the disclosure may be understood as representing a module, segment, or portion of code that includes executable instructions of one or more steps for implementing a specific logical function or process. The scope of the embodiments of the disclosure also includes other implementations in which the functions may be performed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order, depending on the functionality involved, as should be understood by those skilled in the art.
The preferred embodiments of the disclosure have been described in detail above with reference to the accompanying drawings; the disclosure, however, is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, various simple variations may be made to the technical solution of the disclosure, and these simple variations all belong to the protection scope of the disclosure.
It should further be noted that the specific technical features described in the above embodiments may, in the absence of contradiction, be combined in any suitable manner; to avoid unnecessary repetition, the various possible combinations are not described separately in the disclosure.
In addition, the various embodiments of the disclosure may also be combined arbitrarily, and as long as a combination does not contravene the idea of the disclosure, it should likewise be regarded as content disclosed by the disclosure.
Claims (13)
1. An occluded face recognition method, characterized by comprising:
obtaining a target occluded face image;
preprocessing the target occluded face image to obtain sample data of the target occluded face image, the sample data serving as test sample data;
determining an estimate of a first representation coefficient of the test sample data over a training sample set, wherein the training sample set includes sample data obtained by preprocessing a plurality of given unoccluded face images;
determining an occlusion mask and a recovery mask according to the training sample set, the test sample data, and the estimate of the first representation coefficient;
determining occlusion-free estimated data of the test sample data according to the estimate of the first representation coefficient, the occlusion mask, and the recovery mask;
determining an estimate of a second representation coefficient of the occlusion-free estimated data over the training sample set;
constructing an identity recognition model according to the training sample set and the estimate of the second representation coefficient;
inputting the occlusion-free estimated data into the identity recognition model to obtain identity information of the person represented by the target occluded face image.
2. The method according to claim 1, characterized in that determining the estimate of the first representation coefficient of the test sample data over the training sample set comprises:
constructing a first representation model of the test sample data over the training sample set;
solving the first representation model in the least-squares sense to obtain the estimate of the first representation coefficient of the test sample data over the training sample set.
3. The method according to claim 2, characterized in that the first representation model is:
x̂ = argmin_x ( ||y − D·x||₂² + μ·||x||₂² )
wherein y represents the test sample data; D represents the training sample set; x represents the first representation coefficient of the test sample data over the training sample set; x̂ represents the estimate of the first representation coefficient; and μ is a first preset constant, with μ > 0.
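For illustration only (not part of the claims): a regularized least-squares model of this form has the closed-form ridge solution x̂ = (DᵀD + μI)⁻¹Dᵀy. The sketch below, using hypothetical toy data and assumed variable names, shows one way such a solve might look:

```python
import numpy as np

def first_representation(y, D, mu=0.001):
    """Closed-form solution of min_x ||y - D x||^2 + mu ||x||^2 (ridge)."""
    # Normal equations: (D^T D + mu I) x = D^T y
    G = D.T @ D + mu * np.eye(D.shape[1])
    return np.linalg.solve(G, D.T @ y)

# Hypothetical toy data: 5-pixel "images", 3 training samples
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 3))
x_true = np.array([1.0, -0.5, 0.25])
y = D @ x_true  # a test sample lying exactly in the span of D
x_hat = first_representation(y, D, mu=1e-8)
print(np.allclose(x_hat, x_true, atol=1e-4))
```

With a tiny μ the recovered coefficients essentially coincide with the generating ones; a larger μ trades reconstruction fidelity for stability of the solve.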
4. The method according to claim 1, characterized in that determining the occlusion mask and the recovery mask according to the training sample set, the test sample data, and the estimate of the first representation coefficient comprises:
constructing a reconstruction residual vector, wherein residual = y − D·x̂; residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and x̂ represents the estimate of the first representation coefficient;
determining the occlusion mask and the recovery mask according to the reconstruction residual vector and a preset threshold.
5. The method according to claim 4, characterized in that determining the occlusion mask and the recovery mask according to the reconstruction residual vector and the preset threshold comprises:
m1(j) = 1 if |residual(j)| < σ, and m1(j) = 0 otherwise; m2(j) = 1 − m1(j)
wherein m1 represents the occlusion mask; m2 represents the recovery mask; j represents the index of a pixel in m1, m2, and residual; and σ represents the preset threshold.
6. The method according to claim 1, characterized in that determining the occlusion-free estimated data of the test sample data according to the estimate of the first representation coefficient, the occlusion mask, and the recovery mask comprises:
ŷ = m1 ⊙ y + m2 ⊙ (D·x̂)
wherein ⊙ denotes the element-wise product; m1 represents the occlusion mask; m2 represents the recovery mask; y represents the test sample data; D represents the training sample set; x̂ represents the estimate of the first representation coefficient; and ŷ represents the occlusion-free estimated data of the test sample data.
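Purely as an illustrative sketch of this composition step, with hypothetical pixel values (not taken from the patent):

```python
import numpy as np

# Hypothetical values: y is the occluded test image and recon = D @ x_hat
# is its reconstruction over the training set.
y     = np.array([0.2, 9.0, 0.4, 9.0])  # pixels 1 and 3 occluded
recon = np.array([0.2, 0.3, 0.4, 0.5])
m1    = np.array([1.0, 0.0, 1.0, 0.0])  # trusted pixels
m2    = 1.0 - m1                        # pixels to recover

# y_hat = m1 .* y + m2 .* (D x_hat): keep trusted pixels, fill in the rest
y_hat = m1 * y + m2 * recon
print(y_hat.tolist())  # [0.2, 0.3, 0.4, 0.5]
```

The occluded pixels are replaced by their reconstructed values while the unoccluded ones pass through unchanged.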
7. The method according to claim 1, characterized in that determining the estimate of the second representation coefficient of the occlusion-free estimated data over the training sample set comprises:
constructing a second representation model of the occlusion-free estimated data over the training sample set;
solving the second representation model in the least-squares sense to obtain the estimate of the second representation coefficient of the occlusion-free estimated data over the training sample set.
8. The method according to claim 7, characterized in that the second representation model is:
β̂ = argmin_β ( ||ŷ − D·β||₂² + η·||β||₂² )
wherein ŷ represents the occlusion-free estimated data of the test sample data; D represents the training sample set; β represents the second representation coefficient of the occlusion-free estimated data over the training sample set; β̂ represents the estimate of the second representation coefficient; and η is a second preset constant, with η > 0.
9. The method according to claim 1, characterized in that the identity recognition model is:
identity(y) = argmin_i ||ŷ − d_i·β̂_i||₂
wherein ŷ represents the occlusion-free estimated data of the test sample data; d_i and β̂_i are, respectively, the training sample subset corresponding to class i in the training sample set D and the estimate of the sub-representation coefficient corresponding to class i in the second representation coefficient β; and identity(y) represents the identity information of the person represented by the target occluded face image.
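A minimal sketch of this class-wise residual decision (function and variable names are assumptions, and the toy data is hypothetical):

```python
import numpy as np

def classify(y_hat, D, beta_hat, labels):
    """Assign y_hat to the class whose training columns, weighted by their
    own portion of beta_hat, reconstruct it with the smallest residual."""
    labels = np.asarray(labels)
    best, best_res = None, np.inf
    for i in np.unique(labels):
        idx = labels == i                       # columns of class i
        res = np.linalg.norm(y_hat - D[:, idx] @ beta_hat[idx])
        if res < best_res:
            best, best_res = i, res
    return best

# Toy set: two classes of two samples each; y_hat is built from class 1
rng = np.random.default_rng(1)
D = rng.standard_normal((6, 4))
labels = [0, 0, 1, 1]
beta_hat = np.array([0.0, 0.0, 0.7, 0.3])
y_hat = D[:, 2:] @ beta_hat[2:]
print(classify(y_hat, D, beta_hat, labels))
```

Because y_hat lies exactly in the span of the class-1 columns with these coefficients, its class-1 residual is zero and that class is selected.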
10. An occluded face recognition device, characterized by comprising:
an acquisition module, configured to obtain a target occluded face image;
a preprocessing module, configured to preprocess the target occluded face image to obtain sample data of the target occluded face image, the sample data serving as test sample data;
a first determining module, configured to determine an estimate of a first representation coefficient of the test sample data over a training sample set, wherein the training sample set includes sample data obtained by preprocessing a plurality of given unoccluded face images;
a second determining module, configured to determine an occlusion mask and a recovery mask according to the training sample set, the test sample data, and the estimate of the first representation coefficient;
a third determining module, configured to determine occlusion-free estimated data of the test sample data according to the estimate of the first representation coefficient, the occlusion mask, and the recovery mask;
a fourth determining module, configured to determine an estimate of a second representation coefficient of the occlusion-free estimated data over the training sample set;
an identity recognition model construction module, configured to construct an identity recognition model according to the training sample set and the estimate of the second representation coefficient;
an identity information acquisition module, configured to input the occlusion-free estimated data into the identity recognition model to obtain identity information of the person represented by the target occluded face image.
11. The device according to claim 10, characterized in that the first determining module includes:
a first representation model construction submodule, configured to construct a first representation model of the test sample data over the training sample set;
a first representation coefficient determination submodule, configured to solve the first representation model in the least-squares sense to obtain the estimate of the first representation coefficient of the test sample data over the training sample set.
12. The device according to claim 10, characterized in that the second determining module includes:
a reconstruction residual vector construction submodule, configured to construct a reconstruction residual vector, wherein residual = y − D·x̂; residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and x̂ represents the estimate of the first representation coefficient;
an occlusion mask determination submodule, configured to determine the occlusion mask according to the reconstruction residual vector and a preset threshold;
a recovery mask determination submodule, configured to determine the recovery mask according to the reconstruction residual vector and the preset threshold.
13. The device according to claim 10, characterized in that the fourth determining module includes:
a second representation model construction submodule, configured to construct a second representation model of the occlusion-free estimated data over the training sample set;
a second representation coefficient determination submodule, configured to solve the second representation model in the least-squares sense to obtain the estimate of the second representation coefficient of the occlusion-free estimated data over the training sample set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610793013.8A CN106372603A (en) | 2016-08-31 | 2016-08-31 | Shielding face identification method and shielding face identification device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106372603A true CN106372603A (en) | 2017-02-01 |
Family
ID=57899715
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799872A (en) * | 2012-07-17 | 2012-11-28 | 西安交通大学 | Image processing method based on face image characteristics |
CN103353936A (en) * | 2013-07-26 | 2013-10-16 | 上海交通大学 | Method and system for face identification |
CN105095856A (en) * | 2015-06-26 | 2015-11-25 | 上海交通大学 | Method for recognizing human face with shielding based on mask layer |
Non-Patent Citations (2)
Title |
---|
Cai Jiazhu, "Research and Implementation of a Face Recognition Algorithm Based on Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology Series * |
Chen Shuyang, "Research on Face Recognition with Partially Missing Local Information", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107169447A (en) * | 2017-05-12 | 2017-09-15 | 贵州中信云联科技有限公司 | Hospital self-service system based on recognition of face |
CN107862270A (en) * | 2017-10-31 | 2018-03-30 | 深圳云天励飞技术有限公司 | Face classification device training method, method for detecting human face and device, electronic equipment |
CN110399764A (en) * | 2018-04-24 | 2019-11-01 | 华为技术有限公司 | Face identification method, device and computer-readable medium |
CN110705337A (en) * | 2018-07-10 | 2020-01-17 | 普天信息技术有限公司 | Face recognition method and device aiming at glasses shielding |
CN109086752A (en) * | 2018-09-30 | 2018-12-25 | 北京达佳互联信息技术有限公司 | Face identification method, device, electronic equipment and storage medium |
CN111385514B (en) * | 2020-02-18 | 2021-06-29 | 华为技术有限公司 | Portrait processing method and device and terminal |
CN111385514A (en) * | 2020-02-18 | 2020-07-07 | 华为技术有限公司 | Portrait processing method and device and terminal |
CN111814603A (en) * | 2020-06-23 | 2020-10-23 | 汇纳科技股份有限公司 | Face recognition method, medium and electronic device |
CN111814603B (en) * | 2020-06-23 | 2023-09-05 | 汇纳科技股份有限公司 | Face recognition method, medium and electronic equipment |
CN111931628A (en) * | 2020-08-04 | 2020-11-13 | 腾讯科技(深圳)有限公司 | Training method and device of face recognition model and related equipment |
CN111931628B (en) * | 2020-08-04 | 2023-10-24 | 腾讯科技(深圳)有限公司 | Training method and device of face recognition model and related equipment |
CN113298808A (en) * | 2021-06-22 | 2021-08-24 | 哈尔滨工程大学 | Method for repairing building shielding information in tilt-oriented remote sensing image |
CN113298808B (en) * | 2021-06-22 | 2022-03-18 | 哈尔滨工程大学 | Method for repairing building shielding information in tilt-oriented remote sensing image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106372603A (en) | Shielding face identification method and shielding face identification device | |
CN106372595A (en) | Shielded face identification method and device | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
CN105426857B (en) | Human face recognition model training method and device | |
KR101870689B1 (en) | Method for providing information on scalp diagnosis based on image | |
CN104424466B (en) | Method for checking object, body detection device and image pick up equipment | |
TWI766201B (en) | Methods and devices for biological testing and storage medium thereof | |
US20190362144A1 (en) | Eyeball movement analysis method and device, and storage medium | |
CN104077597B (en) | Image classification method and device | |
CN109657716A (en) | A kind of vehicle appearance damnification recognition method based on deep learning | |
CN105999670A (en) | Shadow-boxing movement judging and guiding system based on kinect and guiding method adopted by same | |
CN108010060A (en) | Object detection method and device | |
CN110889334A (en) | Personnel intrusion identification method and device | |
CN105678242B (en) | Focusing method and device under hand-held certificate mode | |
CN106228556A (en) | Image quality analysis method and device | |
CN104281839A (en) | Body posture identification method and device | |
CN108960145A (en) | Facial image detection method, device, storage medium and electronic equipment | |
CN107944447A (en) | Image classification method and device | |
CN105095860B (en) | character segmentation method and device | |
CN105335684A (en) | Face detection method and device | |
CN109670458A (en) | A kind of licence plate recognition method and device | |
CN106557759A (en) | A kind of sign board information getting method and device | |
CN109360197A (en) | Processing method, device, electronic equipment and the storage medium of image | |
CN107463903A (en) | Face key independent positioning method and device | |
CN111435422B (en) | Action recognition method, control method and device, electronic equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | |

Application publication date: 20170201