CN106372595A - Shielded face identification method and device

Info

Publication number
CN106372595A
CN106372595A
Authority
CN
China
Prior art keywords
sample data
training sample
expression
test sample
sample set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610782259.5A
Other languages
Chinese (zh)
Inventor
谭晓衡
郭坦
杨卓
陈涛
张啸梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Shenzhen Tinno Wireless Technology Co Ltd
Original Assignee
Chongqing University
Shenzhen Tinno Wireless Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University and Shenzhen Tinno Wireless Technology Co Ltd
Priority to CN201610782259.5A
Publication of CN106372595A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification


Abstract

The present invention relates to a shielded (occluded) face identification method and device. The method comprises: obtaining a target occluded face image; preprocessing the target occluded face image to obtain its sample data, which serves as the test sample data; determining the estimated value of a first representation coefficient of the test sample data on a training sample set; determining an occlusion mask according to the training sample set, the test sample data and the estimated value of the first representation coefficient; determining, according to the occlusion mask, the estimated value of a second representation coefficient of the test sample data on the training sample set; constructing an identity identification model according to the training sample set, the occlusion mask and the estimated value of the second representation coefficient; and inputting the test sample data into the identity identification model to obtain the identity information of the person represented by the target occluded face image. The method and device can identify occluded faces simply, efficiently and accurately.

Description

Occluded face identification method and device
Technical field
The present disclosure relates to the field of face recognition, and in particular to an occluded face identification method and device.
Background art
Face recognition is an important component of the biometric recognition field. It has clear advantages in universality, uniqueness and ease of collection, carries high research value and strong market prospects, and has gradually become one of the most representative and challenging research topics in pattern recognition. After years of research, face recognition technology has achieved substantial results, but these results were obtained mainly under strictly controlled laboratory environments. With the application and popularization of digital cameras, smartphones and intelligent surveillance systems, face recognition in real-life, unconstrained scenes has increasingly become a research focus.
In the published literature, most face recognition research concentrates on illumination, expression and pose variation, while occlusion has received less study. Yet partial occlusion of the face is ubiquitous in real-life scenes and is an important problem in unconstrained face recognition, so handling occlusion in face recognition has drawn growing attention from researchers. Occlusions in captured face images, such as glasses, a scarf or other large-area interference, make the facial information incomplete and increase recognition difficulty; moreover, occlusions vary in type, position and size, which makes the occluded region hard to model effectively. Published face recognition methods for occlusion conditions can roughly be divided into two classes: methods based on local analysis and methods based on statistical analysis. The basic idea of local analysis is to detect the occluded region of the face image, reduce its weight in the classification decision and increase the weight of the unoccluded region; for example, the occluded face is divided into different regions and the matching results of the parts are fused by a preset voting rule to reach the final identification. Such methods depend on how well the occluder is detected in the face image, and the partitioning strategy ignores the internal relations between different regions, so they have certain limitations. Methods based on statistical analysis, on the other hand, define similarity measures that capture significant local similarity and exclude unreliable or occluded features as far as possible.
In addition, another statistical-analysis idea is to exploit the statistical information among face samples and use a learning mechanism to reconstruct the occluded face sample from existing unoccluded samples.
Summary of the invention
The purpose of the present disclosure is to address the difficulty of recognizing faces under occlusion conditions by providing an occluded face identification method and device.
To achieve this goal, the present disclosure provides an occluded face identification method, comprising:
obtaining a target occluded face image;
preprocessing the target occluded face image to obtain the sample data of the target occluded face image, this sample data serving as the test sample data;
determining the estimated value of a first representation coefficient of the test sample data on a training sample set, wherein the training sample set comprises the sample data obtained by preprocessing multiple given unoccluded face images;
determining an occlusion mask according to the training sample set, the test sample data and the estimated value of the first representation coefficient;
determining, according to the occlusion mask, the estimated value of a second representation coefficient of the test sample data on the training sample set;
constructing an identity identification model according to the training sample set, the occlusion mask and the estimated value of the second representation coefficient; and
inputting the test sample data into the identity identification model to obtain the identity information of the person represented by the target occluded face image.
Optionally, determining the estimated value of the first representation coefficient of the test sample data on the training sample set comprises:
building a first representation model of the test sample data on the training sample set; and
solving the first representation model in the least-squares sense to obtain the estimated value of the first representation coefficient of the test sample data on the training sample set.
Optionally, the first representation model is:

$$\hat{x} = \arg\min_{x} \left\{ \left\| y - Dx \right\|_2^2 + \mu \left\| x \right\|_2^2 \right\}$$

where y denotes the test sample data; D denotes the training sample set; x denotes the first representation coefficient of the test sample data on the training sample set; $\hat{x}$ denotes the estimated value of the first representation coefficient; and $\mu$ is a first preset constant with $\mu > 0$.
Optionally, determining the occlusion mask according to the training sample set, the test sample data and the estimated value of the first representation coefficient comprises:
building a reconstruction residual vector $residual = y - D\hat{x}$, where residual denotes the reconstruction residual vector, y denotes the test sample data, D denotes the training sample set, and $\hat{x}$ denotes the estimated value of the first representation coefficient; and
determining the occlusion mask according to the reconstruction residual vector and a preset threshold.
Optionally, determining the occlusion mask according to the reconstruction residual vector and the preset threshold comprises:

$$m_1(j) = \begin{cases} 1, & \text{if } residual(j) < \sigma \\ 0, & \text{otherwise} \end{cases}$$

where $m_1$ denotes the occlusion mask; j is the index of a pixel in $m_1$ and residual; and $\sigma$ denotes the preset threshold.
Optionally, determining, according to the occlusion mask, the estimated value of the second representation coefficient of the test sample data on the training sample set comprises:
building, according to the occlusion mask, a second representation model of the test sample data on the training sample set; and
solving the second representation model in the least-squares sense to obtain the estimated value of the second representation coefficient of the test sample data on the training sample set.
Optionally, the second representation model is:

$$\hat{\alpha} = \arg\min_{\alpha} \left\{ \tfrac{1}{2} \left\| \mathrm{diag}(m_1)\,(y - D\alpha) \right\|_2^2 + \lambda \left\| \alpha \right\|_2^2 \right\}$$

where $m_1$ denotes the occlusion mask; y denotes the test sample data; D denotes the training sample set; $\alpha$ denotes the second representation coefficient of the test sample data on the training sample set; $\hat{\alpha}$ denotes the estimated value of the second representation coefficient; and $\lambda$ is a second preset constant with $\lambda > 0$.
Optionally, the identity identification model is:

$$\mathrm{identity}(y) = \arg\min_{i} \left\{ \left\| M_1 (y - D_i \hat{\alpha}_i) \right\|_2 / \left\| \hat{\alpha}_i \right\|_2 \right\}$$

where y denotes the test sample data; $D_i$ and $\hat{\alpha}_i$ denote the training sample subset of class i within the training sample set D and the corresponding sub-vector of the second representation coefficient estimate $\hat{\alpha}$; $M_1 = \mathrm{diag}(m_1)$, where $m_1$ denotes the occlusion mask and $M_1$ is its corresponding diagonal matrix; and identity(y) denotes the identity information of the person represented by the target occluded face image.
The present disclosure also provides an occluded face identification device, comprising:
an acquisition module configured to obtain a target occluded face image;
a preprocessing module configured to preprocess the target occluded face image and obtain the sample data of the target occluded face image, this sample data serving as the test sample data;
a first determining module configured to determine the estimated value of a first representation coefficient of the test sample data on a training sample set, wherein the training sample set comprises the sample data obtained by preprocessing multiple given unoccluded face images;
a second determining module configured to determine an occlusion mask according to the training sample set, the test sample data and the estimated value of the first representation coefficient;
a third determining module configured to determine, according to the occlusion mask, the estimated value of a second representation coefficient of the test sample data on the training sample set;
an identity identification model construction module configured to construct an identity identification model according to the training sample set, the occlusion mask and the estimated value of the second representation coefficient; and
an identity information acquisition module configured to input the test sample data into the identity identification model and obtain the identity information of the person represented by the target occluded face image.
Optionally, the first determining module comprises:
a first representation model construction submodule configured to build a first representation model of the test sample data on the training sample set; and
a first representation coefficient determination submodule configured to solve the first representation model in the least-squares sense and obtain the estimated value of the first representation coefficient of the test sample data on the training sample set.
Optionally, the second determining module comprises:
a reconstruction residual vector construction submodule configured to build a reconstruction residual vector $residual = y - D\hat{x}$, where residual denotes the reconstruction residual vector, y denotes the test sample data, D denotes the training sample set, and $\hat{x}$ denotes the estimated value of the first representation coefficient; and
an occlusion mask determination submodule configured to determine the occlusion mask according to the reconstruction residual vector and a preset threshold.
Optionally, the third determining module comprises:
a second representation model construction submodule configured to build, according to the occlusion mask, a second representation model of the test sample data on the training sample set; and
a second representation coefficient determination submodule configured to solve the second representation model in the least-squares sense and obtain the estimated value of the second representation coefficient of the test sample data on the training sample set.
In the technical scheme provided by the present disclosure, the occlusion mask of the occluded face is extracted using a linear representation model, and the occluded part is then masked out with this mask, so its influence on the recognition result is reduced and the accuracy of occluded face recognition is improved. The occluded face identification method provided by the present disclosure is computationally simple and efficient and can meet the real-time requirements of face recognition. In addition, unlike traditional methods for face recognition under occlusion, this method requires no prior information about the occluded region (for example, its connectivity), so its scope of application is wider. With the occluded face identification method provided by the present disclosure, occluded faces can be identified simply, efficiently and accurately.
Other features and advantages of the present disclosure will be described in detail in the following detailed description.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the disclosure but do not limit it. In the drawings:
Fig. 1 is a flow chart of an occluded face identification method according to an exemplary embodiment.
Fig. 2a to Fig. 2d are schematic diagrams of the process of identifying a target occluded face image using the method shown in Fig. 1.
Fig. 3 is a block diagram of an occluded face identification device according to an exemplary embodiment.
Fig. 4 is a block diagram of an occluded face identification device according to an exemplary embodiment.
Detailed description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here serve only to illustrate and explain the disclosure, and do not limit it.
Fig. 1 is a flow chart of an occluded face identification method according to an exemplary embodiment; the method is applied to an electronic device. As shown in Fig. 1, the method may comprise the following steps.
In step 101, a target occluded face image is obtained. In the present disclosure, the target occluded face image is the face image under occlusion conditions that is to be identified. The electronic device can obtain it in several ways: for example, by capturing it with a camera provided on the electronic device, by reading it from the local image gallery, or by receiving it from another electronic device. The occlusion may include, but is not limited to, glasses, a scarf, a hat or a mask covering part of the face.
In step 102, the target occluded face image is preprocessed to obtain its sample data, which serves as the test sample data.
Illustratively, preprocessing the target occluded face image proceeds as follows: the image is cropped and aligned with the eyes as the centre, histogram equalization is applied, the data matrix of the equalized target occluded face image is flattened into a column vector, and l2-norm normalization is performed, yielding the sample data of the target occluded face image; this sample data serves as the test sample data y.
In step 103, the estimated value of the first representation coefficient of the test sample data on the training sample set is determined, where the training sample set comprises the sample data obtained by preprocessing multiple given unoccluded face images.
Illustratively, the multiple given unoccluded face images are preprocessed as follows: each given unoccluded training face image is cropped and aligned with the eyes as the centre, histogram equalization is applied, the data matrix of each equalized image is flattened into a column vector, and l2-norm normalization is performed, yielding the corresponding sample data. These sample data form the training sample set D, in which each column represents one training sample.
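As an illustration only, the equalization, flattening and l2-normalization steps described above can be sketched in NumPy as follows; cropping and eye alignment are assumed to have been done already, and the image size (32×28) and random pixel data are invented for the sketch:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Histogram-equalize, flatten and l2-normalize one aligned grayscale face."""
    hist = np.bincount(img.ravel(), minlength=256)     # uint8 intensity histogram
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # equalization map to [0, 1]
    eq = cdf[img.ravel()]                              # equalized, flattened vector
    return eq / np.linalg.norm(eq)                     # l2-norm normalization

# Build the training sample set D: one preprocessed face per column.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, size=(32, 28), dtype=np.uint8) for _ in range(5)]
D = np.column_stack([preprocess(f) for f in faces])
print(D.shape)  # (896, 5)
```

The test sample y would be produced by the same `preprocess` call on the target occluded face image.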
When determining the estimated value of the first representation coefficient of the test sample data on the training sample set, a first representation model of the test sample data on the training sample set can first be built, for example:

$$\hat{x} = \arg\min_{x} \left\{ \left\| y - Dx \right\|_2^2 + \mu \left\| x \right\|_2^2 \right\} \qquad (1)$$

where y denotes the test sample data; D denotes the training sample set; x denotes the first representation coefficient of the test sample data on the training sample set; $\hat{x}$ denotes the estimated value of the first representation coefficient; and $\mu$ is a first preset constant with $\mu > 0$ that balances the weights of the two terms $\|y - Dx\|_2^2$ and $\|x\|_2^2$.
Next, the first representation model is solved in the least-squares sense (that is, the l2-norm-regularized least-squares problem is solved), giving the estimated value of the first representation coefficient of the test sample data on the training sample set:

$$\hat{x} = (D^T D + \mu I)^{-1} D^T y \qquad (2)$$

where I is the identity matrix whose size equals the number of columns of the training sample set D.
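Equation (2) is a standard l2-regularized (ridge) least-squares solve; a minimal sketch with invented shapes and data might be:

```python
import numpy as np

def first_coeff(D: np.ndarray, y: np.ndarray, mu: float) -> np.ndarray:
    """Equation (2): x_hat = (D^T D + mu * I)^{-1} D^T y."""
    n = D.shape[1]  # I has the size of the column count of D
    return np.linalg.solve(D.T @ D + mu * np.eye(n), D.T @ y)

rng = np.random.default_rng(1)
D = rng.standard_normal((100, 10))   # 100 pixels, 10 training samples (invented)
y = rng.standard_normal(100)
x_hat = first_coeff(D, y, mu=0.1)
print(x_hat.shape)  # (10,)
```

`np.linalg.solve` is preferred over forming the explicit inverse, which is numerically less stable; the result is the same $\hat{x}$.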
In step 104, the occlusion mask is determined according to the training sample set, the test sample data and the estimated value of the first representation coefficient.
Illustratively, a reconstruction residual vector $residual = y - D\hat{x}$ can first be built from the training sample set, the test sample data and the estimated value of the first representation coefficient, where residual denotes the reconstruction residual vector; y denotes the test sample data; D denotes the training sample set; and $\hat{x}$ denotes the estimated value of the first representation coefficient, i.e. the estimate obtained from equation (2).
Next, a thresholding operation is applied to the reconstruction residual vector; that is, the occlusion mask is determined according to the reconstruction residual vector and a preset threshold. Illustratively, the occlusion mask can be determined as:

$$m_1(j) = \begin{cases} 1, & \text{if } residual(j) < \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

where $m_1$ denotes the occlusion mask; j is the index of a pixel in $m_1$ and residual; and $\sigma$ denotes the preset threshold, with $\sigma > 0$; illustratively, $\sigma$ can take a value in [0.003, 0.006].
The occlusion mask $m_1$ obtained from equation (3) is a binary 0/1 mask vector: a value of 0 in $m_1$ marks the estimated occluded part of the target occluded face image, and a value of 1 marks the unoccluded part.
To reduce the influence of the occluded region on identification, in step 105 the occlusion mask is used to determine the estimated value of the second representation coefficient of the test sample data on the training sample set. The main purpose of this step is to shield the occluded region and obtain an estimate of the second representation coefficient that is only weakly affected by the occlusion; this estimate will then be used for the identification of the occluded face image.
Illustratively, a second representation model of the test sample data on the training sample set can first be built according to the occlusion mask:

$$\hat{\alpha} = \arg\min_{\alpha} \left\{ \tfrac{1}{2} \left\| \mathrm{diag}(m_1)\,(y - D\alpha) \right\|_2^2 + \lambda \left\| \alpha \right\|_2^2 \right\} \qquad (4)$$

where $m_1$ denotes the occlusion mask; y denotes the test sample data; D denotes the training sample set; $\alpha$ denotes the second representation coefficient of the test sample data on the training sample set; $\hat{\alpha}$ denotes the estimated value of the second representation coefficient; and $\lambda$ is a second preset constant with $\lambda > 0$ that balances the weights of the two terms.
Unlike a general representation model such as the first representation model of equation (1), the masked representation model (the second representation model of equation (4)) introduces the occlusion mask $m_1$ and can therefore reduce the influence of the occluded part of the target occluded face image. In equation (4), the operation $\mathrm{diag}(m_1)$ converts the vector $m_1$ into a diagonal matrix whose diagonal elements are set to the values of $m_1$ and whose remaining elements are 0.
Next, the second representation model is solved in the least-squares sense (that is, the l2-norm-regularized least-squares problem is solved), giving the estimated value of the second representation coefficient of the test sample data on the training sample set:

$$\hat{\alpha} = (D^T M_1^T M_1 D + \lambda I)^{-1} D^T M_1^T M_1 y \qquad (5)$$

where $M_1 = \mathrm{diag}(m_1)$ is the diagonal matrix corresponding to $m_1$.
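Equation (5) is again a linear solve. Since $m_1$ is a 0/1 vector, $M_1^T M_1 = M_1$, but the literal form is kept in this sketch; the shapes, mask and data are invented:

```python
import numpy as np

def second_coeff(D: np.ndarray, y: np.ndarray, m1: np.ndarray, lam: float) -> np.ndarray:
    """Equation (5): alpha_hat = (D^T M1^T M1 D + lam*I)^{-1} D^T M1^T M1 y, M1 = diag(m1)."""
    M1 = np.diag(m1)
    A = D.T @ M1.T @ M1 @ D + lam * np.eye(D.shape[1])
    b = D.T @ M1.T @ M1 @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
D = rng.standard_normal((60, 8))                 # 60 pixels, 8 training samples (invented)
y = rng.standard_normal(60)
m1 = (rng.random(60) > 0.3).astype(np.float64)   # toy mask: roughly 70% of pixels kept
alpha_hat = second_coeff(D, y, m1, lam=0.1)
print(alpha_hat.shape)  # (8,)
```

In practice the dense `np.diag(m1)` can be replaced by elementwise multiplication with `m1` to avoid building a pixels-by-pixels matrix.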
In step 106, the identity identification model is constructed according to the training sample set, the occlusion mask and the estimated value of the second representation coefficient.
Illustratively, the identity identification model is:

$$\mathrm{identity}(y) = \arg\min_{i} \left\{ \left\| M_1 (y - D_i \hat{\alpha}_i) \right\|_2 / \left\| \hat{\alpha}_i \right\|_2 \right\} \qquad (6)$$

where y denotes the test sample data; $D_i$ and $\hat{\alpha}_i$ denote the training sample subset of class i within the training sample set D and the corresponding sub-vector of the second representation coefficient estimate $\hat{\alpha}$; $M_1 = \mathrm{diag}(m_1)$, with $m_1$ the occlusion mask and $M_1$ its corresponding diagonal matrix; and identity(y) denotes the identity information of the person represented by the target occluded face image.
In step 107, the test sample data is input into the identity identification model (that is, into equation (6)) to obtain the identity information of the person represented by the target occluded face image.
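The class-wise decision of equation (6) can be sketched as follows; the labels array, the toy data and the all-ones (unoccluded) mask are invented for the illustration and are not part of the patent:

```python
import numpy as np

def identify(D, labels, alpha_hat, m1, y):
    """Equation (6): the class i minimizing ||M1 (y - D_i alpha_i)||_2 / ||alpha_i||_2."""
    best, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        r = m1 * (y - D[:, idx] @ alpha_hat[idx])  # masked residual using class-c columns only
        score = np.linalg.norm(r) / np.linalg.norm(alpha_hat[idx])
        if score < best_score:
            best, best_score = c, score
    return best

rng = np.random.default_rng(2)
D = rng.standard_normal((50, 6))
D /= np.linalg.norm(D, axis=0)             # unit-norm columns, as after preprocessing
labels = np.array([0, 0, 0, 1, 1, 1])      # three training samples per class (invented)
y = D[:, :3] @ np.array([0.5, 0.3, 0.2])   # y lies in the span of class 0
m1 = np.ones(50)                           # no occlusion in this toy case
alpha_hat = np.linalg.solve(D.T @ D + 1e-3 * np.eye(6), D.T @ y)
print(identify(D, labels, alpha_hat, m1, y))  # 0
```

Because y is built from class-0 columns, the class-0 residual is small while its coefficient sub-vector is large, so the argmin selects class 0.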
Fig. 2a to Fig. 2d illustrate the process of identifying a target occluded face image using the above method.
First, the target occluded face image is as shown in Fig. 2a. After step 102, step 103 and step 104, the corresponding occlusion mask is obtained, as shown in Fig. 2b. Once the occlusion mask has been obtained, the occlusion is shielded to reduce its influence, i.e. step 105 is executed; the result, the face with the occlusion masked out, is shown in Fig. 2c. Finally, steps 106 and 107 are executed to identify the face; the result, the identity information of the person represented by the target occluded face image, is shown in Fig. 2d.
The above method provided by the present disclosure applies to the situation where the test face image is occluded while the training samples are unoccluded. This difference between the occluded test sample and the unoccluded training samples is exploited to detect the occluded region and thereby reduce the negative influence of occlusion on face recognition. In the occluded face identification method provided by the present disclosure, the occlusion mask of the occluded face is extracted using a linear representation model, and the occluded part is masked out using this mask, so the influence of the occluded part on the recognition result is reduced and the accuracy of occluded face recognition is improved. The method is computationally simple and efficient and can meet the real-time requirements of face recognition. In addition, unlike traditional methods for face recognition under occlusion, it needs no prior information about the occluded region (for example, its connectivity), so its scope of application is wider. Using the occluded face identification method provided by the present disclosure, occluded faces can be identified simply, efficiently and accurately.
Fig. 3 is a block diagram of an occluded face identification device 300 according to an exemplary embodiment. As shown in Fig. 3, the device 300 may comprise: an acquisition module 301 configured to obtain a target occluded face image; a preprocessing module 302 configured to preprocess the target occluded face image and obtain its sample data, which serves as the test sample data; a first determining module 303 configured to determine the estimated value of the first representation coefficient of the test sample data on the training sample set, where the training sample set comprises the sample data obtained by preprocessing multiple given unoccluded face images; a second determining module 304 configured to determine the occlusion mask according to the training sample set, the test sample data and the estimated value of the first representation coefficient; a third determining module 305 configured to determine, according to the occlusion mask, the estimated value of the second representation coefficient of the test sample data on the training sample set; an identity identification model construction module 306 configured to construct the identity identification model according to the training sample set, the occlusion mask and the estimated value of the second representation coefficient; and an identity information acquisition module 307 configured to input the test sample data into the identity identification model and obtain the identity information of the person represented by the target occluded face image.
Optionally, the first determining module 303 may comprise: a first representation model construction submodule configured to build the first representation model of the test sample data on the training sample set; and a first representation coefficient determination submodule configured to solve the first representation model in the least-squares sense and obtain the estimated value of the first representation coefficient of the test sample data on the training sample set.
Optionally, the second determining module 304 may comprise: a reconstruction residual vector construction submodule configured to build the reconstruction residual vector; and an occlusion mask determination submodule configured to determine the occlusion mask according to the reconstruction residual vector and the preset threshold.
Optionally, the third determining module 305 may comprise: a second representation model construction submodule configured to build, according to the occlusion mask, the second representation model of the test sample data on the training sample set; and a second representation coefficient determination submodule configured to solve the second representation model in the least-squares sense and obtain the estimated value of the second representation coefficient of the test sample data on the training sample set.
With regard to the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 4 is a kind of block diagram blocking face identification device 400 shown in an exemplary embodiment, and this device 400 is permissible It is electronic equipment, such as mobile terminal, personal computer, server etc..As shown in figure 4, this device 400 may include that processor 401, memorizer 402, multimedia groupware 403, input/output (i/o) interface 404, communication component 405 and video capture assembly 406.
The processor 401 is configured to control the overall operation of the device 400 so as to complete all or part of the steps of the occluded face recognition method described above. The memory 402 is configured to store various types of data to support the operation of the device 400; such data may include, for example, instructions for any application program or method operated on the device 400, as well as application-related data such as contact data, transceived messages, pictures, audio, and video. The memory 402 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The multimedia component 403 may include a screen and an audio component. The screen may be, for example, a touch screen; the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 402 or transmitted via the communication component 405. The audio component also includes at least one loudspeaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, which may be a keyboard, a mouse, buttons, or the like. These buttons may be virtual buttons or physical buttons.
The communication component 405 is configured for wired or wireless communication between the device 400 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 405 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The video capture component 406 may include modules such as a camera and signal processing, and is configured to capture video images.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, so as to execute the occluded face recognition method described above.
In a further exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example the memory 402 including instructions, which are executable by the processor 401 of the device 400 to complete the occluded face recognition method described above. Illustratively, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Any process or method described in a flowchart or otherwise described herein in the embodiments of the disclosure may be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process; and the scope of the embodiments of the disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved. This should be understood by those skilled in the art to which the embodiments of the disclosure pertain.
The preferred embodiments of the disclosure have been described in detail above with reference to the accompanying drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, a variety of simple variants can be made to the technical solution of the disclosure, and these simple variants all belong to the protection scope of the disclosure.
It should be further noted that the specific technical features described in the above specific embodiments may be combined in any suitable manner as long as there is no contradiction. In order to avoid unnecessary repetition, the various possible combinations are not described separately in the disclosure.
In addition, any combination of the various different embodiments of the disclosure may also be made, and such combinations should likewise be regarded as content disclosed by the disclosure, as long as they do not depart from the idea of the disclosure.

Claims (12)

1. An occluded face recognition method, characterized by comprising:
acquiring a target occluded face image;
preprocessing the target occluded face image to obtain sample data of the target occluded face image, the sample data serving as test sample data;
determining an estimated value of a first expression coefficient of the test sample data on a training sample set, wherein the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images;
determining an occlusion mask according to the training sample set, the test sample data, and the estimated value of the first expression coefficient;
determining an estimated value of a second expression coefficient of the test sample data on the training sample set according to the occlusion mask;
constructing an identity recognition model according to the training sample set, the occlusion mask, and the estimated value of the second expression coefficient; and
inputting the test sample data into the identity recognition model to obtain identity information of the person represented by the target occluded face image.
2. The method according to claim 1, characterized in that the determining of the estimated value of the first expression coefficient of the test sample data on the training sample set comprises:
constructing a first expression model of the test sample data on the training sample set; and
solving the first expression model in the least-squares sense to obtain the estimated value of the first expression coefficient of the test sample data on the training sample set.
3. The method according to claim 2, characterized in that the first expression model is:

$$\hat{x} = \arg\min_{x}\left\{\, \lVert y - Dx \rVert_2^2 + \mu \lVert x \rVert_2^2 \,\right\}$$

wherein y represents the test sample data; D represents the training sample set; x represents the first expression coefficient of the test sample data on the training sample set; $\hat{x}$ represents the estimated value of the first expression coefficient; and μ is a first preset constant, with μ > 0.
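Under this model, the minimizer has the familiar ridge-regression closed form $\hat{x} = (D^{\mathsf T}D + \mu I)^{-1}D^{\mathsf T}y$. A minimal numerical sketch with toy dimensions (an illustration, not the patented implementation):

```python
import numpy as np

def first_expression_coefficients(y, D, mu):
    """Closed-form minimizer of ||y - D x||_2^2 + mu ||x||_2^2:
    x_hat = (D^T D + mu I)^{-1} D^T y."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + mu * np.eye(n), D.T @ y)

rng = np.random.default_rng(0)
D = rng.standard_normal((6, 4))          # 6 "pixels", 4 training samples
x_true = np.array([1.0, 0.0, -0.5, 2.0])
y = D @ x_true                           # noiseless test sample
x_hat = first_expression_coefficients(y, D, mu=1e-8)
# With mu -> 0 and consistent data, x_hat approaches x_true.
```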
4. The method according to claim 1, characterized in that the determining of the occlusion mask according to the training sample set, the test sample data, and the estimated value of the first expression coefficient comprises:
constructing a reconstruction residual vector, wherein $residual = y - D\hat{x}$; residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and $\hat{x}$ represents the estimated value of the first expression coefficient; and
determining the occlusion mask according to the reconstruction residual vector and a preset threshold.
5. The method according to claim 4, characterized in that the determining of the occlusion mask according to the reconstruction residual vector and the preset threshold comprises:

$$m_1(j) = \begin{cases} 1, & \text{if } \mathrm{residual}(j) < \sigma \\ 0, & \text{otherwise} \end{cases}$$

wherein $m_1$ represents the occlusion mask; j represents the index of a pixel in $m_1$ and residual; and σ represents the preset threshold.
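The thresholding above is an element-wise comparison. A one-line sketch follows; note that it compares the absolute residual, an assumption on our part, since the residual of claim 4 is signed while occlusion is expected to produce large residuals of either sign:

```python
import numpy as np

def occlusion_mask(residual, sigma):
    """m1(j) = 1 if residual(j) < sigma, else 0 (claim 5). The
    element-wise absolute residual is used here, assuming the signed
    residual of claim 4 is compared by magnitude."""
    return (np.abs(residual) < sigma).astype(float)

m1 = occlusion_mask(np.array([0.1, -0.9, 0.05, 1.2]), sigma=0.5)
# Pixels 0 and 2 are kept (mask 1); pixels 1 and 3 are flagged as occluded.
```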
6. The method according to claim 1, characterized in that the determining, according to the occlusion mask, of the estimated value of the second expression coefficient of the test sample data on the training sample set comprises:
constructing, according to the occlusion mask, a second expression model of the test sample data on the training sample set; and
solving the second expression model in the least-squares sense to obtain the estimated value of the second expression coefficient of the test sample data on the training sample set.
7. The method according to claim 6, characterized in that the second expression model is:

$$\hat{\alpha} = \arg\min_{\alpha}\left\{\, \tfrac{1}{2}\lVert \mathrm{diag}(m_1)(y - D\alpha) \rVert_2^2 + \lambda \lVert \alpha \rVert_2^2 \,\right\}$$

wherein $m_1$ represents the occlusion mask; y represents the test sample data; D represents the training sample set; α represents the second expression coefficient of the test sample data on the training sample set; $\hat{\alpha}$ represents the estimated value of the second expression coefficient; and λ is a second preset constant, with λ > 0.
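Because $m_1$ is binary, $\mathrm{diag}(m_1)^{\mathsf T}\mathrm{diag}(m_1) = \mathrm{diag}(m_1)$, and setting the gradient of the objective to zero gives the normal equations $(D^{\mathsf T}MD + 2\lambda I)\hat{\alpha} = D^{\mathsf T}My$ with $M = \mathrm{diag}(m_1)$. A sketch under these assumptions:

```python
import numpy as np

def second_expression_coefficients(y, D, m1, lam):
    """Minimizer of 0.5 ||diag(m1)(y - D a)||_2^2 + lam ||a||_2^2.
    With binary m1, the normal equations are
    (D^T M D + 2 lam I) a = D^T M y, where M = diag(m1)."""
    M = np.diag(m1)
    n = D.shape[1]
    return np.linalg.solve(D.T @ M @ D + 2 * lam * np.eye(n), D.T @ M @ y)

rng = np.random.default_rng(0)
D = rng.standard_normal((6, 4))
alpha_true = np.array([0.5, -1.0, 0.0, 1.5])
y = D @ alpha_true
y[[1, 4]] += 5.0                  # corrupt two "pixels"
m1 = np.ones(6)
m1[[1, 4]] = 0                    # mask exactly the corrupted pixels
alpha_hat = second_expression_coefficients(y, D, m1, lam=1e-8)
# The corrupted pixels are ignored, so alpha_hat recovers alpha_true.
```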
8. The method according to claim 1, characterized in that the identity recognition model is:

$$\mathrm{identity}(y) = \arg\min_{i}\left\{\, \lVert M_1 (y - D_i \hat{\alpha}_i) \rVert_2 \,/\, \lVert \hat{\alpha}_i \rVert_2 \,\right\}$$

wherein y represents the test sample data; $D_i$ and $\hat{\alpha}_i$ represent, respectively, the training sample subset corresponding to class i in the training sample set D and the estimated value of the corresponding sub-expression coefficient in the second expression coefficient α; $M_1 = \mathrm{diag}(m_1)$, where $m_1$ represents the occlusion mask and $M_1$ is the diagonal matrix corresponding to $m_1$; and identity(y) represents the identity information of the person represented by the target occluded face image.
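This decision rule scores each class by its masked reconstruction residual, normalized by the norm of that class's sub-coefficients, and returns the class with the smallest score. A sketch, assuming a `labels` array (our own device) assigns each training column to a class:

```python
import numpy as np

def identify(y, D, labels, alpha_hat, m1):
    """identity(y) = argmin_i ||diag(m1)(y - D_i alpha_i)||_2 / ||alpha_i||_2
    (claim 8); multiplying element-wise by m1 equals applying diag(m1)."""
    best_cls, best_score = None, np.inf
    for cls in np.unique(labels):
        idx = labels == cls
        score = (np.linalg.norm(m1 * (y - D[:, idx] @ alpha_hat[idx]))
                 / (np.linalg.norm(alpha_hat[idx]) + 1e-12))
        if score < best_score:
            best_cls, best_score = cls, score
    return best_cls

rng = np.random.default_rng(0)
D = rng.standard_normal((6, 4))
labels = np.array([0, 0, 1, 1])
alpha_hat = np.array([1.0, 0.5, 0.01, -0.01])  # mass on class 0
y = D[:, :2] @ alpha_hat[:2]                   # y lies in class 0's span
who = identify(y, D, labels, alpha_hat, np.ones(6))
# Class 0 wins: its masked reconstruction residual is exactly zero.
```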
9. An occluded face recognition device, characterized by comprising:
an acquisition module configured to acquire a target occluded face image;
a preprocessing module configured to preprocess the target occluded face image to obtain sample data of the target occluded face image, the sample data serving as test sample data;
a first determining module configured to determine an estimated value of a first expression coefficient of the test sample data on a training sample set, wherein the training sample set comprises sample data obtained by preprocessing a plurality of given unoccluded face images;
a second determining module configured to determine an occlusion mask according to the training sample set, the test sample data, and the estimated value of the first expression coefficient;
a third determining module configured to determine, according to the occlusion mask, an estimated value of a second expression coefficient of the test sample data on the training sample set;
an identity recognition model construction module configured to construct an identity recognition model according to the training sample set, the occlusion mask, and the estimated value of the second expression coefficient; and
an identity information acquisition module configured to input the test sample data into the identity recognition model to obtain identity information of the person represented by the target occluded face image.
10. The device according to claim 9, characterized in that the first determining module comprises:
a first expression model construction submodule configured to construct a first expression model of the test sample data on the training sample set; and
a first expression coefficient determination submodule configured to solve the first expression model in the least-squares sense to obtain the estimated value of the first expression coefficient of the test sample data on the training sample set.
11. The device according to claim 9, characterized in that the second determining module comprises:
a reconstruction residual vector construction submodule configured to construct a reconstruction residual vector, wherein $residual = y - D\hat{x}$; residual represents the reconstruction residual vector; y represents the test sample data; D represents the training sample set; and $\hat{x}$ represents the estimated value of the first expression coefficient; and
an occlusion mask determination submodule configured to determine the occlusion mask according to the reconstruction residual vector and a preset threshold.
12. The device according to claim 9, characterized in that the third determining module comprises:
a second expression model construction submodule configured to construct, according to the occlusion mask, a second expression model of the test sample data on the training sample set; and
a second expression coefficient determination submodule configured to solve the second expression model in the least-squares sense to obtain the estimated value of the second expression coefficient of the test sample data on the training sample set.
CN201610782259.5A 2016-08-31 2016-08-31 Shielded face identification method and device Pending CN106372595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610782259.5A CN106372595A (en) 2016-08-31 2016-08-31 Shielded face identification method and device


Publications (1)

Publication Number Publication Date
CN106372595A true CN106372595A (en) 2017-02-01

Family

ID=57898738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610782259.5A Pending CN106372595A (en) 2016-08-31 2016-08-31 Shielded face identification method and device

Country Status (1)

Country Link
CN (1) CN106372595A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915436A (en) * 2012-10-25 2013-02-06 北京邮电大学 Sparse representation face recognition method based on intra-class variation dictionary and training image
CN104732186A (en) * 2013-12-18 2015-06-24 南京理工大学 Single sample face recognition method based on local subspace sparse representation
CN105069402A (en) * 2015-07-17 2015-11-18 西安交通大学 Improved RSC algorithm for face identification


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CAI Jiazhu: "Research and Implementation of a Face Recognition Algorithm Based on Sparse Representation", China Masters' Theses Full-text Database, Information Science and Technology *
CHEN Shuyang: "Research on Face Recognition with Partially Missing Local Information", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862270A (en) * 2017-10-31 2018-03-30 深圳云天励飞技术有限公司 Face classification device training method, method for detecting human face and device, electronic equipment
CN108875511B (en) * 2017-12-01 2022-06-21 北京迈格威科技有限公司 Image generation method, device, system and computer storage medium
CN108875511A (en) * 2017-12-01 2018-11-23 北京迈格威科技有限公司 Method, apparatus, system and the computer storage medium that image generates
CN110705337A (en) * 2018-07-10 2020-01-17 普天信息技术有限公司 Face recognition method and device aiming at glasses shielding
CN110889320A (en) * 2018-09-11 2020-03-17 苹果公司 Periocular facial recognition switching
CN110889320B (en) * 2018-09-11 2023-11-03 苹果公司 Periocular face recognition switching
CN109522841A (en) * 2018-11-16 2019-03-26 重庆邮电大学 A kind of face identification method restored based on group's rarefaction representation and low-rank matrix
CN109711283A (en) * 2018-12-10 2019-05-03 广东工业大学 A kind of joint doubledictionary and error matrix block Expression Recognition algorithm
CN109711283B (en) * 2018-12-10 2022-11-15 广东工业大学 Occlusion expression recognition method combining double dictionaries and error matrix
CN109840477A (en) * 2019-01-04 2019-06-04 苏州飞搜科技有限公司 Face identification method and device are blocked based on eigentransformation
CN109902720A (en) * 2019-01-25 2019-06-18 同济大学 The image classification recognition methods of depth characteristic estimation is carried out based on Subspace Decomposition
CN109902720B (en) * 2019-01-25 2020-11-27 同济大学 Image classification and identification method for depth feature estimation based on subspace decomposition
CN113468925A (en) * 2020-03-31 2021-10-01 武汉Tcl集团工业研究院有限公司 Shielded face recognition method, intelligent terminal and storage medium
CN113468931A (en) * 2020-03-31 2021-10-01 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and storage medium
CN113468931B (en) * 2020-03-31 2022-04-29 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and storage medium
CN113468925B (en) * 2020-03-31 2024-02-20 武汉Tcl集团工业研究院有限公司 Occlusion face recognition method, intelligent terminal and storage medium
CN111639545B (en) * 2020-05-08 2023-08-08 浙江大华技术股份有限公司 Face recognition method, device, equipment and medium
CN111639545A (en) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 Face recognition method, device, equipment and medium
CN111814571A (en) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Mask face recognition method and system based on background filtering
CN111814571B (en) * 2020-06-12 2024-07-12 深圳禾思众成科技有限公司 Mask face recognition method and system based on background filtering

Similar Documents

Publication Publication Date Title
CN106372595A (en) Shielded face identification method and device
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
CN109697416B (en) Video data processing method and related device
CN105426857B (en) Human face recognition model training method and device
CN108121984B (en) Character recognition method and device
KR101870689B1 (en) Method for providing information on scalp diagnosis based on image
CN109858375B (en) Living body face detection method, terminal and computer readable storage medium
TW202026948A (en) Methods and devices for biological testing and storage medium thereof
CN109657716A (en) A kind of vehicle appearance damnification recognition method based on deep learning
CN107438854A (en) The system and method that the image captured using mobile device performs the user authentication based on fingerprint
CN106874826A (en) Face key point-tracking method and device
CN105999670A (en) Shadow-boxing movement judging and guiding system based on kinect and guiding method adopted by same
CN111027481B (en) Behavior analysis method and device based on human body key point detection
CN108010060A (en) Object detection method and device
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN108960145A (en) Facial image detection method, device, storage medium and electronic equipment
CN104281839A (en) Body posture identification method and device
US20190026575A1 (en) Living body detecting method and apparatus, device and storage medium
CN106372603A (en) Shielding face identification method and shielding face identification device
CN106548468A (en) The method of discrimination and device of image definition
CN109670458A (en) A kind of licence plate recognition method and device
CN111784665B (en) OCT image quality evaluation method, system and device based on Fourier transform
CN111104852B (en) Face recognition technology based on heuristic Gaussian cloud transformation
CN110991412A (en) Face recognition method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20170201)