CN107992807A - A kind of face identification method and device based on CNN models - Google Patents


Info

Publication number
CN107992807A
CN107992807A (application CN201711174490.7A)
Authority
CN
China
Prior art keywords
image
sub-image block
matching score
CNN
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711174490.7A
Other languages
Chinese (zh)
Other versions
CN107992807B (en)
Inventor
程福运
郝敬松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201711174490.7A priority Critical patent/CN107992807B/en
Priority to PCT/CN2017/114140 priority patent/WO2019100436A1/en
Priority to EP17932812.5A priority patent/EP3698268A4/en
Publication of CN107992807A publication Critical patent/CN107992807A/en
Priority to US16/879,793 priority patent/US11651229B2/en
Application granted granted Critical
Publication of CN107992807B publication Critical patent/CN107992807B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

Embodiments of the invention disclose a face recognition method and device based on CNN models. In this scheme, grayscale features and TT features are first extracted from the captured image, and CNN features are then extracted from the grayscale feature image and the TT feature image using multiple CNN models. Using the CNN feature extraction results of the captured image together with the CNN feature extraction results obtained in advance for the registered image, a matching score between the captured image and the registered image is computed, and face recognition is then performed. Before CNN features are extracted by the multiple CNN models, the extraction of grayscale features and TT features is added: the grayscale feature image, converted from the RGB image, retains most of the information of the original image, while the added TT features are highly robust to illumination intensity. Extracting the TT features effectively weakens the influence of illumination on the face recognition system and improves recognition performance.

Description

Face recognition method and device based on CNN models
Technical field
The present invention relates to the field of deep learning, and in particular to a face recognition method and device based on CNN models.
Background art
The deep convolutional neural network (Convolutional Neural Network, CNN) is one of the leading network models in the current field of deep learning and is widely used in face recognition technology. Extracting image features from sub-image blocks at different positions of a face image with multiple CNN models and fusing the extracted features is an effective way to improve the performance of a face recognition system. In the multi-CNN image feature extraction method shown in Fig. 1, four sub-image blocks of the original image are fed to four CNN models, each extracting features through n convolutional layers (Convolution, Conv) 101: Conv1, Conv2, Conv3, ..., Convn. The features extracted by the four CNN models are then concatenated and fused by a fully connected layer (fully connected layers, FC) 102. During training, the fused features are passed through a softmax layer 103 to output a prediction score, which is compared with the expected label supplied in advance by the label layer to obtain an error; the parameters of the CNN models are updated according to this error until they converge toward the expected label. During face recognition, the fused features serve as the extracted features and are compared with the features of the registered image. When the number n of convolutional layers reaches a certain depth, the model is referred to as a deep CNN model.
In real face recognition scenes, however, the image capture device operates at different times (e.g., day and night) and in different settings (e.g., indoor and outdoor), so the illumination of the captured face images varies greatly. The features extracted by the multiple CNN models above cannot eliminate the influence of illumination variation, so face recognition performance is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a face recognition method and device based on CNN models, to solve the problem that existing CNN-based face recognition methods are strongly affected by illumination variation.
The purpose of the embodiment of the present invention is achieved through the following technical solutions:
A face recognition method based on CNN models, including:
performing feature extraction on the captured image to be recognized according to the following steps: extracting grayscale features and TT features from the input image to obtain a grayscale feature image and a TT feature image; selecting sub-image blocks at multiple different positions in the grayscale feature image as the inputs of multiple CNN models and extracting multiple CNN features; and selecting, in the TT feature image, multiple sub-image blocks at the same positions as the sub-image blocks of the grayscale feature image as the inputs of multiple CNN models and extracting multiple CNN features;
obtaining multiple CNN features based on the grayscale feature image and multiple CNN features based on the TT feature image, extracted in advance from the registered image according to the same steps as the feature extraction steps above;
computing the feature distance between the CNN features of the sub-image block at each position extracted from the grayscale feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the grayscale feature image of the registered image, and determining a first matching score of the pair of co-located sub-image blocks from this feature distance; computing the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determining a second matching score of the pair from this feature distance; fusing the first matching score and the second matching score of each pair of co-located sub-image blocks according to a preset strategy to obtain the matching score between the captured image and the registered image;
performing face recognition according to the matching score between the captured image and the registered image.
Preferably, fusing the first matching score and the second matching score of each pair of co-located sub-image blocks according to a preset strategy to obtain the matching score between the captured image and the registered image includes:
taking the larger of the first matching score and the second matching score of each pair of co-located sub-image blocks as a third matching score; or taking the average of the first matching score and the second matching score of each pair as the third matching score; or summing the first matching score and the second matching score of each pair with preset weights as the third matching score;
fusing the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
Preferably, fusing the third matching scores of the pairs of co-located sub-image blocks includes:
computing the average of the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
Preferably, the feature distance is a cosine distance, Euclidean distance, Mahalanobis distance, Hamming distance, or Manhattan distance.
A face recognition device based on CNN models, including:
a feature extraction module, configured to perform feature extraction on the captured image to be recognized according to the following steps: extract grayscale features and TT features from the input image to obtain a grayscale feature image and a TT feature image; select sub-image blocks at multiple different positions in the grayscale feature image as the inputs of multiple CNN models and extract multiple CNN features; and select, in the TT feature image, multiple sub-image blocks at the same positions as the sub-image blocks of the grayscale feature image as the inputs of multiple CNN models and extract multiple CNN features;
a feature obtaining module, configured to obtain multiple CNN features based on the grayscale feature image and multiple CNN features based on the TT feature image, extracted in advance from the registered image according to the above feature extraction steps;
a computing module, configured to compute the feature distance between the CNN features of the sub-image block at each position extracted from the grayscale feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the grayscale feature image of the registered image, and determine the first matching score of the pair of co-located sub-image blocks from this feature distance; compute the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determine the second matching score of the pair from this feature distance; and fuse the first matching score and the second matching score of each pair of co-located sub-image blocks according to a preset strategy to obtain the matching score between the captured image and the registered image;
an identification module, configured to perform face recognition according to the matching score between the captured image and the registered image.
Preferably, the computing module is specifically configured to:
take the larger of the first matching score and the second matching score of each pair of co-located sub-image blocks as a third matching score; or take the average of the first matching score and the second matching score of each pair as the third matching score; or sum the first matching score and the second matching score of each pair with preset weights as the third matching score; and
fuse the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
Preferably, the computing module is specifically configured to:
compute the average of the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
Preferably, the feature distance is a cosine distance, Euclidean distance, Mahalanobis distance, Hamming distance, or Manhattan distance.
The embodiments of the present invention have the following beneficial effects:
In the face recognition method and device based on CNN models provided by the embodiments of the present invention, grayscale features and TT features are first extracted from the captured image, CNN features are then extracted from the grayscale feature image and the TT feature image using multiple CNN models, and the matching score between the captured image and the registered image is obtained from the CNN feature extraction results of the captured image and the CNN feature extraction results obtained in advance for the registered image; face recognition is then performed. In this scheme, before CNN features are extracted by the multiple CNN models, the extraction of grayscale features and TT features is added: the grayscale feature image is converted from the RGB image and retains most of the information of the original image, while the added TT features are highly robust to illumination intensity. Extracting the TT features effectively weakens the influence of illumination on the face recognition system and improves the face recognition performance.
Brief description of the drawings
Fig. 1 is a schematic diagram of a prior-art face recognition method using multiple CNN models;
Fig. 2 is a flowchart of a face recognition method based on CNN models provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the effect of TT feature extraction provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a face recognition device based on CNN models provided by an embodiment of the present invention.
Detailed description of the embodiments
The face recognition method and device based on CNN models provided by the present invention are described in more detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 2, an embodiment of the present invention provides a face recognition method based on CNN models, implemented as follows:
Step 210: perform feature extraction on the captured image to be recognized according to the following steps: extract grayscale features and TT features from the input image to obtain a grayscale feature image and a TT feature image; select sub-image blocks at multiple different positions in the grayscale feature image as the inputs of multiple CNN models and extract multiple CNN features; and select, in the TT feature image, multiple sub-image blocks at the same positions as the sub-image blocks of the grayscale feature image as the inputs of multiple CNN models and extract multiple CNN features.
TT features are extracted with an illumination normalization method and are highly robust to illumination. They were proposed by Xiaoyang Tan and Bill Triggs; "TT" is formed from the initial letters of the two authors' surnames.
In this step, specifically, each sub-image block of the grayscale feature image is fed to one CNN model for feature extraction, and likewise each sub-image block of the TT feature image is fed to one CNN model for feature extraction.
Step 220: obtain multiple CNN features based on the grayscale feature image and multiple CNN features based on the TT feature image, extracted in advance from the registered image according to the same steps as the feature extraction steps above.
Step 230: compute the feature distance between the CNN features of the sub-image block at each position extracted from the grayscale feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the grayscale feature image of the registered image, and determine the first matching score of the pair of co-located sub-image blocks from this feature distance; compute the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determine the second matching score of the pair from this feature distance; fuse the first matching score and the second matching score of each pair of co-located sub-image blocks according to a preset strategy to obtain the matching score between the captured image and the registered image.
Step 240: perform face recognition according to the matching score between the captured image and the registered image.
In the embodiment of the present invention, grayscale features and TT features are first extracted from the captured image, CNN features are then extracted from the grayscale feature image and the TT feature image using multiple CNN models, and the matching score between the captured image and the registered image is obtained from the CNN feature extraction results of the captured image and the CNN feature extraction results obtained in advance for the registered image; face recognition is then performed. In this scheme, before CNN features are extracted by the multiple CNN models, the extraction of grayscale features and TT features is added: the grayscale feature image is converted from the RGB image and retains most of the information of the original image, while the added TT features are highly robust to illumination intensity. Extracting the TT features effectively weakens the influence of illumination on the face recognition system and improves the face recognition performance.
In addition, the added grayscale features also give good recognition performance under soft illumination.
Here, the CNN models are deep CNN models.
Face recognition can be performed in several ways; for example, the matching score can be compared with a preset threshold, and if the matching score exceeds the threshold, the captured image and the registered image are considered to show the same person.
In a specific implementation of step 230 above, the first matching score and the second matching score of each pair of co-located sub-image blocks can be fused according to a preset strategy in several ways. Preferably, one implementation is:
take the larger of the first matching score and the second matching score of each pair of co-located sub-image blocks as a third matching score; or take the average of the first matching score and the second matching score of each pair as the third matching score; or sum the first matching score and the second matching score of each pair with preset weights as the third matching score;
then fuse the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
In this embodiment, selecting the larger of the first matching score and the second matching score as the fused score makes the matching result more accurate.
The above lists only a few ways of fusing the first matching score and the second matching score; other ways can also be used and are not enumerated here.
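The three preset fusion strategies above (maximum, average, weighted sum) can be sketched in NumPy as follows; the function name, strategy labels, and score values are illustrative and not taken from the patent:

```python
import numpy as np

def fuse_scores(first, second, strategy="max", weights=(0.5, 0.5)):
    """Fuse per-pair first/second matching scores into third matching scores.

    first, second: sequences of matching scores, one entry per pair of
    co-located sub-image blocks (grayscale branch and TT branch).
    """
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    if strategy == "max":          # larger of the two scores
        return np.maximum(first, second)
    if strategy == "mean":         # average of the two scores
        return (first + second) / 2.0
    if strategy == "weighted":     # sum with preset weights
        w1, w2 = weights
        return w1 * first + w2 * second
    raise ValueError("unknown strategy: %s" % strategy)

gs = [0.80, 0.60, 0.90]  # first matching scores (grayscale branch)
ts = [0.70, 0.75, 0.85]  # second matching scores (TT branch)
print(fuse_scores(gs, ts, "max"))  # per-pair maxima: 0.8, 0.75, 0.9
```

The "max" strategy keeps, for each patch pair, whichever branch matched better, which matches the preferred embodiment described later in the text.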
In a specific implementation, the third matching scores of the pairs of co-located sub-image blocks can also be fused in several ways. Preferably, one implementation is:
compute the average of the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image.
In this embodiment, further fusing the matching scores of the sub-image blocks by averaging enhances the robustness of face recognition and makes the recognition result more accurate.
In a specific implementation, the smaller the feature distance between two CNN features, the larger the resulting matching score and the higher the represented similarity.
The feature distance can be a cosine distance, Euclidean distance, Mahalanobis distance, Hamming distance, Manhattan distance, and so on.
When determining the first matching score and the second matching score, take the cosine distance as an example: a small cosine distance means a small angle between the two features and therefore a large cosine value, so the computed cosine value can be used directly as the matching score. Take the Euclidean distance as another example: a smaller Euclidean distance means the two features are more similar and the matching score should be higher, so the matching score can be derived from the Euclidean distance value according to a predetermined algorithm.
The face recognition method based on CNN models provided by an embodiment of the present invention is described below in more detail with a specific application as an example.
To overcome the influence of illumination on recognition performance in real face recognition scenes, in this embodiment grayscale features and TT features are first extracted from the captured image to be recognized. Then, sub-image blocks at multiple different positions are selected in the grayscale feature image of the captured image as the inputs of multiple CNN models for CNN feature extraction, and multiple sub-image blocks at the same positions as the sub-image blocks of the grayscale feature image are selected in the TT feature image as the inputs of multiple CNN models for CNN feature extraction. During face recognition authentication, the captured image is compared one by one with the registered images in the system: the matching scores of the features obtained by the corresponding CNN models of the two images are computed, and the matching scores are fused at the decision layer.
In this embodiment, both the captured image to be recognized and the registered images go through the same feature extraction process, which can proceed as follows:
Step 1: extract grayscale features and TT features from the image to obtain a grayscale feature image and a TT feature image.
The TT feature image is extracted from the grayscale image and mainly involves four processing steps: Gamma correction, Difference-of-Gaussians (Difference of Gaussian, DoG) filtering, mask processing, and contrast equalization, introduced one by one below:
Gamma correction is a nonlinear illumination processing method with a good lighting adjustment effect. Let the pixel value of the input image be I and the Gamma-corrected pixel value be I'; the formula is:
I' = I^λ (1)
where λ is the Gamma coefficient; optionally, λ = 0.2.
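As an illustration of the Gamma correction step (I' = I^λ), a minimal NumPy sketch; it assumes pixel values already scaled to [0, 1], and the function name is not from the patent:

```python
import numpy as np

def gamma_correct(image, lam=0.2):
    """Nonlinear illumination adjustment: raise each pixel to the power lam.

    With lam = 0.2 (as suggested in the text), dark pixels are lifted
    far more than bright ones, compressing the dynamic range.
    """
    return np.power(np.asarray(image, dtype=float), lam)

patch = np.array([[0.01, 0.25], [0.5, 1.0]])
print(gamma_correct(patch))  # 0.01 is lifted to ~0.398 while 1.0 stays 1.0
```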
Next, DoG filtering is applied to the Gamma-corrected image for further processing. Let the input image be I' and the DoG-filtered image be Id:
Id = (G(x, y, σ1) − G(x, y, σ0)) * I' (2)
where "*" denotes convolution, G(x, y, σ) is a Gaussian kernel, and x and y are the distances of the other coordinate points in the filter from the center point in the x and y directions; optionally, σ0 = 1, σ1 = 2.
Mask processing is optional and mainly masks out parts of the face image that are irrelevant to recognition, such as the hairstyle and beard.
Contrast equalization normalizes the image to a specified range. The formulas of contrast equalization are:
I(x, y) ← I(x, y) / (mean(|I(x', y')|^a))^(1/a) (3)
I(x, y) ← I(x, y) / (mean(min(τ, |I(x', y')|)^a))^(1/a) (4)
where a is a compressive exponent and τ is a threshold; optionally, a = 0.1, τ = 10. mean denotes the average over the whole image (excluding the masked part).
After the above processing, the image may still contain extreme values; a hyperbolic tangent function is used to reduce their influence:
I(x, y) ← τ · tanh(I(x, y) / τ) (5)
Fig. 3 shows the effect of TT illumination normalization on an original image.
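The contrast equalization and tanh compression steps can be sketched as follows; this is a sketch under the assumption that mask handling is omitted, so the mean runs over the whole image, and the function name is illustrative:

```python
import numpy as np

def contrast_equalize(image, a=0.1, tau=10.0):
    """Two-stage contrast equalization followed by tanh compression.

    Mirrors the sequence described in the text (a = 0.1, tau = 10):
    two global rescalings, then tau * tanh(I / tau) to squash any
    remaining extreme values into (-tau, tau).
    """
    image = np.asarray(image, dtype=float)
    image = image / np.mean(np.abs(image) ** a) ** (1.0 / a)
    image = image / np.mean(np.minimum(tau, np.abs(image)) ** a) ** (1.0 / a)
    return tau * np.tanh(image / tau)

out = contrast_equalize(np.array([[-30.0, 1.0], [2.0, 5.0]]))
print(np.abs(out).max() < 10.0)  # True: output is strictly bounded by tau
```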
Step 2: select sub-image blocks at multiple different positions in the grayscale feature image as the inputs of multiple CNN models and extract multiple CNN features, and select sub-image blocks at the same positions in the TT feature image as the inputs of multiple CNN models and extract multiple CNN features.
In this embodiment, the CNN models must be trained on a large set of training images before being used for feature extraction. The training images are first scaled and cropped to the same size, and then groups of sub-image blocks are selected and fed into the corresponding CNN models for training. A trained CNN model has strong generalization ability and can extract good features even from images it has not been trained on. When a CNN model is used for feature extraction, the output of its last hidden layer is usually taken as the image feature.
Suppose sub-image blocks at n different positions are selected in an image. After the grayscale feature image is obtained by extracting grayscale features from the image, the sub-image blocks at the n positions of the grayscale feature image are used as the inputs of n CNN models and n CNN features are extracted; the CNN feature extracted by the i-th CNN model is denoted Gfi. After the TT feature image is obtained by extracting TT features from the image, n sub-image blocks at the same positions as the n sub-image blocks of the grayscale feature image are selected in the TT feature image as the inputs of n CNN models, and n CNN features are extracted; the CNN feature extracted by the i-th CNN model is denoted Tfi. One image thus yields 2n CNN features: n CNN features Gf1, Gf2, ..., Gfn based on the grayscale feature image and n CNN features Tf1, Tf2, ..., Tfn based on the TT feature image.
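The 2n-feature scheme above can be sketched as follows. The trained CNN models themselves are not specified here, so a flattened patch stands in for the last-hidden-layer feature vector; the 64×64 image size, the four patch positions, and all names are illustrative assumptions:

```python
import numpy as np

def extract_patches(image, positions, size):
    """Crop n sub-image blocks at the given top-left positions."""
    return [image[r:r + size, c:c + size] for (r, c) in positions]

def cnn_feature(patch):
    """Placeholder for one trained CNN model: the text uses the output of
    the model's last hidden layer as the feature; here a flattened patch
    merely stands in for that feature vector."""
    return patch.astype(float).ravel()

rng = np.random.default_rng(0)
gray_img = rng.random((64, 64))  # grayscale feature image (stand-in)
tt_img = rng.random((64, 64))    # TT feature image (stand-in)
positions = [(0, 0), (0, 32), (32, 0), (32, 32)]  # n = 4 patch positions

Gf = [cnn_feature(p) for p in extract_patches(gray_img, positions, 32)]
Tf = [cnn_feature(p) for p in extract_patches(tt_img, positions, 32)]
print(len(Gf) + len(Tf))  # 8 = 2n feature vectors for one image
```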
In the implementation process, after the CNN features of the captured image and the registered image are obtained according to the above procedure, the following steps are performed:
Step 1: compute the cosine distance between the CNN features of the sub-image block at each position extracted from the grayscale feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the grayscale feature image of the registered image, and take the cosine value as the first matching score of the pair of co-located sub-image blocks; compute the cosine distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and take the cosine value as the second matching score of the pair. Specifically:
For the grayscale feature images of the registered image and the captured image, let the features extracted from the i-th pair of co-located sub-image blocks be Gfi and Gfi' respectively. Their cosine value is computed as the first matching score GSi of the i-th pair of co-located sub-image blocks of the two images:
GSi = (Gfi · Gfi') / (||Gfi|| ||Gfi'||) (6)
For the TT feature images of the registered image and the captured image, let the features extracted from the i-th pair of co-located sub-image blocks be Tfi and Tfi' respectively. Their cosine value is computed as the second matching score TSi of the i-th pair of co-located sub-image blocks:
TSi = (Tfi · Tfi') / (||Tfi|| ||Tfi'||) (7)
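The cosine value used above as the matching score of two feature vectors can be computed with a minimal sketch (the function name is illustrative):

```python
import numpy as np

def cosine_score(f, g):
    """Matching score of two CNN feature vectors: the cosine of the angle
    between them, score = (f . g) / (||f|| * ||g||). Identical directions
    give 1.0; orthogonal features give 0.0."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    return float(np.dot(f, g) / (np.linalg.norm(f) * np.linalg.norm(g)))

print(cosine_score([1, 0], [1, 0]))  # 1.0 (identical direction)
print(cosine_score([1, 0], [0, 1]))  # 0.0 (orthogonal features)
```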
Step 2: take the larger of the first matching score and the second matching score of each pair of co-located sub-image blocks as the third matching score. Specifically:
when fusing the first matching score and the second matching score of the i-th pair of co-located sub-image blocks, the larger of the two is chosen as the third matching score FSi of the pair:
FSi = max(GSi, TSi) (8)
Step 3: compute the average of the third matching scores of the pairs of co-located sub-image blocks to obtain the matching score between the captured image and the registered image. Specifically:
the average of the third matching scores of the n pairs of co-located sub-image blocks is computed according to the following formula as the final matching score s of the two face images (the captured image and the registered image):
s = (1/n) Σ_{i=1}^{n} FSi (9)
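Putting the last two steps together, a sketch of the final score computation (per-pair maximum, then the average over the n pairs); the per-pair score values are illustrative:

```python
import numpy as np

def final_match_score(gs, ts):
    """Final matching score of two face images: per-pair max fusion
    FSi = max(GSi, TSi), then the average of the n fused scores."""
    fs = np.maximum(np.asarray(gs, dtype=float), np.asarray(ts, dtype=float))
    return float(fs.mean())

gs = [0.9, 0.6, 0.8, 0.7]    # first matching scores (grayscale branch)
ts = [0.85, 0.75, 0.7, 0.9]  # second matching scores (TT branch)
s = final_match_score(gs, ts)
print(round(s, 4))  # 0.8375 -> compared with a preset threshold to decide same person
```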
In this embodiment, whether the captured image and the registered image show the same person can be judged on the basis of the matching score s obtained above: if the matching score exceeds a preset threshold, the captured image and the registered image are considered to show the same person; otherwise they are not. Because this method fuses TT features, it effectively weakens the influence of illumination on the face recognition system and strengthens the robustness and reliability of the face recognition system under unconstrained illumination conditions.
Based on the same inventive concept, as shown in Fig. 4, an embodiment of the present invention further provides a face recognition device based on CNN models, including:
A feature extraction module 401, configured to perform feature extraction on the captured image to be subjected to face recognition according to the following steps: extract gray features and TT features from the input image, respectively, to obtain a gray feature image and a TT feature image; select sub-image blocks at multiple different positions in the gray feature image as the respective inputs of multiple CNN models to extract multiple CNN features; and select, in the TT feature image, multiple sub-image blocks at the same positions as those selected in the gray feature image as the respective inputs of multiple CNN models to extract multiple CNN features;
A feature acquisition module 402, configured to acquire the multiple CNN features based on the gray feature image and the multiple CNN features based on the TT feature image that were extracted in advance from the registered image according to the above feature extraction steps;
A computing module 403, configured to: compute the feature distance between the CNN features of the sub-image block at each position extracted from the gray feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the gray feature image of the registered image, and determine the first matching score of that pair of same-position sub-image blocks according to the feature distance; compute the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determine the second matching score of that pair of same-position sub-image blocks according to the feature distance; and fuse the first and second matching scores of each pair of same-position sub-image blocks according to a preset strategy to obtain the matching score of the captured image and the registered image;
An identification module 404, configured to perform face recognition according to the matching score of the captured image and the registered image.
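The block selection performed by the feature extraction module can be sketched as follows, under assumed block positions and sizes (the patent does not fix these values here). The key point is that identical positions are used on the gray feature image and the TT feature image, so that block i of one is directly comparable with block i of the other.

```python
import numpy as np

def extract_blocks(feature_image, positions, block_size):
    """Select sub-image blocks at the given (row, col) positions; each
    block would be fed to its own CNN model. Positions and block size
    here are illustrative, not values fixed by the patent."""
    h, w = block_size
    return [feature_image[r:r + h, c:c + w] for r, c in positions]

# Stand-ins for the gray and TT feature images of one face image.
gray_img = np.random.rand(128, 128)
tt_img = np.random.rand(128, 128)

positions = [(0, 0), (32, 32), (64, 64)]   # hypothetical positions
gray_blocks = extract_blocks(gray_img, positions, (64, 64))
tt_blocks = extract_blocks(tt_img, positions, (64, 64))
```

Each list then yields one CNN feature vector per block, giving the two aligned feature lists that the computing module compares position by position.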
Preferably, the computing module is specifically configured to:
take the larger of the first matching score and the second matching score of each pair of same-position sub-image blocks as the third matching score; or take the average of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; or take the weighted sum, by preset weights, of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; and
fuse the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
Preferably, the computing module is specifically configured to:
compute the average of the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
Preferably, the feature distance is a cosine distance, a Euclidean distance, a Mahalanobis distance, a Hamming distance, or a Manhattan distance.
It should be understood by those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (8)

  1. A face recognition method based on CNN models, characterized by comprising:
    performing feature extraction on a captured image to be subjected to face recognition according to the following steps: extracting gray features and TT features from an input image, respectively, to obtain a gray feature image and a TT feature image; selecting sub-image blocks at multiple different positions in the gray feature image as respective inputs of multiple CNN models to extract multiple CNN features; and selecting, in the TT feature image, multiple sub-image blocks at the same positions as those selected in the gray feature image as respective inputs of multiple CNN models to extract multiple CNN features;
    acquiring multiple CNN features based on the gray feature image and multiple CNN features based on the TT feature image that were extracted in advance from a registered image according to steps identical to the feature extraction steps;
    computing the feature distance between the CNN features of the sub-image block at each position extracted from the gray feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the gray feature image of the registered image, and determining a first matching score of the pair of same-position sub-image blocks according to the feature distance; computing the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determining a second matching score of the pair of same-position sub-image blocks according to the feature distance; and fusing the first and second matching scores of each pair of same-position sub-image blocks according to a preset strategy to obtain a matching score of the captured image and the registered image; and
    performing face recognition according to the matching score of the captured image and the registered image.
  2. The method according to claim 1, characterized in that fusing the first and second matching scores of each pair of same-position sub-image blocks according to the preset strategy to obtain the matching score of the captured image and the registered image comprises:
    taking the larger of the first matching score and the second matching score of each pair of same-position sub-image blocks as a third matching score; or taking the average of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; or taking the weighted sum, by preset weights, of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; and
    fusing the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
  3. The method according to claim 2, characterized in that fusing the third matching scores of the pairs of same-position sub-image blocks comprises:
    computing the average of the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
  4. The method according to claim 1, characterized in that the feature distance is a cosine distance, a Euclidean distance, a Mahalanobis distance, a Hamming distance, or a Manhattan distance.
  5. A face recognition device based on CNN models, characterized by comprising:
    a feature extraction module, configured to perform feature extraction on a captured image to be subjected to face recognition according to the following steps: extracting gray features and TT features from an input image, respectively, to obtain a gray feature image and a TT feature image; selecting sub-image blocks at multiple different positions in the gray feature image as respective inputs of multiple CNN models to extract multiple CNN features; and selecting, in the TT feature image, multiple sub-image blocks at the same positions as those selected in the gray feature image as respective inputs of multiple CNN models to extract multiple CNN features;
    a feature acquisition module, configured to acquire multiple CNN features based on the gray feature image and multiple CNN features based on the TT feature image that were extracted in advance from a registered image according to the feature extraction steps;
    a computing module, configured to: compute the feature distance between the CNN features of the sub-image block at each position extracted from the gray feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the gray feature image of the registered image, and determine a first matching score of the pair of same-position sub-image blocks according to the feature distance; compute the feature distance between the CNN features of the sub-image block at each position extracted from the TT feature image of the captured image and the CNN features of the sub-image block at the same position extracted from the TT feature image of the registered image, and determine a second matching score of the pair of same-position sub-image blocks according to the feature distance; and fuse the first and second matching scores of each pair of same-position sub-image blocks according to a preset strategy to obtain a matching score of the captured image and the registered image; and
    an identification module, configured to perform face recognition according to the matching score of the captured image and the registered image.
  6. The device according to claim 5, characterized in that the computing module is specifically configured to:
    take the larger of the first matching score and the second matching score of each pair of same-position sub-image blocks as a third matching score; or take the average of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; or take the weighted sum, by preset weights, of the first and second matching scores of each pair of same-position sub-image blocks as the third matching score; and
    fuse the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
  7. The device according to claim 6, characterized in that the computing module is specifically configured to:
    compute the average of the third matching scores of all pairs of same-position sub-image blocks to obtain the matching score of the captured image and the registered image.
  8. The device according to claim 5, characterized in that the feature distance is a cosine distance, a Euclidean distance, a Mahalanobis distance, a Hamming distance, or a Manhattan distance.
CN201711174490.7A 2017-11-22 2017-11-22 Face recognition method and device based on CNN model Active CN107992807B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201711174490.7A CN107992807B (en) 2017-11-22 2017-11-22 Face recognition method and device based on CNN model
PCT/CN2017/114140 WO2019100436A1 (en) 2017-11-22 2017-11-30 Methods and systems for face recognition
EP17932812.5A EP3698268A4 (en) 2017-11-22 2017-11-30 Methods and systems for face recognition
US16/879,793 US11651229B2 (en) 2017-11-22 2020-05-21 Methods and systems for face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711174490.7A CN107992807B (en) 2017-11-22 2017-11-22 Face recognition method and device based on CNN model

Publications (2)

Publication Number Publication Date
CN107992807A true CN107992807A (en) 2018-05-04
CN107992807B CN107992807B (en) 2020-10-30

Family

ID=62031987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711174490.7A Active CN107992807B (en) 2017-11-22 2017-11-22 Face recognition method and device based on CNN model

Country Status (1)

Country Link
CN (1) CN107992807B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710920A (en) * 2018-06-05 2018-10-26 北京中油瑞飞信息技术有限责任公司 Indicator card recognition methods and device
CN109325448A (en) * 2018-09-21 2019-02-12 广州广电卓识智能科技有限公司 Face identification method, device and computer equipment
CN110334688A (en) * 2019-07-16 2019-10-15 重庆紫光华山智安科技有限公司 Image-recognizing method, device and computer readable storage medium based on human face photo library
CN110443128A (en) * 2019-06-28 2019-11-12 广州中国科学院先进技术研究所 One kind being based on SURF characteristic point accurately matched finger vein identification method
CN110458134A (en) * 2019-08-17 2019-11-15 裴露露 A kind of face identification method and device
CN110969189A (en) * 2019-11-06 2020-04-07 杭州宇泛智能科技有限公司 Face detection method and device and electronic equipment
CN114519378A (en) * 2021-12-24 2022-05-20 浙江大华技术股份有限公司 Training method of feature extraction unit, face recognition method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347820A1 (en) * 2014-05-27 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Learning Deep Face Representation
CN105138993A (en) * 2015-08-31 2015-12-09 小米科技有限责任公司 Method and device for building face recognition model
CN105447441A (en) * 2015-03-19 2016-03-30 北京天诚盛业科技有限公司 Face authentication method and device
CN106339702A (en) * 2016-11-03 2017-01-18 北京星宇联合投资管理有限公司 Multi-feature fusion based face identification method
CN107239583A (en) * 2017-08-02 2017-10-10 广东工业大学 A kind of face retrieval method and device based on L1 norm neutral nets

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347820A1 (en) * 2014-05-27 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Learning Deep Face Representation
CN105447441A (en) * 2015-03-19 2016-03-30 北京天诚盛业科技有限公司 Face authentication method and device
CN105138993A (en) * 2015-08-31 2015-12-09 小米科技有限责任公司 Method and device for building face recognition model
CN106339702A (en) * 2016-11-03 2017-01-18 北京星宇联合投资管理有限公司 Multi-feature fusion based face identification method
CN107239583A (en) * 2017-08-02 2017-10-10 广东工业大学 A kind of face retrieval method and device based on L1 norm neutral nets

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710920A (en) * 2018-06-05 2018-10-26 北京中油瑞飞信息技术有限责任公司 Indicator card recognition methods and device
CN108710920B (en) * 2018-06-05 2021-05-14 北京中油瑞飞信息技术有限责任公司 Indicator diagram identification method and device
CN109325448A (en) * 2018-09-21 2019-02-12 广州广电卓识智能科技有限公司 Face identification method, device and computer equipment
CN110443128A (en) * 2019-06-28 2019-11-12 广州中国科学院先进技术研究所 One kind being based on SURF characteristic point accurately matched finger vein identification method
CN110443128B (en) * 2019-06-28 2022-12-27 广州中国科学院先进技术研究所 Finger vein identification method based on SURF feature point accurate matching
CN110334688A (en) * 2019-07-16 2019-10-15 重庆紫光华山智安科技有限公司 Image-recognizing method, device and computer readable storage medium based on human face photo library
CN110334688B (en) * 2019-07-16 2021-09-07 重庆紫光华山智安科技有限公司 Image recognition method and device based on face photo library and computer readable storage medium
CN110458134A (en) * 2019-08-17 2019-11-15 裴露露 A kind of face identification method and device
CN110969189A (en) * 2019-11-06 2020-04-07 杭州宇泛智能科技有限公司 Face detection method and device and electronic equipment
CN110969189B (en) * 2019-11-06 2023-07-25 杭州宇泛智能科技有限公司 Face detection method and device and electronic equipment
CN114519378A (en) * 2021-12-24 2022-05-20 浙江大华技术股份有限公司 Training method of feature extraction unit, face recognition method and device

Also Published As

Publication number Publication date
CN107992807B (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN107992807A (en) A kind of face identification method and device based on CNN models
CN109492643A (en) Certificate recognition methods, device, computer equipment and storage medium based on OCR
CN109685013B (en) Method and device for detecting head key points in human body posture recognition
CN107967456A (en) A kind of multiple neural network cascade identification face method based on face key point
CN110399821B (en) Customer satisfaction acquisition method based on facial expression recognition
CN104143079A (en) Method and system for face attribute recognition
CN108009481A (en) A kind of training method and device of CNN models, face identification method and device
Wang et al. Learning deep conditional neural network for image segmentation
CN105335719A (en) Living body detection method and device
CN108537143B (en) A kind of face identification method and system based on key area aspect ratio pair
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN106485186A (en) Image characteristic extracting method, device, terminal device and system
CN108564120A (en) Feature Points Extraction based on deep neural network
CN107633229A (en) Method for detecting human face and device based on convolutional neural networks
CN107292346B (en) A kind of MR image hippocampus partitioning algorithm based on Local Subspace study
CN109635653A (en) A kind of plants identification method
CN109359527A (en) Hair zones extracting method and system neural network based
CN104794693A (en) Human image optimization method capable of automatically detecting mask in human face key areas
CN110415212A (en) Abnormal cell detection method, device and computer readable storage medium
CN109451634A (en) Method and its intelligent electric lamp system based on gesture control electric light
CN109543656A (en) A kind of face feature extraction method based on DCS-LDP
CN109360179A (en) A kind of image interfusion method, device and readable storage medium storing program for executing
CN107066955A (en) A kind of method that whole face is reduced from local facial region
CN112489129A (en) Pose recognition model training method and device, pose recognition method and terminal equipment
CN111209873A (en) High-precision face key point positioning method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant