CN107609459B - Face recognition method and device based on deep learning - Google Patents

Face recognition method and device based on deep learning

Info

Publication number
CN107609459B
CN107609459B CN201611158851.4A
Authority
CN
China
Prior art keywords
face
images
image
neural network
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611158851.4A
Other languages
Chinese (zh)
Other versions
CN107609459A (en)
Inventor
王健宗
刘铭
肖京
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201611158851.4A
Publication of CN107609459A
Application granted
Publication of CN107609459B


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the technical field of face recognition and provides a face recognition method and device based on deep learning, including: building a deep neural network based on face training images; obtaining an image to be recognized; detecting and extracting the face region in the image to be recognized; converting the face-region image into a standard frontal face image and inputting it to the deep neural network; using the deep neural network to output a representation vector of the standard frontal face image; and comparing the representation vector with each face representation feature in a face database to obtain the face identity of the image to be recognized. In the present invention, because the deep neural network is established with multiple face training images as supervision information, and the person features of every image are extracted by the deep neural network, features with stronger robustness can be learned and used. Compared with traditional face recognition methods, the face recognition results are better, and stronger anti-interference ability is obtained under complex environmental conditions.

Description

Face recognition method and device based on deep learning
Technical field
The invention belongs to the technical field of face recognition, and more particularly relates to a face recognition method and device based on deep learning.
Background technology
With the rapid spread of video surveillance, many surveillance applications urgently need a fast identity recognition technology that works remotely and without user cooperation, so that personnel identities can be confirmed quickly from a distance and intelligent early warning can be realized. The continuously developing face recognition technology therefore plays a major role in this process. Face recognition technology processes input face images or video streams on the basis of facial features in order to identify the identity of each face. A typical face recognition method includes the following steps: extract the identity features contained in each face in the image, then match and compare them with known faces, so as to achieve the effect of identifying the identity of each face.
Currently, extracting the identity features contained in each face relies mainly on hand-designed feature extraction algorithms. In real, complex environments, face data is often affected by factors such as illumination, occlusion and pose variation. In such cases, existing face recognition methods based on hand-designed feature extraction algorithms have poor robustness and weak resistance to these interference factors; such uncontrollable factors cause the recognition performance of existing methods to drop sharply, so that the quality of face recognition is hard to guarantee and the recognition accuracy is low.
Summary of the invention
In view of this, embodiments of the present invention provide a face recognition method and system based on deep learning, to solve the prior-art problems of poor anti-interference ability against complex environmental factors in face data and low robustness.
In a first aspect, a face recognition method based on deep learning is provided, including:
building and training a deep neural network based on face training images;
obtaining an image to be recognized;
detecting the position of the face region in the image to be recognized and extracting the face region;
converting the face-region image into a standard frontal face image and inputting it to the deep neural network;
using the deep neural network to output a representation vector of the standard frontal face image, the representation vector describing the facial features of the image to be recognized;
comparing the representation vector with each face representation feature in a face database to obtain the face identity of the image to be recognized.
In a second aspect, a face recognition device based on deep learning is provided, including:
a training unit, for building and training a deep neural network based on face training images;
an acquiring unit, for obtaining an image to be recognized;
a detection unit, for detecting the position of the face region in the image to be recognized and extracting the face region;
a conversion unit, for converting the face-region image into a standard frontal face image and inputting it to the deep neural network;
an output unit, for using the deep neural network to output a representation vector of the standard frontal face image, the representation vector describing the facial features of the image to be recognized;
a recognition unit, for comparing the representation vector with each face representation feature in the face database to obtain the face identity of the image to be recognized.
In embodiments of the present invention, adjusting the image to be recognized to a standard frontal image and then comparing it one by one with images of known person identities increases the accuracy of face recognition. Because multiple face training images serve as the source of supervision information for establishing the deep neural network, and the person features of every image are extracted by the deep neural network, features with stronger robustness can be learned and used; compared with traditional face recognition methods, the recognition results are better, and stronger anti-interference ability is obtained under complex environmental conditions.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is an implementation flowchart of the face recognition method based on deep learning provided by an embodiment of the present invention;
Fig. 2 is a detailed implementation flowchart of step S103 of the face recognition method based on deep learning provided by an embodiment of the present invention;
Fig. 3 is a detailed implementation flowchart of step S104 of the face recognition method based on deep learning provided by an embodiment of the present invention;
Fig. 4 is a detailed implementation flowchart of step S105 of the face recognition method based on deep learning provided by an embodiment of the present invention;
Fig. 5 is a detailed implementation flowchart of step S101 of the face recognition method based on deep learning provided by an embodiment of the present invention;
Fig. 6 is a structural diagram of the face recognition device based on deep learning provided by an embodiment of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so that the embodiments of the present invention may be thoroughly understood. However, it will be clear to those skilled in the art that the present invention can also be realized in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted, so that unnecessary detail does not obscure the description of the invention.
The embodiments of the present invention are realized on the basis of a deep neural network. Through training of the neural network model, the person features of the training images are estimated and the parameters of the model are optimized and adjusted, so that the same neural network model can handle different images. The image to be recognized passes through the layers of the deep neural network in turn; after the feature representation vector of the image is obtained, the representation vector is compared with the face representation features recorded in multiple portrait libraries, person images that do not meet the matching condition are eliminated, and finally the person identity corresponding to the person image that meets the matching condition is accepted as the face recognition result.
In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
Fig. 1 shows the implementation flow of the face recognition method based on deep learning provided by an embodiment of the present invention, described in detail as follows:
In S101, a deep neural network is built and trained based on face training images.
The face training images include, but are not limited to, multiple face images taken under different face orientations, different occluders and different illumination conditions.
In this embodiment, the deep neural network model is established by collecting face training images taken under various conditions, or by inputting a sufficient number of face training images. These training images are labeled image samples with known person identity information, and are used to adjust the parameters of the deep neural network model so that the model, based on supervised learning, reaches the recognition performance required in practical applications.
In S102, an image to be recognized is obtained.
The image to be recognized may be one or more face images, or even a frame captured from a video stream, or a face picture composited or drawn according to an objective description. The image to be recognized is input in advance and stored in the system of the face recognition device.
In this embodiment, OpenCV (Open Source Computer Vision Library) is used to read the image to be recognized stored in the system.
In S103, the position of the face region in the image to be recognized is detected and the face region is extracted.
Because the image to be recognized may contain interference from various animals, objects or other background elements, the face region in the image first needs to be detected, to confirm whether a face target to be detected exists, and the location of any face target present in the image to be recognized is recorded.
As an embodiment of the present invention, Fig. 2 shows the detailed implementation flow of step S103 of the face recognition method based on deep learning provided by an embodiment of the present invention, described as follows:
In S201, the image to be recognized is preprocessed.
In this embodiment, preprocessing the image to be recognized may include: performing grayscale processing on the image, or performing Gaussian blur. If Gaussian blur is selected, image edge sharpening should be added to highlight the boundary details in the image to be recognized, so that the deep neural network model can extract more discriminative person recognition features from it.
In this embodiment, the preferred preprocessing is grayscale processing, which can be realized by means of histograms, gray-level transformation or orthogonal transformation.
In S202, the face region is located in the preprocessed image to be recognized through a preloaded Haar face detection model.
The Haar (Haar-like) face detection model calculates the Haar features in the image to be recognized to confirm whether a face is present in the current image; the process is completed automatically based on the HaarCascades object detection framework in OpenCV.
At present, Haar features are divided into four classes: edge features, linear features, center features and diagonal features. The four classes of Haar features are combined into feature templates. A feature template contains white and black rectangles, and the Haar feature value of the template is the sum of the white-rectangle pixels minus the sum of the black-rectangle pixels. Haar feature values reflect the grayscale variation of the image to be recognized; by quantifying facial characteristics with Haar feature values, face and non-face regions can be distinguished.
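As an illustration, the white-minus-black feature value described above can be sketched with an integral image (summed-area table), which gives constant-time rectangle sums; the function names are ours, not from the patent, and real detectors such as OpenCV's HaarCascades evaluate many such features per sub-window:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] holds the sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_edge_feature(img, x, y, w, h):
    """Two-rectangle (edge) Haar feature: white (left) half minus black (right) half."""
    ii = integral_image(img.astype(np.int64))
    half = w // 2
    white = rect_sum(ii, x, y, half, h)
    black = rect_sum(ii, x + half, y, half, h)
    return white - black
```

A strong vertical edge under the template yields a large feature value, while a flat region yields a value near zero, which is exactly the grayscale-variation property the text describes.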
As the Haar feature-template sub-window continuously shifts and slides across the picture to be recognized, the Haar features of the region are calculated at every position of the sub-window. These Haar features are screened by cascade classifiers trained in advance; once the features have passed the screening of all classifiers, the region is judged to be a face.
Through the above detection, the face region in the image to be recognized can be located.
In addition, face detection can also be performed on the image to be recognized by using a face landmark detector based on OpenCV; such a detector is trained from multiple face images on which facial key points have been labeled in advance.
In S203, according to the located position of the face region, the face region is extracted from the image to be recognized.
After the face region is detected, its location information within the image to be recognized is recorded, such as the coordinates of the upper-left corner of the face region together with its width and height. According to the recorded region endpoints or region size, the face region can be extracted separately from the image to be recognized.
In this embodiment, since current color images basically all use the RGB color model, and RGB cannot truly reflect the morphological features of an image, the image to be recognized is converted into an 8-bit grayscale image through preprocessing such as grayscale conversion, in order to reduce the time spent extracting image Haar features and improve the efficiency of image processing. By computing the Haar features of the image, the position of the face region can be detected and non-face regions containing interference factors can be eliminated, which improves the speed and accuracy of subsequent image processing.
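A minimal sketch of the RGB-to-8-bit-grayscale conversion mentioned above; the BT.601 luminance weights used here are an assumption (they are what OpenCV's cvtColor applies), since the patent only requires some grayscale transform:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to an 8-bit grayscale image.

    Assumed BT.601 luminance weights (0.299 R + 0.587 G + 0.114 B);
    the patent does not specify a particular weighting."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb[..., :3].astype(float) @ weights
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```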
In S104, the face-region image is converted into a standard frontal face image and input to the deep neural network.
Since, in most cases, the face in an image is tilted at some angle, and an overly tilted face image increases the difficulty of subsequent recognition algorithms, in this embodiment the face-region image in the image to be recognized is corrected so that it can be handled conveniently by the following model.
As an embodiment of the present invention, Fig. 3 shows the detailed implementation flow of step S104 of the face recognition method based on deep learning provided by an embodiment of the present invention, described as follows:
In S301, the key point positions in the face-region image are marked.
The key points in the face region are detected, and the positions that conform to key-point features are marked out. The key points include, but are not limited to, specific facial points such as the left pupil, right pupil, left eyebrow, right eyebrow, left side of the nose, underside of the nostrils, underside of the upper lip, corners of the mouth and cheeks.
In S302, the key point positions are calibrated through an affine transformation function, and a calibrated face image is output.
For each key point position, the affine transformation function realizes a linear transformation from the key point's two-dimensional coordinates to new two-dimensional coordinates, while preserving the "straightness" and "parallelism" of the image to be recognized. It is realized by composing a series of atomic transformations, including but not limited to translation, scaling, flipping, rotation and shearing.
The original coordinates (x, y) of each key point are transformed by the affine transformation function into new coordinates (x', y'); the set of new key-point coordinates yields the calibrated face image, in which the tilted face in the face region is converted into a face looking straight at the viewer.
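The calibration step above can be sketched as a least-squares fit of a 2x3 affine matrix mapping detected key points onto canonical frontal positions; the helper names and the three-point example are ours, and in practice OpenCV's getAffineTransform or estimateAffinePartial2D does this job:

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix A with [x', y'] = A @ [x, y, 1],
    mapping each detected key point (src) to its canonical position (dst)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3 homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)     # 3 x 2 solution
    return A.T                                       # 2 x 3 affine matrix

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an n x 2 array of points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

Once A is solved from the key-point pairs, warping the whole face-region image with it (e.g. cv2.warpAffine) produces the calibrated face image.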
In S303, the calibrated face image is scaled to a preset size to obtain the standard frontal face image.
In this embodiment, enlarging the calibrated face image to the preset size retains more facial detail features of the image to be recognized, making face recognition more accurate, while shrinking the calibrated face image to the preset size speeds up face recognition and reduces the amount of computation in processing. The preset size is therefore chosen according to actual needs, and the calibrated face image that matches the preset size is the standard frontal face image.
Experiments by the inventors found that the preferred preset size is 96x96. This size shows the face completely and clearly, and gives a good balance between the speed and precision of subsequent image processing.
In S105, the deep neural network is used to output the representation vector of the standard frontal face image, the representation vector describing the facial features of the image to be recognized.
The deep neural network contains multiple layers, and different layers play different roles. In this embodiment, the network is a deep neural network based on GoogLeNet, and its structure is shown in Table 1:
Table 1
In Table 1, the layer column indicates the name of each layer in the deep neural network, where Conv denotes a convolutional layer, Pool a pooling layer, Rnorm1 a regularization layer and Fc1 a fully connected layer; the digit after each layer name denotes the serial number of the layer, for example, Conv1 denotes the first convolutional layer. Layer-in indicates the input image dimensions of the corresponding network layer; for example, "220_220_3" indicates an input image of width 220, height 220 and 3 channels. Layer-out indicates the output dimensions of the image features of the corresponding network layer. Kernel indicates the filter used in the corresponding network layer; for example, "7_7_3,2" indicates a filter of width 7, height 7 and 3 input channels, where 2 is the stride with which the filter slides over the input image. L2 indicates that second-norm regularization is applied to the weights of the connected network layer, to prevent overfitting of the neural network model. FLPS indicates the floating-point operations performed per second; for example, "115M" indicates 115 million floating-point operations per second. Here, the channel number indicates the number of images.
It can be understood that, in practical network structures, other numbers of layers may be included.
In this embodiment, the standard frontal face image is input into the deep neural network, and the last fully connected layer of the deep network model outputs the face representation features of the image to be recognized, expressed as a representation vector. Since this network layer has 128 neurons, it produces 128 outputs, so the face representation feature is a 128-dimensional vector.
After the representation vector is obtained, L2 normalization is applied to it, i.e. each element in the vector is divided by the L2 norm of the vector.
After normalization, the fluctuation range of the element values becomes relatively stable, so the training of the neural network model is not disturbed by element values that are too small or too large.
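The L2 normalization described above is a one-liner; the epsilon guard against an all-zero vector is our addition, not from the patent:

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Divide every element of the embedding by the vector's L2 norm.

    eps guards against division by zero for an all-zero vector (our
    addition; the patent simply divides by the norm)."""
    return np.asarray(v, dtype=float) / (np.linalg.norm(v) + eps)
```

After this step every embedding lies on the unit sphere, so the squared feature distances compared later are bounded and on a common scale.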
In S106, the representation vector is compared with each face representation feature in the face database to obtain the face identity of the image to be recognized.
Since the representation vector expresses the face representation features of the image to be recognized, the face representation features of multiple face images in the face database can be compared against the representation vector; the face database image that satisfies the comparison condition gives the face recognition result for the image to be recognized. According to the feature comparison rules, multiple candidate person identities with the highest matching degree to the image to be recognized can also be screened out.
As another embodiment of the present invention, Fig. 4 shows the detailed implementation flow of step S105 of the face recognition method based on deep learning provided by an embodiment of the present invention, described as follows:
In S401, each face comparison image in the face database is obtained.
In S402, for each face comparison image, the position of the face region in the face comparison image is detected, and the face region is extracted as a second image.
In S403, each second image is separately converted into a standard frontal face image and input to the deep neural network.
In S404, using the deep neural network, the face representation features of each face comparison image are extracted from the corresponding standard frontal face image.
S401 to S404 of this embodiment are similar to the content described in S102 to S105 of the embodiment above; the difference is that the original images processed in this embodiment are multiple face comparison images, whereas the original image processed in S102 to S105 is the image to be recognized. The remaining implementation principles are the same and are not repeated here.
In S405, the feature distance between the representation vector and the face representation features of each face comparison image is calculated separately.
The face representation features of each face comparison image can likewise be expressed by a feature vector. The feature distance between this feature vector and the representation vector of the image to be recognized is found as follows:
subtract the two vectors to obtain a difference vector;
calculate the sum of the squares of the element values in the difference vector, and output it as the feature distance between the feature vector and the representation vector.
Here, each element corresponds to one dimensional feature value of the 128-D vector.
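The feature-distance computation above can be sketched as follows; the function names are ours, and 1.05 is the preferred decision threshold this embodiment gives later:

```python
import numpy as np

def feature_distance(a, b):
    """Sum of squared element-wise differences between two 128-D embeddings."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.dot(d, d))

def same_identity(a, b, threshold=1.05):
    """Binary match decision: a distance at or below the threshold means the
    comparison image matches the image to be recognized."""
    return feature_distance(a, b) <= threshold
```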
In S406, the face comparison images whose feature distance is less than a preset threshold are obtained, and the face identity corresponding to such a face comparison image is output as the face identity of the image to be recognized.
The process of comparing each feature distance with the preset threshold is a binary classification problem: judging whether the face comparison image corresponding to the feature distance matches the image to be recognized or does not match it.
In this embodiment, the preset threshold is set as the matching condition for face recognition. When the feature distance is less than or equal to the preset threshold, the face comparison image corresponding to that feature distance is considered to match the image to be recognized, and the identity of the face in the image to be recognized is then confirmed to be the face identity registered for the face comparison image.
As another embodiment of the present invention, the above preset threshold is preferably 1.05, a threshold with good face discriminability.
In embodiments of the present invention, adjusting the image to be recognized to a standard frontal image and then comparing it one by one with images of known person identities increases the accuracy of face recognition. Because multiple face training images serve as the source of supervision information for establishing the deep neural network, and the person features of every image are extracted by the deep neural network, features with stronger robustness can be learned and used; compared with traditional face recognition methods, the recognition results are better, and stronger anti-interference ability is obtained under complex environmental conditions.
As another embodiment of the present invention, the deep neural network can be trained with a stochastic gradient descent optimization method, where the momentum term of the model is set to 0.9, the learning rate is fixed at 0.01 and reduced by 25% every 6 training epochs, and the classification task uses a cross-entity loss (Cross-entity Loss) function.
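The schedule described here (base learning rate 0.01, reduced by 25% every 6 epochs) can be sketched as follows; the function is ours, and whether the reductions compound multiplicatively is an assumption, since the text only says "reduced by 25%":

```python
def learning_rate(epoch, base_lr=0.01, drop=0.25, every=6):
    """Piecewise-constant schedule: multiply the rate by (1 - drop)
    once per completed block of `every` epochs (compounding assumed)."""
    return base_lr * (1.0 - drop) ** (epoch // every)
```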
Specifically, as shown in Fig. 5:
In S501, the deep neural network is initialized with an open-source pre-training model.
In this embodiment, the initial state of the deep neural network is initialized using an open-source Caffe pre-training model.
In S502, multiple face training images of various types are input into the deep neural network.
Multiple face training images serve as the face data of the training samples; the different types of images cover influence factors such as pose, illumination, occlusion and multiple people.
In S503, according to the features of each face training image, the feature extraction parameters of the deep neural network are learned using an asynchronous stochastic gradient descent algorithm.
In S504, according to the feature extraction parameters, the feature extraction effect value of the deep neural network is calculated using the cross-entity loss function.
In S505, the deep neural network iteratively learns the feature extraction parameters until the feature extraction effect value meets the preset optimization target.
In order to make the feature distance between face training images belonging to the same identity as small as possible, and the feature distance between face training images belonging to different identities as large as possible, in this embodiment three face training images form a triple, expressed as (Anchor, Positive, Negative). The Anchor face training image and the Positive face training image belong to the same identity, while the Anchor face training image and the Negative face training image belong to different identities. Under the guidance of the cross-entity loss function, the neural network model gradually learns to extract face features with the following property: the feature distance between the Anchor and Positive face training images is always smaller than the feature distance between the Anchor and Negative face training images.
Suppose the i-th triple is (Anchor_i, Positive_i, Negative_i), and the face representation features extracted from the face training images of this triple are (P_i^Anchor, P_i^Positive, P_i^Negative). Then the target of the feature extraction parameter optimization of the deep neural network is as follows:
‖P_i^Anchor − P_i^Positive‖² + δ < ‖P_i^Anchor − P_i^Negative‖², for all i ∈ T.
Here T is the set of all face training image triples. Therefore, if the cross-entity loss is denoted L, with L expressing the feature extraction effect, the expression for L is:
L = Σ_{i=1}^{K} max(0, ‖P_i^Anchor − P_i^Positive‖² − ‖P_i^Anchor − P_i^Negative‖² + δ),
where δ is set to 0.5, i is the index of a triple, and K is the total number of training triples.
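The loss over the triples can be sketched in code as follows; the patent's formula images are not reproduced in this text, so the hinge form and the sum over the K triples are reconstructed from the surrounding definitions (same-identity distance smaller than cross-identity distance by margin δ = 0.5):

```python
import numpy as np

def cross_entity_loss(anchors, positives, negatives, delta=0.5):
    """Triplet hinge loss over K triples of embeddings (K x D arrays):
    L = sum_i max(0, ||P_i^Anchor - P_i^Positive||^2
                     - ||P_i^Anchor - P_i^Negative||^2 + delta)."""
    d_pos = np.sum((anchors - positives) ** 2, axis=1)  # same-identity distances
    d_neg = np.sum((anchors - negatives) ** 2, axis=1)  # cross-identity distances
    return float(np.sum(np.maximum(0.0, d_pos - d_neg + delta)))
```

A triple contributes zero loss once the negative is farther from the anchor than the positive by at least the margin, which is exactly the property the text says the network gradually learns.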
In this embodiment, when the deep neural network is trained, the supervision information of multiple face training images (including the positions of face key points, face attributes, etc.) is used to train the neural network, which helps strengthen the learning of the face feature extraction task. The smaller the loss function value, the more discriminative the person feature descriptions extracted from the images, and the higher the degree of model optimization, which benefits the robustness of the face recognition method and its ability to handle complex situations. Since the model parameters change dynamically throughout the whole process as person training images are input, adaptive parameter adjustment is realized; therefore, a better deep neural network training result can be obtained with this method.
The embodiments of the present invention adjust the image to be recognized to a standard frontal image and then compare it one by one with images of known person identities, which increases the accuracy of face recognition. Because multiple face training images serve as the source of supervision information for establishing the deep neural network, and the person features of every image are extracted by the deep neural network, features with stronger robustness can be learned and used; compared with traditional face recognition methods, the recognition results are better, and stronger anti-interference ability is obtained under complex environmental conditions.
It should be understood that the size of the serial number of each step is not meant that the order of the execution order in above-described embodiment, each process Execution sequence should be determined by its function and internal logic, the implementation process without coping with the embodiment of the present invention constitutes any limit It is fixed.
Corresponding to the face identification method based on deep learning described in foregoing embodiments, Fig. 6 shows implementation of the present invention Example provide the face identification device based on deep learning structure diagram, the face identification device can be software unit, The unit of hardware cell either soft or hard combination.For convenience of description, only the parts related to this embodiment are shown.
Referring to Fig. 6, the device includes:
Training unit 61, for building and training the deep neural network based on face training images.
Acquiring unit 62, for acquiring the image to be recognized.
Detection unit 63, for detecting the position of the face region in the image to be recognized and extracting the face region.
Conversion unit 64, for converting the face region image into a standard frontal face image and inputting it to the deep neural network.
Output unit 65, for outputting, by means of the deep neural network, the expression vector of the standard frontal face image, the expression vector describing the face features of the image to be recognized.
Recognition unit 66, for comparing the expression vector with each face expression feature in the face database, to obtain the face identity of the image to be recognized.
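The chain of units 62-66 can be sketched end to end as plain functions. Every function body below is a hypothetical stand-in for the corresponding unit (the real detection, alignment, and embedding are described in the following subunits), not the patented implementation.

```python
import numpy as np

def acquire_image():                       # acquiring unit 62
    return np.zeros((480, 640, 3), dtype=np.uint8)

def detect_face(image):                    # detection unit 63 (stand-in box)
    return image[100:300, 200:400]

def to_standard_frontal(face):             # conversion unit 64 (stand-in align + resize)
    return face[:128, :128]

def embed(frontal):                        # output unit 65 (stand-in for the DNN)
    return frontal.astype(np.float32).mean(axis=(0, 1))

def recognize(vector, database):           # recognition unit 66
    return min(database, key=lambda k: np.linalg.norm(vector - database[k]))

database = {"person_a": np.array([0.0, 0.0, 0.0]),
            "person_b": np.array([9.0, 9.0, 9.0])}
identity = recognize(embed(to_standard_frontal(detect_face(acquire_image()))), database)
print(identity)  # 'person_a' — the all-zero frame matches the zero template
```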
Optionally, the detection unit 63 includes:
A preprocessing subunit, for preprocessing the image to be recognized.
A locating subunit, for locating the face region in the preprocessed image to be recognized by means of a pre-loaded Haar face detection model.
An extracting subunit, for extracting the face region from the image to be recognized according to the location of the face region.
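A minimal sketch of the extracting subunit, assuming the bounding box has already been produced by a pre-loaded Haar model (in OpenCV this would be a `cv2.CascadeClassifier` loaded with `haarcascade_frontalface_default.xml`); here the box is hard-coded so the crop logic stands alone:

```python
import numpy as np

def extract_face_region(image, box):
    """Crop box (x, y, w, h) from an H x W x C image, clamped to the image bounds."""
    x, y, w, h = box
    x0, y0 = max(0, x), max(0, y)
    x1 = min(image.shape[1], x + w)   # clamp right edge
    y1 = min(image.shape[0], y + h)   # clamp bottom edge
    return image[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)        # preprocessed input
face = extract_face_region(frame, (500, 300, 200, 200))  # box spills past the edge
print(face.shape)  # clamped to (180, 140, 3)
```

Clamping matters because a detector near the frame border can report a box that partially leaves the image; the crop must still be valid before it is passed to the conversion unit.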
Optionally, the conversion unit 64 includes:
A marking subunit, for marking the key point positions in the face region image.
A calibrating subunit, for calibrating the key point positions by means of an affine transformation function and outputting a calibrated face image.
A scaling subunit, for scaling the calibrated face image to a preset size to obtain the standard frontal face image.
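The calibration step can be sketched by solving for the affine matrix that maps three detected key points onto canonical template positions. The choice of three points (e.g. two eye centres and the mouth centre) and the template coordinates below are illustrative assumptions; OpenCV's `cv2.getAffineTransform` performs the same exact solve.

```python
import numpy as np

def affine_from_keypoints(src, dst):
    """Solve the 2x3 matrix M with dst_i = M @ [x_i, y_i, 1] for three point pairs."""
    src_h = np.hstack([src, np.ones((3, 1))])  # 3x3 homogeneous source points
    return np.linalg.solve(src_h, dst).T       # exact solve, transposed to 2x3

detected = np.array([[120.0, 140.0], [200.0, 135.0], [160.0, 220.0]])  # marked key points
template = np.array([[38.0, 52.0], [90.0, 52.0], [64.0, 110.0]])       # preset 128x128 layout

M = affine_from_keypoints(detected, template)
# applying M to a detected key point lands it on its template position
p = M @ np.array([120.0, 140.0, 1.0])
print(np.round(p, 1))  # ~ [38. 52.]
```

Warping the whole face image with this matrix (e.g. `cv2.warpAffine`) and then scaling to the preset size yields the standard frontal face image fed to the network.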
Optionally, the training unit 61 includes:
An initializing subunit, for initializing the deep neural network with an open-source pre-trained model.
An inputting subunit, for inputting multiple face training images of a plurality of types into the deep neural network.
A learning subunit, for learning the feature extraction parameters of the deep neural network according to the features of every face training image using an asynchronous stochastic gradient descent algorithm.
A computing subunit, for computing, according to the feature extraction parameters, the feature extraction effect value of the deep neural network using a cross-entropy loss function.
An iterating subunit, for making the deep neural network iteratively learn the feature extraction parameters until the feature extraction effect value meets a preset optimization target.
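Claim 1 attaches a triplet constraint to this learning subunit: the Anchor and Positive images share an identity, the Anchor and Negative images do not, and training pushes the positive distance below the negative distance. A minimal sketch of that loss follows; the margin value and the two-dimensional embeddings are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2)  # same-identity distance
    d_neg = np.sum((anchor - negative) ** 2)  # different-identity distance
    # loss is zero once the positive is closer than the negative by the margin
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.1, 0.9])  # embedding of the Anchor training image
p = np.array([0.2, 0.8])  # Positive: same person, small distance
n = np.array([0.9, 0.1])  # Negative: different person, large distance
print(triplet_loss(a, p, n))  # 0.0 — this triplet already satisfies the margin
```

A triplet that violates the margin contributes a positive loss, which is what drives the parameter updates; only violating triplets affect the gradient.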
Optionally, the recognition unit 66 includes:
A second acquiring unit, for acquiring each face comparison image in the face database.
A second detecting unit, for detecting, for each face comparison image, the position of the face region in the face comparison image, and extracting the face region as a second image.
A second converting unit, for converting every second image into a standard frontal face image and inputting them to the deep neural network.
An extracting unit, for extracting the face expression features of every face comparison image according to every standard frontal face image using the deep neural network.
A computing unit, for separately computing the feature distance between the expression vector and the face expression features of every face comparison image.
A comparing unit, for acquiring the face comparison images whose feature distance is less than a preset threshold, and outputting the face identity corresponding to those face comparison images as the face identity of the image to be recognized.
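The computing and comparing units can be sketched as a nearest-neighbour search over feature distances, accepting only matches under the preset threshold. The Euclidean metric, the threshold value, and the gallery contents are illustrative assumptions.

```python
import numpy as np

def match_identity(query, gallery, threshold=0.5):
    """Return the identity with the smallest feature distance under the threshold, else None."""
    best_id, best_dist = None, threshold
    for identity, feature in gallery.items():
        dist = np.linalg.norm(query - feature)  # feature distance to one comparison image
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id

gallery = {
    "alice": np.array([0.9, 0.1, 0.2]),  # face expression features from the database
    "bob":   np.array([0.1, 0.8, 0.3]),
}
query = np.array([0.85, 0.15, 0.25])     # expression vector of the image to be recognized
print(match_identity(query, gallery))    # 'alice'
```

Returning `None` when no gallery entry falls under the threshold models the open-set case: an unknown face should produce no identity rather than the least-bad match.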
In the embodiments of the present invention, the image to be recognized is adjusted to a standard frontal face image and then compared one by one with images of known identities, which can increase the accuracy of face recognition. Since multiple face training images are used as the source of supervision information to establish the deep neural network, and the person features of every image are extracted based on the deep neural network, features with stronger robustness can be learned and used. Compared with traditional face recognition methods, the face recognition effect is better, and stronger anti-interference capability is achieved under complex environmental conditions.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated by way of example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered to be beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division of the modules or units is only a logical function division, and in actual implementation there may be other division manners: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; and such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (8)

1. A face recognition method based on deep learning, characterized by comprising:
building and training a deep neural network based on face training images, including:
initializing the deep neural network with an open-source pre-trained model;
inputting multiple face training images of a plurality of types into the deep neural network;
learning the feature extraction parameters of the deep neural network according to the features of every face training image using an asynchronous stochastic gradient descent algorithm, including forming a triplet from three face training images, expressed as (Anchor, Positive, Negative), wherein the Anchor face training image and the Positive face training image belong to the same identity, and the Anchor face training image and the Negative face training image belong to different identities;
computing, according to the feature extraction parameters, the feature extraction effect value of the deep neural network using a cross-entropy loss function;
making the deep neural network iteratively learn the feature extraction parameters until the feature extraction effect value meets a preset optimization target;
acquiring an image to be recognized;
detecting the position of a face region in the image to be recognized and extracting the face region;
converting the face region image into a standard frontal face image and inputting it to the deep neural network;
outputting, by means of the deep neural network, the expression vector of the standard frontal face image, the expression vector describing the face features of the image to be recognized;
comparing the expression vector with each face expression feature in a face database, to obtain the face identity of the image to be recognized.
2. The method according to claim 1, characterized in that detecting the position of the face region in the image to be recognized and extracting the face region comprises:
preprocessing the image to be recognized;
locating the face region in the preprocessed image to be recognized by means of a pre-loaded Haar face detection model;
extracting the face region from the image to be recognized according to the location of the face region.
3. The method according to claim 1, characterized in that converting the face region image into a standard frontal face image comprises:
marking the key point positions in the face region image;
calibrating the key point positions by means of an affine transformation function and outputting a calibrated face image;
scaling the calibrated face image to a preset size to obtain the standard frontal face image.
4. The method according to any one of claims 1 to 3, characterized in that comparing the expression vector with each face expression feature in the face database to obtain the face identity of the image to be recognized comprises:
acquiring each face comparison image in the face database;
for each face comparison image, detecting the position of the face region in the face comparison image, and extracting the face region as a second image;
converting every second image into a standard frontal face image and inputting them to the deep neural network;
extracting the face expression features of every face comparison image according to every standard frontal face image using the deep neural network;
separately computing the feature distance between the expression vector and the face expression features of every face comparison image;
acquiring the face comparison images whose feature distance is less than a preset threshold, and outputting the face identity corresponding to those face comparison images as the face identity of the image to be recognized.
5. A face recognition device based on deep learning, characterized by comprising:
a training unit, for building and training a deep neural network based on face training images,
the training unit including:
an initializing subunit, for initializing the deep neural network with an open-source pre-trained model;
an inputting subunit, for inputting multiple face training images of a plurality of types into the deep neural network;
a learning subunit, for learning the feature extraction parameters of the deep neural network according to the features of every face training image using an asynchronous stochastic gradient descent algorithm, including forming a triplet from three face training images, expressed as (Anchor, Positive, Negative), wherein the Anchor face training image and the Positive face training image belong to the same identity, and the Anchor face training image and the Negative face training image belong to different identities;
a computing subunit, for computing, according to the feature extraction parameters, the feature extraction effect value of the deep neural network using a cross-entropy loss function;
an iterating subunit, for making the deep neural network iteratively learn the feature extraction parameters until the feature extraction effect value meets a preset optimization target;
an acquiring unit, for acquiring an image to be recognized;
a detection unit, for detecting the position of a face region in the image to be recognized and extracting the face region;
a conversion unit, for converting the face region image into a standard frontal face image and inputting it to the deep neural network;
an output unit, for outputting, by means of the deep neural network, the expression vector of the standard frontal face image, the expression vector describing the face features of the image to be recognized;
a recognition unit, for comparing the expression vector with each face expression feature in a face database, to obtain the face identity of the image to be recognized.
6. The device according to claim 5, characterized in that the detection unit includes:
a preprocessing subunit, for preprocessing the image to be recognized;
a locating subunit, for locating the face region in the preprocessed image to be recognized by means of a pre-loaded Haar face detection model;
an extracting subunit, for extracting the face region from the image to be recognized according to the location of the face region.
7. The device according to claim 5, characterized in that the conversion unit includes:
a marking subunit, for marking the key point positions in the face region image;
a calibrating subunit, for calibrating the key point positions by means of an affine transformation function and outputting a calibrated face image;
a scaling subunit, for scaling the calibrated face image to a preset size to obtain the standard frontal face image.
8. The device according to any one of claims 5 to 7, characterized in that the recognition unit includes:
a second acquiring unit, for acquiring each face comparison image in the face database;
a second detecting unit, for detecting, for each face comparison image, the position of the face region in the face comparison image, and extracting the face region as a second image;
a second converting unit, for converting every second image into a standard frontal face image and inputting them to the deep neural network;
an extracting unit, for extracting the face expression features of every face comparison image according to every standard frontal face image using the deep neural network;
a computing unit, for separately computing the feature distance between the expression vector and the face expression features of every face comparison image;
a comparing unit, for acquiring the face comparison images whose feature distance is less than a preset threshold, and outputting the face identity corresponding to those face comparison images as the face identity of the image to be recognized.
CN201611158851.4A 2016-12-15 2016-12-15 A kind of face identification method and device based on deep learning Active CN107609459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611158851.4A CN107609459B (en) 2016-12-15 2016-12-15 A kind of face identification method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611158851.4A CN107609459B (en) 2016-12-15 2016-12-15 A kind of face identification method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN107609459A CN107609459A (en) 2018-01-19
CN107609459B true CN107609459B (en) 2018-09-11

Family

ID=61055407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611158851.4A Active CN107609459B (en) 2016-12-15 2016-12-15 A kind of face identification method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN107609459B (en)

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334863B (en) * 2018-03-09 2020-09-04 百度在线网络技术(北京)有限公司 Identity authentication method, system, terminal and computer readable storage medium
CN108427939B (en) * 2018-03-30 2022-09-23 百度在线网络技术(北京)有限公司 Model generation method and device
CN108520184A (en) * 2018-04-16 2018-09-11 成都博锐智晟科技有限公司 A kind of method and system of secret protection
CN108346208A (en) * 2018-04-19 2018-07-31 深圳安邦科技有限公司 A kind of face identification system of deep learning
CN108629330A (en) * 2018-05-22 2018-10-09 上海交通大学 Face dynamic based on multi-cascade grader captures and method for quickly identifying and system
CN108769520B (en) * 2018-05-31 2021-04-13 康键信息技术(深圳)有限公司 Electronic device, image processing method, and computer-readable storage medium
CN108921106B (en) * 2018-07-06 2021-07-06 重庆大学 Capsule-based face recognition method
CN110728168B (en) * 2018-07-17 2022-07-22 广州虎牙信息科技有限公司 Part recognition method, device, equipment and storage medium
CN110826371A (en) * 2018-08-10 2020-02-21 京东数字科技控股有限公司 Animal identification method, device, medium and electronic equipment
CN110837762B (en) * 2018-08-17 2022-09-27 南京理工大学 Convolutional neural network pedestrian recognition method based on GoogLeNet
CN109241890B (en) * 2018-08-24 2020-01-14 北京字节跳动网络技术有限公司 Face image correction method, apparatus and storage medium
CN111062400B (en) * 2018-10-16 2024-04-30 浙江宇视科技有限公司 Target matching method and device
CN111241892A (en) * 2018-11-29 2020-06-05 中科视语(北京)科技有限公司 Face recognition method and system based on multi-neural-network model joint optimization
CN109657595B (en) * 2018-12-12 2023-05-02 中山大学 Key feature region matching face recognition method based on stacked hourglass network
CN109784255B (en) * 2019-01-07 2021-12-14 深圳市商汤科技有限公司 Neural network training method and device and recognition method and device
CN109886157A (en) * 2019-01-30 2019-06-14 杭州芯影科技有限公司 A kind of face identification method and system based on millimeter-wave image
CN109948568A (en) * 2019-03-26 2019-06-28 东华大学 Embedded human face identifying system based on ARM microprocessor and deep learning
CN110059645A (en) * 2019-04-23 2019-07-26 杭州智趣智能信息技术有限公司 A kind of face identification method, system and electronic equipment and storage medium
CN111860069A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Image processing method and system
WO2020237482A1 (en) * 2019-05-27 2020-12-03 深圳市汇顶科技股份有限公司 Optical sensor, apparatus and method for facial recognition, and electronic device
CN110337656A (en) * 2019-05-27 2019-10-15 深圳市汇顶科技股份有限公司 For the optical sensor of recognition of face, device, method and electronic equipment
CN110458134B (en) * 2019-08-17 2020-06-16 南京昀趣互动游戏有限公司 Face recognition method and device
CN110633655A (en) * 2019-08-29 2019-12-31 河南中原大数据研究院有限公司 Attention-attack face recognition attack algorithm
CN110598638A (en) * 2019-09-12 2019-12-20 Oppo广东移动通信有限公司 Model training method, face gender prediction method, device and storage medium
CN110852150B (en) * 2019-09-25 2022-12-20 珠海格力电器股份有限公司 Face verification method, system, equipment and computer readable storage medium
CN110909618B (en) * 2019-10-29 2023-04-21 泰康保险集团股份有限公司 Method and device for identifying identity of pet
CN110796112A (en) * 2019-11-05 2020-02-14 青岛志泊电子信息科技有限公司 In-vehicle face recognition system based on MATLAB
CN111104852B (en) * 2019-11-06 2020-10-16 重庆邮电大学 Face recognition technology based on heuristic Gaussian cloud transformation
CN111275005B (en) * 2020-02-21 2023-04-07 腾讯科技(深圳)有限公司 Drawn face image recognition method, computer-readable storage medium and related device
CN111291711A (en) * 2020-02-25 2020-06-16 山东超越数控电子股份有限公司 Python-based deep learning face recognition method, equipment and readable storage medium
CN111783605B (en) * 2020-06-24 2024-05-24 北京百度网讯科技有限公司 Face image recognition method, device, equipment and storage medium
CN111783681B (en) * 2020-07-02 2024-08-13 深圳市万睿智能科技有限公司 Large-scale face library identification method, system, computer equipment and storage medium
CN111768511A (en) * 2020-07-07 2020-10-13 湖北省电力装备有限公司 Staff information recording method and device based on cloud temperature measurement equipment
CN111797793A (en) * 2020-07-10 2020-10-20 重庆三峡学院 Campus identity intelligent management system based on face recognition technology
CN111797792A (en) * 2020-07-10 2020-10-20 重庆三峡学院 Novel identity recognition device and method based on campus management
CN112069895A (en) * 2020-08-03 2020-12-11 广州杰赛科技股份有限公司 Small target face recognition method and device
CN112784712B (en) * 2021-01-08 2023-08-18 重庆创通联智物联网有限公司 Missing child early warning implementation method and device based on real-time monitoring
CN113743176A (en) * 2021-01-29 2021-12-03 北京沃东天骏信息技术有限公司 Image recognition method, device and computer readable storage medium
CN112863593B (en) * 2021-02-05 2024-02-20 厦门大学 Identification feature extraction method and system based on skin metagenome data
CN115311705B (en) * 2022-07-06 2023-08-15 南京邮电大学 Face cloud recognition system based on deep learning
CN116912918B (en) * 2023-09-08 2024-01-23 苏州浪潮智能科技有限公司 Face recognition method, device, equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978550A (en) * 2014-04-08 2015-10-14 上海骏聿数码科技有限公司 Face recognition method and system based on large-scale face database
CN105117692A (en) * 2015-08-05 2015-12-02 福州瑞芯微电子股份有限公司 Real-time face identification method and system based on deep learning
CN105512638A (en) * 2015-12-24 2016-04-20 黄江 Fused featured-based face detection and alignment method
CN105550642A (en) * 2015-12-08 2016-05-04 康佳集团股份有限公司 Gender identification method and system based on multi-scale linear difference characteristic low-rank expression
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN106022317A (en) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 Face identification method and apparatus


Also Published As

Publication number Publication date
CN107609459A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
CN107609459B (en) A kind of face identification method and device based on deep learning
CN106096538B (en) Face identification method and device based on sequencing neural network model
CN106372581B (en) Method for constructing and training face recognition feature extraction network
CN104036255B (en) A kind of facial expression recognizing method
Silberman et al. Instance segmentation of indoor scenes using a coverage loss
Lajevardi et al. Higher order orthogonal moments for invariant facial expression recognition
Wu et al. Face alignment via boosted ranking model
CN109934293A (en) Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN110909618B (en) Method and device for identifying identity of pet
CN104361313B (en) A kind of gesture identification method merged based on Multiple Kernel Learning heterogeneous characteristic
CN104834941A (en) Offline handwriting recognition method of sparse autoencoder based on computer input
CN102968626B (en) A kind of method of facial image coupling
CN105354565A (en) Full convolution network based facial feature positioning and distinguishing method and system
JP5120238B2 (en) Object area extraction apparatus and object area extraction program
CN109002463A (en) A kind of Method for text detection based on depth measure model
Vinoth Kumar et al. A decennary survey on artificial intelligence methods for image segmentation
Lu et al. A novel synergetic classification approach for hyperspectral and panchromatic images based on self-learning
Kim et al. A shape preserving approach for salient object detection using convolutional neural networks
Zhu et al. Multiple human identification and cosegmentation: A human-oriented CRF approach with poselets
CN112200216A (en) Chinese character recognition method, device, computer equipment and storage medium
US11521427B1 (en) Ear detection method with deep learning pairwise model based on contextual information
WO2022062403A9 (en) Expression recognition model training method and apparatus, terminal device and storage medium
Dalara et al. Entity Recognition in Indian Sculpture using CLAHE and machine learning
Li et al. Multi-level Fisher vector aggregated completed local fractional order derivative feature vector for face recognition
CN109543590B (en) Video human behavior recognition algorithm based on behavior association degree fusion characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1244335

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant