Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Figure 1, in one embodiment a face comparison method is provided. The method may be applied to a terminal or to a server, and specifically includes the following steps:
Step 102: obtain a target facial image to be identified.
Here, the target facial image is the facial image to be identified. It may be obtained by invoking a camera to shoot directly, or it may be a stored facial image that is retrieved. In one embodiment, the target facial image is obtained by performing face detection on an initially captured image and then extracting the facial image.
In another embodiment, to improve the accuracy of subsequent comparison, after the facial image is obtained, an affine transformation is applied to the facial image according to key feature points in the image, and the result is taken as the target facial image; that is, the target facial image is the facial image after affine transformation. The key feature points may be the centers of the two eyes, the tip of the nose, and the two corners of the mouth.
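As an illustration of the key-point alignment just described, the following sketch (Python with NumPy; the template coordinates, keypoint values, and function names are assumptions for illustration, not part of any embodiment) estimates a least-squares affine transform mapping detected key feature points onto template positions:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding keypoints
    (e.g. eye centers, nose tip, mouth corners).
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Build [x, y, 1] design matrix and solve for the 6 affine parameters.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # shape (2, 3)

# Hypothetical template positions for a 112x112 aligned face crop.
TEMPLATE = np.array([[38.0, 52.0], [74.0, 52.0],   # eye centers
                     [56.0, 72.0],                 # nose tip
                     [42.0, 92.0], [70.0, 92.0]])  # mouth corners

# Detected keypoints in a rotated/shifted face photo (made-up numbers).
detected = np.array([[120.0, 80.0], [160.0, 90.0],
                     [135.0, 115.0], [115.0, 130.0], [150.0, 140.0]])

M = estimate_affine(detected, TEMPLATE)
aligned = np.hstack([detected, np.ones((5, 1))]) @ M.T
```

In practice the estimated matrix would be passed to an image-warping routine (e.g. OpenCV's `warpAffine`) to produce the aligned crop.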
Step 104: take the target facial image as the input of a grid mapping model, where the grid mapping model performs grid division on the target facial image according to the extracted face features, and obtain the target raster image output by the grid mapping model. The target raster image contains multiple grid regions, each grid region corresponds to a grid identifier, and each grid identifier uniquely identifies one face region.
Here, the grid mapping model rasterizes the target facial image: according to the extracted face features, it divides the target facial image into grids. The purpose of grid division is to assign different parts of the face to different grid regions. Each grid region corresponds to a grid identifier, which uniquely identifies one face part (face region). Fig. 2 is a schematic diagram, in one embodiment, of the raster image obtained after a facial image is rasterized; the raster image contains multiple cells (i.e., grid regions), and each cell corresponds to a specific face region.
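A minimal sketch of such a rasterization, using a uniform grid for simplicity (illustrative only: the trained grid mapping model divides according to face features, so its regions need not be uniform; all names are assumptions):

```python
import numpy as np

def rasterize(image, rows=6, cols=6):
    """Split an aligned face image into rows*cols grid regions.

    Returns a dict mapping grid identifier (1..rows*cols) -> sub-image,
    so the same identifier always refers to the same face region.
    """
    h, w = image.shape[:2]
    regions = {}
    grid_id = 1
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            regions[grid_id] = image[y0:y1, x0:x1]
            grid_id += 1
    return regions

face = np.zeros((108, 108), dtype=np.uint8)  # placeholder aligned face
regions = rasterize(face)
```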
Step 106: obtain the registered raster image corresponding to a registered facial image in the face database.
Here, the registered raster image is the raster image obtained after grid division is performed on the registered facial image. The registered raster image contains multiple grid regions, and each grid region corresponds to a grid identifier. The face database contains a registered facial image set, which includes multiple registered facial images. A registered facial image may be a frontal face image or a non-frontal one.
Step 108: compare the features of the target raster image and the registered raster image according to the grid identifier corresponding to each grid region, and obtain a face comparison result.
Here, the registered raster image also contains multiple grid regions, and each grid region has a corresponding grid identifier. The same grid identifier corresponds to the same face part. All faces contain the same number of grid regions, each grid region corresponds to a grid identifier, and the grid identifiers of different faces are in one-to-one correspondence. For example, face A contains 36 cells, each with a corresponding serial number, say 1, 2, 3, ..., 36; face B likewise contains 36 cells with serial numbers 1, 2, 3, ..., 36, and the face region corresponding to the same serial number is the same. Therefore, once the raster image corresponding to the target facial image is known, the same face regions can be compared in a targeted way according to the grid identifiers, which helps improve comparison accuracy.
Face features can be compared in two ways. In the first, the face features of each grid region in the target raster image are extracted in a preset grid order to obtain a target face feature vector; the registered face feature vector extracted from the registered raster image in the same grid order is then obtained, and the target face feature vector is compared with the registered face feature vector to obtain the face comparison result.
In the second, the face feature corresponding to each grid region is extracted separately, the face features of grid regions with the same grid identifier are compared to obtain a comparison result for each grid region, and the final face comparison result is obtained by a weighted sum.
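The second mode can be sketched as follows (a minimal illustration; the choice of cosine similarity, the weights, and the dictionary layout are assumptions, not prescribed by the embodiment):

```python
import numpy as np

def per_grid_score(target_feats, registered_feats, weights):
    """Compare features grid by grid, then combine the per-grid
    similarities with a weighted sum.

    target_feats / registered_feats: dict grid_id -> feature vector
    weights: dict grid_id -> weight (e.g. higher for distinctive regions)
    """
    total, weight_sum = 0.0, 0.0
    for gid, w in weights.items():
        t = np.asarray(target_feats[gid], dtype=float)
        r = np.asarray(registered_feats[gid], dtype=float)
        # cosine similarity between the two grid-region features
        sim = float(t @ r / (np.linalg.norm(t) * np.linalg.norm(r)))
        total += w * sim
        weight_sum += w
    return total / weight_sum

t = {1: [1.0, 0.0], 2: [0.0, 1.0]}
r = {1: [1.0, 0.0], 2: [1.0, 0.0]}
score = per_grid_score(t, r, {1: 2.0, 2: 1.0})
```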
With the above face comparison method, the target facial image is taken as the input of the grid mapping model to obtain the output target raster image, which contains multiple grid regions; each grid region corresponds to a grid identifier, and different grid identifiers represent different parts of the face. By comparing the features of the target raster image and the registered raster image according to the grid identifiers, the same part of the face can be compared in a targeted way, so that comparison accuracy is not affected by the angle at which the facial image was captured, which greatly improves the accuracy of face comparison.
In one embodiment, the registered raster image is a registered standard raster image, and comparing the features of the target raster image and the registered raster image according to the grid identifier corresponding to each grid region to obtain the face comparison result includes: standardizing the multiple grid regions in the target raster image according to the grid identifier corresponding to each grid region, to obtain a target standard raster image corresponding to the target facial image, where both the target standard raster image and the registered standard raster image contain a preset number of standard cells, each corresponding to a grid identifier; and comparing the features of the target standard raster image and the registered standard raster image to obtain the face comparison result.
Here, standardization means converting the grid regions in a raster image to a standard size. Because of differences in face angle in the target facial image, the extracted grid regions may differ in size. To improve the accuracy of subsequent face comparison, all grid regions are converted to the standard size by scaling. If the target facial image is not a frontal face image but a tilted one, for example a face turned to the left, then the two sides of the face appear at different sizes, and accordingly the cells extracted for different face regions differ in size. Each grid region corresponds to a face region, and under different face angles the same face region appears at different sizes. Fig. 3 is a schematic diagram, in one embodiment, of a side-face image after grid division; the cells in the figure differ in size, and after standardization each cell is converted to the standard size while still corresponding to its grid region, yielding the target standard raster image.
The registered standard raster image is a raster image containing cells of fixed size; it is obtained by converting the cells in the registered raster image to the standard size. To improve comparison accuracy, the target raster image is likewise standardized into the target standard raster image, specifically by scaling the grid regions in the target raster image into standard cells of fixed size. The converted target standard raster image is then compared with the registered standard raster image, which makes the comparison result more accurate.
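The standardization step can be sketched as follows (nearest-neighbor scaling as a crude stand-in for a proper image resize; the 16 × 16 standard size and all names are assumptions):

```python
import numpy as np

def standardize_cell(cell, size=(16, 16)):
    """Rescale one variable-size grid region to a standard size
    using nearest-neighbor sampling."""
    h, w = cell.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return cell[rows][:, cols]

def standardize(regions, size=(16, 16)):
    """Convert every grid region to the standard size, keeping its
    grid identifier, to form a standard raster image."""
    return {gid: standardize_cell(cell, size) for gid, cell in regions.items()}

# Cells of unequal size, as produced from a tilted face.
regions = {1: np.ones((10, 24)), 2: np.ones((20, 8))}
std = standardize(regions)
```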
In one embodiment, comparing the features of the target standard raster image and the registered standard raster image to obtain the face comparison result includes: taking the target standard raster image as the input of a feature extraction model, and obtaining the target face feature vector, output by the feature extraction model, corresponding to the target standard raster image; obtaining the registered face feature vector corresponding to the registered standard raster image; and comparing features according to the target face feature vector and the registered face feature vector to determine the face comparison result.
Here, the feature extraction model extracts the face features in the target standard raster image to obtain the target face feature vector, and may be implemented with a deep neural network model. The target standard raster image contains a preset number of standard cells; the face feature corresponding to each standard cell is extracted separately to obtain a cell feature vector, and the cell feature vectors are then combined into the target face feature vector in a preset order. In one embodiment, suppose there are 6 × 6 cells in total and the feature extracted for each cell has dimensions (m × n × k); then the final feature dimensions of the 6 × 6 cells are 6 × 6 × m × n × k.
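Assembling the per-cell features into one vector in a preset order might look like this (an illustrative sketch with random stand-in features; with 6 × 6 cells and a (2 × 2 × 3) feature per cell, the combined dimension is 6 × 6 × 2 × 2 × 3 = 432):

```python
import numpy as np

def assemble_feature_vector(cell_features):
    """Concatenate per-cell features into one face feature vector in a
    fixed (sorted grid-identifier) order, so target and registered
    vectors always line up cell by cell."""
    ordered = [np.asarray(cell_features[gid]).ravel()
               for gid in sorted(cell_features)]
    return np.concatenate(ordered)

# 6x6 cells, each with an (m, n, k) = (2, 2, 3) feature map (random stand-ins).
rng = np.random.default_rng(0)
cells = {gid: rng.normal(size=(2, 2, 3)) for gid in range(1, 37)}
vec = assemble_feature_vector(cells)  # 432 values in total
```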
The registered face feature vector is obtained in the same way. To save comparison time, the registered standard raster image is passed through the feature extraction model in advance to obtain the registered face feature vector, which is stored in association with the registered facial image; at subsequent comparisons, the registered face feature vector can then be retrieved directly.
As shown in Fig. 4, in one embodiment a face comparison method is proposed, comprising:
Step 402: obtain a target facial image to be identified.
Step 404: take the target facial image as the input of the grid mapping model, where the grid mapping model performs grid division on the target facial image according to the extracted face features, and obtain the target raster image output by the grid mapping model; the target raster image contains multiple grid regions, each corresponding to a grid identifier.
Step 406: standardize the multiple grid regions in the target raster image according to the grid identifier corresponding to each grid region, and obtain the target standard raster image corresponding to the target facial image; the target standard raster image contains a preset number of standard cells, each corresponding to a grid identifier.
Step 408: take the target standard raster image as the input of the feature extraction model, and obtain the target face feature vector, output by the feature extraction model, corresponding to the target standard raster image.
Step 410: obtain the registered face feature vector corresponding to the registered standard raster image.
Step 412: take the target standard raster image as the input of a grid characteristic coefficient model, and obtain the target characteristic coefficient vector output by the grid characteristic coefficient model; the target characteristic coefficient vector contains a characteristic coefficient corresponding to each grid region.
Here, the grid characteristic coefficient model sets a corresponding characteristic coefficient (weighting coefficient) for each grid region in the target standard raster image and outputs the target characteristic coefficient vector. The purpose of the target characteristic coefficient vector is to amplify distinctive fine features on the face (i.e., enlarge their characteristic coefficients) and to reduce the characteristic coefficients of features that are not distinctive. Some fine features on a face can distinguish very similar faces; for example, a black mole on the face can serve as an obvious characteristic of that person, and increasing the characteristic coefficient (weighting coefficient) of the grid region containing the mole helps improve the accuracy of face comparison.
That is, by setting characteristic coefficients, the coefficients of trustworthy regions of the face are increased and those of untrustworthy regions are reduced. For example, under poor lighting conditions, the face parts corresponding to some grid regions may be almost invisible (the features of those grid regions are then unreliable), in which case their characteristic coefficients should be reduced. The grid characteristic coefficient model therefore increases the weighting coefficients of clearly distinctive features, reduces those of non-distinctive features, and also reduces the weighting coefficients of regions that cannot be seen because of lighting, occlusion, and the like, which improves the accuracy of subsequent comparison.
The grid characteristic coefficient model can be obtained by training a deep learning network model: training raster image samples are obtained, the coefficient corresponding to each grid region in each training raster image sample is manually annotated, and the model is then trained with the training raster image samples as input and the corresponding annotations as desired output, yielding the trained grid characteristic coefficient model.
Step 414: update the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, and obtain an updated target face feature vector and an updated registered face feature vector.
Here, the target characteristic coefficient vector contains a characteristic coefficient corresponding to each grid region. After the target characteristic coefficient vector is obtained, the target face feature vector and the registered face feature vector are updated according to it. The update method applies the characteristic coefficient of each grid region in the target characteristic coefficient vector to the features of the corresponding grid region; that is, the feature vector corresponding to each grid region is multiplied by the corresponding characteristic coefficient, finally yielding the updated target face feature vector and the updated registered face feature vector. Because the final comparison is between the target face feature vector and the registered face feature vector, both must be updated simultaneously. For example, suppose the vector of face A is [1, 2, 3] and the vector of face B in the database is [5, 6, 7]; if inputting face A to the grid characteristic coefficient model yields the target characteristic coefficient vector [1.0, 0.1, 0.5], then the updated feature vector of A is [1 × 1.0, 2 × 0.1, 3 × 0.5] and the updated feature vector of B is [5 × 1.0, 6 × 0.1, 7 × 0.5].
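The worked example above can be reproduced directly (the function name is illustrative):

```python
import numpy as np

def apply_coefficients(target_vec, registered_vec, coeffs):
    """Weight both feature vectors element-wise with the target
    characteristic coefficient vector, so the same regions are
    amplified or attenuated on both sides of the comparison."""
    c = np.asarray(coeffs, dtype=float)
    return np.asarray(target_vec) * c, np.asarray(registered_vec) * c

# The example from the text: A=[1,2,3], B=[5,6,7], coefficients [1.0,0.1,0.5].
a_updated, b_updated = apply_coefficients([1, 2, 3], [5, 6, 7], [1.0, 0.1, 0.5])
```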
Step 416: compare the updated target face feature vector with the updated registered face feature vector, and determine the comparison result.
Here, the distance between the updated target face feature vector and the updated registered face feature vector is calculated, whether the two are the same face is determined according to the distance, and the comparison result is thereby determined. The distance can be calculated with the Euclidean distance formula, or with other distance measures such as the chi-square distance or cosine similarity.
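Two of the distance measures mentioned, sketched minimally:

```python
import numpy as np

def euclidean_distance(u, v):
    """Euclidean distance between two face feature vectors."""
    return float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))

def cosine_distance(u, v):
    """1 - cosine similarity; 0 means identical direction."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

d_euc = euclidean_distance([1.0, 0.2, 1.5], [5.0, 0.6, 3.5])
d_cos = cosine_distance([1.0, 0.0], [1.0, 0.0])
```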
With the above face comparison method, accurate comparison is possible even when the facial image is partially occluded or the lighting is poor.
As shown in Fig. 5, in one embodiment, obtaining the registered face feature vector corresponding to the registered facial image includes:
Step 502: take the registered standard raster image as the input of a registration characteristic coefficient model, where the registration characteristic coefficient model adjusts the weighting coefficients corresponding to the grid regions in the registered standard raster image, and obtain the registration characteristic coefficient vector output by the registration characteristic coefficient model;
Step 504: take the registered standard raster image as the input of the feature extraction model, and obtain the output initial registration feature vector;
Step 506: determine the registered face feature vector corresponding to the registered facial image according to the registration characteristic coefficient vector and the initial registration feature vector.
Here, the registration characteristic coefficient model adjusts the weighting coefficients of the grid regions in the registered standard raster image corresponding to the registered facial image; it is trained in the same way as the grid characteristic coefficient model. The feature extraction model extracts the features in the raster image to obtain a feature vector. The registered standard raster image is taken as the input of the feature extraction model to obtain the output initial registration feature vector. Finally, the initial registration feature vector is adjusted with the registration characteristic coefficient vector to obtain the registered face feature vector. The purpose of adjusting the initial registration feature vector is to reduce, through the coefficients, the distance between face features of the same person and to increase the distance between face features of different people. Adjusting the initial registration feature vector also allows dynamic adjustment according to the clarity of the registered face; for example, if lighting is poor in a certain area, the coefficient of that part can be turned down, which facilitates subsequent comparison and improves its accuracy.
In one embodiment, comparing features according to the target face feature vector and the registered face feature vector to determine the face comparison result includes: calculating the feature distance between the target face feature vector and each registered face feature vector, and determining the face comparison result according to those feature distances.
Here, face comparison is performed by calculating the feature distance between the target face feature vector and a registered face feature vector: the smaller the feature distance, the more similar the faces, and when the feature distance is below a preset distance, the two are considered the same person. The feature distance can be calculated with the Euclidean distance formula, or with other distance measures. If the distance between the target facial image and every registered facial image in the registered face database is not below the preset distance, the target facial image is not present in the database. If the calculated distances to two or more registered facial images are below the preset distance (as can happen with twins or multiple births), the registered facial image at the minimum distance is taken as the facial image closest to the target facial image.
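The database search described above, including the preset-distance threshold and the minimum-distance rule, can be sketched as follows (the identities, vectors, and threshold value are made up for illustration):

```python
import numpy as np

def find_match(target_vec, registry, threshold):
    """Search the registered face database: return the identity whose
    registered feature vector is closest to the target vector, or None
    if no distance falls below the preset threshold.

    registry: dict identity -> registered face feature vector
    """
    target = np.asarray(target_vec, dtype=float)
    best_id, best_dist = None, float("inf")
    for identity, vec in registry.items():
        dist = float(np.linalg.norm(target - np.asarray(vec, dtype=float)))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    # Below threshold: same person; otherwise not in the database.
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)

registry = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
match, dist = find_match([0.9, 0.1, 0.0], registry, threshold=0.5)
```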
In one embodiment, obtaining the target facial image to be identified further includes: obtaining an initial image to be identified, the initial image containing a face; detecting the face in the initial image and extracting the facial image; and applying an affine transformation to the facial image according to the key feature points in the facial image to obtain the target facial image.
Here, the initial image is a captured image that contains a face. To perform face comparison, the face in the initial image is detected, the facial image is extracted, and an affine transformation is then applied according to the key feature points in the facial image to obtain the target facial image. The purpose of the affine transformation is to align the face in a standard way. For example, for a target facial image of size 1024 × 1024, the key points (for example, the centers of the eyes, the tip of the nose, and the two corners of the mouth) are aligned to their corresponding positions, so that a standard target facial image is obtained. Preprocessing the facial image with the affine transformation improves the accuracy of subsequent comparison. Fig. 6 is a schematic diagram, in one embodiment, of obtaining the target facial image by applying an affine transformation to the facial image: first the key feature points on the facial image to be identified are detected, and after they are transformed to the preset key-feature-point positions of the standard template, the target facial image is obtained.
In one embodiment, the grid mapping model is obtained by training based on a deep neural network model. The training of the grid mapping model includes the following steps: obtaining a training facial image set containing multiple training facial images; obtaining the facial image annotation corresponding to each training facial image, the annotation being a training raster image containing multiple grid regions; and training the grid mapping model with the training facial images as input and the corresponding training raster images as desired output, to obtain the trained grid mapping model.
Here, the grid mapping model is obtained by training a deep neural network model, for example a convolutional neural network model. The specific training procedure is as follows: obtain the training facial image set together with the facial image annotation corresponding to each training facial image, the annotation being a training raster image containing multiple grid regions, i.e., a raster image in which the grid regions have already been divided. The training facial images are then taken as the input of the grid mapping model, and the corresponding training raster images containing multiple grid regions are used as annotations for model training, yielding the trained grid mapping model.
In one embodiment, the general training procedure is as follows: define a loss function, take a facial image as the input of the grid mapping model to obtain an actual output, calculate the distance between the actual output and the desired output according to the loss function, and adjust the parameters in the grid mapping model using gradient descent until the computed value of the loss function is below a preset value, i.e., until the gap between the actual output and the desired output becomes very small.
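A toy version of this training loop (a linear model and a mean-squared-error loss stand in for the deep grid mapping model and its loss function; all names and values are illustrative):

```python
import numpy as np

def train(inputs, targets, lr=0.1, tol=1e-6, max_steps=10000):
    """Gradient descent as described: compute the actual output,
    measure its distance to the desired output with the loss function,
    and adjust the parameters until the loss drops below a preset value."""
    X = np.asarray(inputs, dtype=float)
    Y = np.asarray(targets, dtype=float)
    w = np.zeros(X.shape[1])                # model parameters
    loss = float("inf")
    for _ in range(max_steps):
        pred = X @ w                        # actual output
        loss = np.mean((pred - Y) ** 2)     # distance to desired output
        if loss < tol:                      # below the preset value: stop
            break
        grad = 2 * X.T @ (pred - Y) / len(Y)
        w -= lr * grad                      # gradient descent step
    return w, loss

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([2.0, 3.0, 5.0])  # generated by the "true" parameters [2, 3]
w, final_loss = train(X, Y)
```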
As shown in Fig. 7, in one embodiment a face comparison device is proposed, the device comprising:
a first obtaining module 702, configured to obtain a target facial image to be identified;
a grid mapping module 704, configured to take the target facial image as the input of a grid mapping model, where the grid mapping model performs grid division on the target facial image according to the extracted face features, and to obtain the target raster image output by the grid mapping model, the target raster image containing multiple grid regions, each grid region corresponding to a grid identifier;
a second obtaining module 706, configured to obtain the registered raster image corresponding to the registered facial image in the face database; and
a comparison module 708, configured to compare the features of the target raster image and the registered raster image according to the grid identifier corresponding to each grid region, and obtain the face comparison result.
In one embodiment, the registered raster image is a registered standard raster image, and the comparison module 708 is further configured to standardize the multiple grid regions in the target raster image according to the grid identifier corresponding to each grid region, and obtain the target standard raster image corresponding to the target facial image, where both the target standard raster image and the registered standard raster image contain a preset number of standard cells, each corresponding to a grid identifier; and to compare the features of the target standard raster image and the registered standard raster image and obtain the face comparison result.
In one embodiment, the comparison module 708 is further configured to take the target standard raster image as the input of the feature extraction model, and obtain the target face feature vector, output by the feature extraction model, corresponding to the target standard raster image; obtain the registered face feature vector corresponding to the registered standard raster image; and compare features according to the target face feature vector and the registered face feature vector to determine the face comparison result.
As shown in Fig. 8, in one embodiment, the above face comparison device further includes:
a characteristic coefficient determining module 710, configured to take the target standard raster image as the input of the grid characteristic coefficient model and obtain the target characteristic coefficient vector output by the grid characteristic coefficient model, the target characteristic coefficient vector containing a characteristic coefficient corresponding to each grid region; and
an update module 712, configured to update the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, and obtain the updated target face feature vector and the updated registered face feature vector.
The comparison module is further configured to compare the updated target face feature vector with the updated registered face feature vector, and determine the comparison result.
In one embodiment, the comparison module 708 is further configured to take the registered standard raster image as the input of the registration characteristic coefficient model, where the registration characteristic coefficient model adjusts the weighting coefficients corresponding to the grid regions in the registered standard raster image, and obtain the registration characteristic coefficient vector output by the registration characteristic coefficient model; take the registered standard raster image as the input of the feature extraction model, and obtain the output initial registration feature vector; and determine the registered face feature vector corresponding to the registered facial image according to the registration characteristic coefficient vector and the initial registration feature vector.
In one embodiment, the comparison module is further configured to calculate the feature distance between the target face feature vector and each registered face feature vector, and determine the face comparison result according to those feature distances.
In one embodiment, the first obtaining module is further configured to obtain an initial image to be identified, the initial image containing a face; detect the face in the initial image and extract the facial image; and apply an affine transformation to the facial image according to the key feature points in the facial image to obtain the target facial image.
In one embodiment, the grid mapping model is obtained by training based on a deep neural network model, and the above face comparison device further includes:
a grid mapping model training module, configured to obtain a training facial image set containing multiple training facial images; obtain the facial image annotation corresponding to each training facial image, the annotation being a training raster image containing multiple grid regions; and train the grid mapping model with the training facial images as input and the corresponding training raster images as desired output, to obtain the trained grid mapping model.
Fig. 9 shows the internal structure of a computer device in one embodiment. The computer device may specifically be a server or a terminal device. The server includes but is not limited to a high-performance computer or a high-performance computer cluster; the terminal device includes but is not limited to a mobile terminal device or a desktop terminal device, where the mobile terminal device includes but is not limited to a mobile phone, tablet computer, smartwatch, or laptop, and the desktop terminal device includes but is not limited to a desktop computer or an in-vehicle computer. As shown in Fig. 9, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the face comparison method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the face comparison method. Those skilled in the art will understand that the structure shown in Fig. 9 is only a block diagram of the partial structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, the face comparison method provided by the present application may be implemented in the form of a computer program that runs on a computer device as shown in Fig. 9. The memory of the computer device may store the program modules constituting the face comparison device, for example, the first obtaining module 702, the grid mapping module 704, the second obtaining module 706, and the comparison module 708.
A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the following steps: obtaining a target facial image to be identified; taking the target facial image as the input of a grid mapping model, where the grid mapping model performs grid division on the target facial image according to the extracted face features, and obtaining the target raster image output by the grid mapping model, the target raster image containing multiple grid regions, each grid region corresponding to a grid identifier; obtaining the registered raster image corresponding to a registered facial image in the face database; and comparing the features of the target raster image and the registered raster image according to the grid identifier corresponding to each grid region, to obtain the face comparison result.
In one embodiment, the registration grid image is a registered standard grid image, and comparing the features of the target grid image and the registration grid image according to the grid identifier corresponding to each grid region to obtain a face comparison result includes: standardizing the plurality of grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target facial image, where the target standard grid image and the registered standard grid image each include a predetermined number of standard grids and each standard grid corresponds to a grid identifier; and comparing the features of the target standard grid image and the registered standard grid image to obtain a face comparison result.
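As a sketch of the standardization step, each grid region, keyed by its grid identifier, might be resampled onto a standard grid of a predetermined size; nearest-neighbour sampling here is only an assumed placeholder for the actual normalization:

```python
def standardize_region(region, size=4):
    """Resample one grid region onto a fixed size*size standard grid
    using nearest-neighbour sampling (an assumed stand-in for whatever
    normalization the embodiment actually applies)."""
    h, w = len(region), len(region[0])
    return [[region[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

def standardize_grid_image(regions, size=4):
    """Map every grid region, keyed by its grid identifier, onto a
    standard grid of the predetermined size, so that the target and
    registered grid images become directly comparable."""
    return {grid_id: standardize_region(region, size)
            for grid_id, region in regions.items()}

std = standardize_grid_image({0: [[1, 2], [3, 4]]}, size=2)
print(std)  # {0: [[1, 2], [3, 4]]}
```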
In one embodiment, comparing the features of the target standard grid image and the registered standard grid image to obtain a face comparison result includes: using the target standard grid image as the input of a feature extraction model, and obtaining a target face feature vector that is output by the feature extraction model and corresponds to the target standard grid image; obtaining a registered face feature vector corresponding to the registered standard grid image; and performing feature comparison according to the target face feature vector and the registered face feature vector, to determine a face comparison result.
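A toy stand-in for the feature extraction model could flatten the standard grids into a single vector, ordered by grid identifier so that target and registered vectors are dimension-aligned; the real model would be a learned network, not this flattening:

```python
def extract_feature_vector(standard_grid_image):
    """Toy placeholder for the feature extraction model: flattens every
    standard grid into one feature vector, ordered by grid identifier
    so the dimensions of target and registered vectors line up."""
    vector = []
    for grid_id in sorted(standard_grid_image):
        for row in standard_grid_image[grid_id]:
            vector.extend(row)
    return vector

grids = {0: [[1, 2]], 1: [[3, 4]]}
print(extract_feature_vector(grids))  # [1, 2, 3, 4]
```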
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: using the target standard grid image as the input of a grid feature coefficient model, and obtaining a target feature coefficient vector output by the grid feature coefficient model, the target feature coefficient vector including a feature coefficient corresponding to each grid region; and updating the target face feature vector and the registered face feature vector according to the target feature coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector. In this case, comparing the target face feature vector with the registered face feature vector to determine a face comparison result includes: comparing the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
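One plausible reading of the updating step is an element-wise weighting of each feature dimension by the coefficient of the grid region it came from; the names and numbers below are illustrative only:

```python
def update_with_coefficients(feature_vec, coeff_vec):
    """Weight each feature dimension by the feature coefficient of the
    grid region it came from; one plausible reading of 'updating' the
    feature vectors with the target feature coefficient vector."""
    return [f * c for f, c in zip(feature_vec, coeff_vec)]

target_features = [0.5, 0.8, 0.1]
registered_features = [0.4, 0.9, 0.2]
coefficients = [1.0, 0.5, 0.0]  # e.g. down-weight occluded grid regions
updated_target = update_with_coefficients(target_features, coefficients)
updated_registered = update_with_coefficients(registered_features, coefficients)
print(updated_target)  # [0.5, 0.4, 0.0]
```

Zeroing a coefficient removes the corresponding grid region from the comparison entirely, which is one way such a model could suppress unreliable regions.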
In one embodiment, obtaining the registered face feature vector corresponding to the registered facial image includes: using the registered standard grid image as the input of a registration feature coefficient model, the registration feature coefficient model being configured to adjust the weight coefficient corresponding to each grid region in the registered standard grid image, and obtaining a registration feature coefficient vector output by the registration feature coefficient model; using the registered standard grid image as the input of the feature extraction model to obtain an initial registration feature vector; and determining the registered face feature vector corresponding to the registered facial image according to the registration feature coefficient vector and the initial registration feature vector.
In one embodiment, performing feature comparison according to the target face feature vector and the registered face feature vector to determine a face comparison result includes: calculating the feature distance between the target face feature vector and each registered face feature vector; and determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
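The feature distance might, for example, be the Euclidean distance, with the comparison result taken as the closest registered vector under a threshold; the metric and the threshold below are assumptions, since the embodiment leaves both open:

```python
import math

def feature_distance(a, b):
    """Euclidean distance between two face feature vectors (the
    embodiment leaves the metric open; cosine distance would serve
    equally well)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(target_vec, registered_vecs, threshold=1.0):
    """Compare the target vector against every registered vector and
    report the closest match, or None if every distance exceeds the
    (assumed) threshold."""
    distances = {name: feature_distance(target_vec, vec)
                 for name, vec in registered_vecs.items()}
    name = min(distances, key=distances.get)
    return name if distances[name] <= threshold else None

registered = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
print(best_match([0.15, 0.85], registered))  # -> alice
```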
In one embodiment, obtaining the target facial image to be identified includes: obtaining an initial image to be identified, the initial image containing a face; detecting the face in the initial image and extracting a facial image; and performing an affine transformation on the facial image according to key feature points in the facial image, to obtain the target facial image.
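As an illustration of deriving the alignment from key feature points, the rotation angle and scale factor mapping the detected eye centres onto assumed canonical positions can be computed as follows; a full implementation would also use the nose tip and the two mouth corner points, and would apply the resulting transform to the image pixels:

```python
import math

def alignment_transform(left_eye, right_eye, canonical=((0.3, 0.4), (0.7, 0.4))):
    """Derive the rotation angle and scale factor that would map the
    detected eye centres onto canonical positions; the canonical
    coordinates here are assumed, not prescribed by the embodiment."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    (clx, cly), (crx, cry) = canonical
    # rotation needed to make the detected inter-eye line match the canonical one
    angle = math.atan2(ry - ly, rx - lx) - math.atan2(cry - cly, crx - clx)
    # scale needed to make the detected inter-eye distance match the canonical one
    scale = math.hypot(crx - clx, cry - cly) / math.hypot(rx - lx, ry - ly)
    return angle, scale

angle, scale = alignment_transform((0.2, 0.5), (0.8, 0.5))
print(round(math.degrees(angle), 1), round(scale, 3))  # 0.0 0.667
```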
In one embodiment, the grid mapping model is obtained by training a deep neural network model. The training of the grid mapping model includes the following steps: obtaining a training face image set, the training face image set including a plurality of training facial images; obtaining a facial image annotation corresponding to each training facial image, the facial image annotation being a training grid image containing a plurality of grid regions; and using each training facial image as the input of the grid mapping model and the corresponding training grid image as the expected output, training the grid mapping model to obtain a trained grid mapping model.
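The training procedure can be summarized as a standard supervised loop in which the annotated training grid image serves as the expected output; `model`, `loss_fn`, and `step_fn` are placeholders for a real deep-learning framework's forward pass, loss function, and optimizer step:

```python
def train_grid_mapping_model(training_set, model, loss_fn, step_fn, epochs=3):
    """Schematic supervised training loop: each training facial image
    is fed to the grid mapping model and the annotated training grid
    image is the expected output. The callables are placeholders for
    a real framework's forward pass, loss, and weight update."""
    for _ in range(epochs):
        for face_image, expected_grid in training_set:
            predicted = model(face_image)          # forward pass
            loss = loss_fn(predicted, expected_grid)
            step_fn(loss)                          # back-propagate / update
    return model

# Demo with trivial stand-ins: an identity "model" and an absolute-error loss.
losses = []
train_grid_mapping_model([(1, 1), (2, 3)],
                         model=lambda x: x,
                         loss_fn=lambda pred, expected: abs(pred - expected),
                         step_fn=losses.append,
                         epochs=2)
print(losses)  # [0, 1, 0, 1]
```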
A computer-readable storage medium stores a computer program. When the computer program is executed by a processor, the following steps are implemented: obtaining a target facial image to be identified; using the target facial image as the input of a grid mapping model, the grid mapping model being configured to divide the target facial image into grids according to the extracted facial features; obtaining a target grid image output by the grid mapping model, the target grid image including a plurality of grid regions, each grid region corresponding to a grid identifier; obtaining a registration grid image corresponding to a registered facial image in a face database; and comparing the features of the target grid image and the registration grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
In one embodiment, the registration grid image is a registered standard grid image, and comparing the features of the target grid image and the registration grid image according to the grid identifier corresponding to each grid region to obtain a face comparison result includes: standardizing the plurality of grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target facial image, where the target standard grid image and the registered standard grid image each include a predetermined number of standard grids and each standard grid corresponds to a grid identifier; and comparing the features of the target standard grid image and the registered standard grid image to obtain a face comparison result.
In one embodiment, comparing the features of the target standard grid image and the registered standard grid image to obtain a face comparison result includes: using the target standard grid image as the input of a feature extraction model, and obtaining a target face feature vector that is output by the feature extraction model and corresponds to the target standard grid image; obtaining a registered face feature vector corresponding to the registered standard grid image; and performing feature comparison according to the target face feature vector and the registered face feature vector, to determine a face comparison result.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: using the target standard grid image as the input of a grid feature coefficient model, and obtaining a target feature coefficient vector output by the grid feature coefficient model, the target feature coefficient vector including a feature coefficient corresponding to each grid region; and updating the target face feature vector and the registered face feature vector according to the target feature coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector. In this case, comparing the target face feature vector with the registered face feature vector to determine a face comparison result includes: comparing the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
In one embodiment, obtaining the registered face feature vector corresponding to the registered facial image includes: using the registered standard grid image as the input of a registration feature coefficient model, the registration feature coefficient model being configured to adjust the weight coefficient corresponding to each grid region in the registered standard grid image, and obtaining a registration feature coefficient vector output by the registration feature coefficient model; using the registered standard grid image as the input of the feature extraction model to obtain an initial registration feature vector; and determining the registered face feature vector corresponding to the registered facial image according to the registration feature coefficient vector and the initial registration feature vector.
In one embodiment, performing feature comparison according to the target face feature vector and the registered face feature vector to determine a face comparison result includes: calculating the feature distance between the target face feature vector and each registered face feature vector; and determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
In one embodiment, obtaining the target facial image to be identified includes: obtaining an initial image to be identified, the initial image containing a face; detecting the face in the initial image and extracting a facial image; and performing an affine transformation on the facial image according to key feature points in the facial image, to obtain the target facial image.
In one embodiment, the grid mapping model is obtained by training a deep neural network model. The training of the grid mapping model includes the following steps: obtaining a training face image set, the training face image set including a plurality of training facial images; obtaining a facial image annotation corresponding to each training facial image, the facial image annotation being a training grid image containing a plurality of grid regions; and using each training facial image as the input of the grid mapping model and the corresponding training grid image as the expected output, training the grid mapping model to obtain a trained grid mapping model.
It should be noted that the above face comparison method, face comparison apparatus, computer device, and computer-readable storage medium belong to one overall inventive concept, and the contents of their respective embodiments are mutually applicable.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it shall be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.