CN109117726A - Identity authentication method, device, system and storage medium - Google Patents
- Publication number: CN109117726A
- Application number: CN201810751498.3A
- Authority: CN (China)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention relates to an identity authentication method, device, system and storage medium. The method comprises: obtaining the spatial coordinates of all real feature points of a user's face; constructing a corresponding face mesh model from the spatial coordinates of all the real feature points, and calculating a face feature vector from the face mesh model; and comparing the face feature vector with pre-stored comparison vectors in turn, and confirming the user's identity according to the comparison result. By acquiring the spatial coordinates of the feature points of the user's face, deriving the corresponding face feature vector from those coordinates, and comparing it with pre-stored comparison vectors representing different users' faces, the embodiments of the present invention authenticate a user's face quickly and with high recognition accuracy, ensuring a reliable recognition result.
Description
Technical field
The present invention relates to the technical field of face recognition, and more particularly to an identity authentication method, device, system and storage medium.
Background art
With the continuous progress of society and the urgent demand in many fields for fast and effective automatic identity authentication, biometric identification technology has developed rapidly in recent decades. Research on face recognition in particular has attracted a large number of researchers. Face recognition technology is very widely used, for example to assist criminal investigation and case solving by public security departments, automatic machine identity authentication, video surveillance tracking and identification, facial expression analysis, and so on. Many countries are currently carrying out research on face recognition and have developed a series of face recognition systems. In the face recognition process, a large amount of data related to the collected face images must be accumulated to continuously improve recognition accuracy. The amount of computation required in this process is very large, so the hardware requirements rise accordingly, which increases the cost of investment.
Dual rear cameras are now very common on mobile phones, but they are mostly used for low-light enhancement, night mode, background blur, stereoscopic shooting and the like. It has therefore become necessary to develop techniques that perform face recognition based on images acquired by a front dual camera. In addition, some special applications, such as face unlock, are gradually being introduced into actual products.
Summary of the invention
In order to solve the above technical problem, embodiments of the present invention provide an identity authentication method, device, system and storage medium, which can effectively improve the accuracy of face recognition authentication.
In a first aspect, an embodiment of the present invention provides an identity authentication method, comprising:
obtaining the spatial coordinates of all real feature points of a user's face;
constructing a corresponding face mesh model according to the spatial coordinates of all the real feature points, and calculating a face feature vector from the face mesh model;
comparing the face feature vector with pre-stored comparison vectors in turn, and confirming the user's identity according to the comparison result.
Based on the above technical solution, the embodiments of the present invention can be further improved as follows.
With reference to the first aspect, in a first embodiment of the first aspect, obtaining the spatial coordinates of all the feature points of the user's face specifically comprises:
obtaining several user face images acquired by a multi-camera assembly;
obtaining, for each user face image, the pixel coordinates of all matched feature points;
calculating the spatial coordinates of all the real feature points of the user's face according to the intrinsic parameters and extrinsic parameters of each individual camera of the multi-camera assembly and the pixel coordinates of all the matched feature points in each user face image.
With reference to the first embodiment of the first aspect, in a second embodiment of the first aspect, obtaining the several user face images acquired by the multi-camera assembly specifically comprises:
obtaining the intrinsic and extrinsic parameters of the multi-camera assembly through offline dual-camera calibration;
remapping the image obtained by each camera of the multi-camera assembly according to the intrinsic and extrinsic parameters, to obtain several remapped images;
cropping each remapped image to obtain several stereo images as the user face images.
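As a minimal illustration of the remapping step, the sketch below rectifies a calibrated camera pair so that corresponding points land on the same image row, following the classic approach of aligning both cameras with the baseline. The intrinsic matrix, rotation and translation are invented example values, not parameters from the patent.

```python
import numpy as np

def rectify_pair(K1, K2, R, T):
    """Compute rectifying projection matrices for a calibrated stereo pair.

    Camera 1 frame is the world frame: P1 = K1 [I | 0], P2 = K2 [R | T].
    Returns two new projection matrices whose image rows are aligned
    (corresponding points share the same vertical pixel coordinate).
    """
    c1 = np.zeros(3)                    # optical centre of camera 1
    c2 = -R.T @ T                       # optical centre of camera 2
    v1 = c2 - c1                        # new x-axis: along the baseline
    v2 = np.cross(np.array([0.0, 0.0, 1.0]), v1)  # new y-axis
    v3 = np.cross(v1, v2)               # new z-axis
    R_rect = np.vstack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
    K = (K1 + K2) / 2                   # shared intrinsics after rectification
    Pn1 = K @ np.hstack([R_rect, (-R_rect @ c1)[:, None]])
    Pn2 = K @ np.hstack([R_rect, (-R_rect @ c2)[:, None]])
    return Pn1, Pn2

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Example calibration values (illustrative only).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
angle = np.deg2rad(3.0)
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
T = np.array([-0.1, 0.0, 0.0])          # ~10 cm baseline

Pn1, Pn2 = rectify_pair(K, K, R, T)
X = np.array([0.05, -0.02, 1.5])        # a 3-D point in front of the rig
u1, row1 = project(Pn1, X)
u2, row2 = project(Pn2, X)
# After rectification the point lies on the same image row in both views.
print(abs(row1 - row2))
```

In a full pipeline the rectifying rotations would be turned into per-pixel lookup maps applied to the captured images, which is exactly the "remapping" the patent describes.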
With reference to the second embodiment of the first aspect, in a third embodiment of the first aspect, obtaining the pixel coordinates of all the matched feature points of each user face image specifically comprises:
selecting reference feature points in each user face image by means of at least two feature point selection algorithms;
retaining, in each user face image, the reference feature points that represent the same facial location of the user, as the matched feature points;
obtaining the pixel coordinates of each matched feature point in the different user face images.
With reference to the third embodiment of the first aspect, in a fourth embodiment of the first aspect, calculating the spatial coordinates of all the real feature points of the user's face according to the intrinsic and extrinsic parameters of each individual camera of the multi-camera assembly and the pixel coordinates of all the matched feature points in each user face image specifically comprises:
obtaining the essential matrix and the fundamental matrix of the multi-camera assembly;
constructing the projection matrix of each individual camera according to its intrinsic parameters, its extrinsic parameters, the essential matrix and the fundamental matrix;
constructing a spatial coordinate calculation formula for each matched feature point according to the projection matrix of each individual camera and the pixel coordinates of each matched feature point in the corresponding user face image;
obtaining the spatial coordinates of each matched feature point from all the spatial coordinate calculation formulas that represent the same facial location, as the spatial coordinates of the real feature points of the user's face.
With reference to the fourth embodiment of the first aspect, in a fifth embodiment of the first aspect, constructing the spatial coordinate calculation formula of each matched feature point according to the projection matrix of each individual camera and the pixel coordinates of each matched feature point in the corresponding user face image specifically comprises:
constructing, from the projection matrix of an individual camera and the pixel coordinates of each matched feature point in the user face image acquired by that camera, the following spatial coordinate calculation formula:
s·(u_i, v_i, 1)^T = (P_1; P_2; P_3)·X_i
wherein s is a constant scale factor, u_i is the pixel abscissa of the i-th matched feature point of the user face image, v_i is the pixel ordinate of the i-th matched feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the (homogeneous) spatial coordinate of the i-th matched feature point.
With reference to the fifth embodiment of the first aspect, in a sixth embodiment of the first aspect, obtaining the spatial coordinates of the matched feature point from all the spatial coordinate calculation formulas that represent the same facial location, as the spatial coordinates of the real feature point of the user's face, specifically comprises:
obtaining the following solution equations from all the spatial coordinate calculation formulas that represent the same facial location:
(u_i·P_3 − P_1)·X_i = 0
(v_i·P_3 − P_2)·X_i = 0
(u_i′·P_3′ − P_1′)·X_i = 0
(v_i′·P_3′ − P_2′)·X_i = 0
wherein u_i and v_i are the pixel abscissa and ordinate of the i-th matched feature point in one user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the corresponding projection matrix, u_i′ and v_i′ are the pixel abscissa and ordinate of the i-th matched feature point in another user face image, P_1′, P_2′ and P_3′ are the first, second and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the spatial coordinate of the i-th matched feature point;
and calculating the three-dimensional coordinate value of the i-th matched feature point X_i from the above solution equations.
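The four homogeneous equations above form the classic direct linear transform (DLT) triangulation problem, solvable in least squares via SVD. The sketch below is a minimal numpy illustration on synthetic data; the projection matrices and the 3-D point are made-up example values.

```python
import numpy as np

def triangulate_dlt(P, Pp, uv, uvp):
    """Solve (u*P3 - P1)*X = 0, (v*P3 - P2)*X = 0 for both views by SVD."""
    u, v = uv
    up, vp = uvp
    A = np.vstack([u * P[2] - P[0],
                   v * P[2] - P[1],
                   up * Pp[2] - Pp[0],
                   vp * Pp[2] - Pp[1]])
    # The null vector of A (smallest singular value) is X in homogeneous form.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic two-view setup: a reference camera and a translated camera.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.1, -0.05, 2.0])        # ground-truth feature point
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # recovered 3-D coordinates
```

With noise-free pixel coordinates the recovery is exact; with real detections the SVD solution is the least-squares compromise among the four constraints.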
With reference to the first embodiment of the first aspect, in a seventh embodiment of the first aspect, after obtaining the pixel coordinates of all the matched feature points of each user face image, the identity authentication method further comprises:
performing triangulation on all the matched feature points of any one user face image to obtain the topological relation of all the matched feature points, and storing it;
alternatively, after obtaining the spatial coordinates of all the real feature points of the user's face, the identity authentication method further comprises:
performing triangulation on all the real feature points of the user's face to obtain the topological relation of all the real feature points, and storing it.
With reference to the seventh embodiment of the first aspect, in an eighth embodiment of the first aspect, constructing the corresponding face mesh model according to the spatial coordinates of all the real feature points and calculating the face feature vector from the face mesh model specifically comprises:
constructing the face mesh model according to the topological relation and the spatial coordinates of all the real feature points;
calculating the normal vector of the face of each triangular mesh element in the face mesh model, and averaging all the normal vectors to obtain an orientation vector;
calculating the Gaussian curvature of each real feature point;
dividing the face mesh model into multiple feature regions, and accumulating the Gaussian curvatures of all the real feature points in each feature region to obtain a feature metric for each feature region;
calculating the geodesic distances between adjacent real feature points in the face mesh model, and selecting all the geodesic distances within a preset region of the face mesh model to form a characteristic distance vector of the face mesh model;
obtaining the face feature vector from all the feature metrics and the characteristic distance vector, the orientation vector being the direction of the face feature vector.
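To make the normal-averaging step concrete, the sketch below computes per-triangle normals and the mean orientation vector for a toy mesh; the vertex coordinates and triangle list are invented for illustration and are not part of the patent.

```python
import numpy as np

def orientation_vector(vertices, triangles):
    """Average the per-face normals of a triangle mesh into one unit vector."""
    normals = []
    for a, b, c in triangles:
        # Face normal: cross product of two edges of the triangle.
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals.append(n / np.linalg.norm(n))
    mean = np.mean(normals, axis=0)
    return mean / np.linalg.norm(mean)

# Toy "face" mesh: four points, two coplanar triangles in the z = 0 plane.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
triangles = [(0, 1, 2), (0, 2, 3)]     # consistent counter-clockwise winding

v = orientation_vector(vertices, triangles)
print(v)   # a flat mesh faces straight along +z
```

On a real face mesh this average indicates which way the head is turned, which is what makes it usable as the "direction" of the face feature vector when selecting a comparison vector of matching orientation.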
With reference to any one of the first through eighth embodiments of the first aspect, in a ninth embodiment of the first aspect, comparing the face feature vector with the pre-stored comparison vectors in turn and confirming the user's identity according to the comparison result specifically comprises:
obtaining, from the pre-stored reference vectors of each user in different orientations, the reference vector whose orientation matches the orientation of the face feature vector, as the comparison vector;
calculating the vector variance between the face feature vector and the comparison vector; when the vector variance is less than a preset threshold, the user corresponding to the face feature vector matches the user corresponding to the comparison vector.
With reference to the ninth embodiment of the first aspect, in a tenth embodiment of the first aspect, in the identity authentication method, the face feature vector and the comparison vector are both unit vectors after normalization.
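A minimal sketch of the comparison step, assuming the "vector variance" is the variance of the element-wise difference of the two normalized vectors, with an illustrative threshold of 1e-3 (neither interpretation nor value is specified numerically in the patent):

```python
import numpy as np

THRESHOLD = 1e-3   # illustrative value; the patent leaves the threshold open

def normalize(v):
    return v / np.linalg.norm(v)

def is_match(face_vec, comparison_vec, threshold=THRESHOLD):
    """Compare two feature vectors via the variance of their difference."""
    diff = normalize(face_vec) - normalize(comparison_vec)
    return np.var(diff) < threshold

enrolled = np.array([0.9, 0.1, 0.4, 0.2])        # stored comparison vector
probe_same = enrolled * 2.5                       # same face, different scale
probe_other = np.array([0.1, 0.8, 0.1, 0.6])      # a different face

print(is_match(probe_same, enrolled))    # True: identical after normalization
print(is_match(probe_other, enrolled))   # False: variance exceeds threshold
```

Normalizing first, as the tenth embodiment requires, makes the comparison insensitive to overall scale, so only the shape of the feature vector matters.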
In a second aspect, an embodiment of the present invention provides an identity authentication device, comprising:
a first data processing unit, a second data processing unit and a third data processing unit;
the first data processing unit is configured to obtain the spatial coordinates of all the real feature points of a user's face;
the second data processing unit is configured to construct a corresponding face mesh model according to the spatial coordinates of all the real feature points, and to calculate a face feature vector from the face mesh model;
the third data processing unit is configured to compare the face feature vector with pre-stored comparison vectors in turn, and to confirm the user's identity according to the comparison result.
With reference to the second aspect, in a first embodiment of the second aspect, the identity authentication device further comprises a multi-camera assembly;
the first data processing unit comprises:
a first obtaining module, configured to obtain several user face images acquired by the multi-camera assembly;
a second obtaining module, configured to obtain, for each user face image, the pixel coordinates of all the matched feature points;
a calculation module, configured to calculate the spatial coordinates of all the real feature points of the user's face according to the intrinsic and extrinsic parameters of each individual camera of the multi-camera assembly and the pixel coordinates of all the matched feature points in each user face image.
With reference to the first embodiment of the second aspect, in a second embodiment of the second aspect, the calculation module comprises:
an obtaining submodule, configured to obtain the essential matrix and the fundamental matrix of the multi-camera assembly;
a first calculation submodule, configured to construct the projection matrix of each individual camera according to its intrinsic parameters, its extrinsic parameters, the essential matrix and the fundamental matrix;
a second calculation submodule, configured to construct a spatial coordinate calculation formula for each matched feature point according to the projection matrix of each individual camera and the pixel coordinates of each matched feature point in the corresponding user face image;
a third calculation submodule, configured to obtain the spatial coordinates of the matched feature points from all the spatial coordinate calculation formulas that represent the same facial location, as the spatial coordinates of the real feature points of the user's face.
With reference to the second embodiment of the second aspect, in a third embodiment of the second aspect, the second calculation submodule is specifically configured to construct, from the projection matrix of an individual camera and the pixel coordinates of each matched feature point in the user face image acquired by that camera, the following spatial coordinate calculation formula:
s·(u_i, v_i, 1)^T = (P_1; P_2; P_3)·X_i
wherein s is a constant scale factor, u_i and v_i are the pixel abscissa and ordinate of the i-th matched feature point of the user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the projection matrix, and X_i is the spatial coordinate of the i-th matched feature point.
With reference to the third embodiment of the second aspect, in a fourth embodiment of the second aspect, the third calculation submodule is specifically configured to obtain the following solution equations from all the spatial coordinate calculation formulas that represent the same facial location:
(u_i·P_3 − P_1)·X_i = 0
(v_i·P_3 − P_2)·X_i = 0
(u_i′·P_3′ − P_1′)·X_i = 0
(v_i′·P_3′ − P_2′)·X_i = 0
wherein u_i and v_i are the pixel abscissa and ordinate of the i-th matched feature point in one user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the corresponding projection matrix, u_i′ and v_i′ are the pixel abscissa and ordinate of the i-th matched feature point in another user face image, P_1′, P_2′ and P_3′ are the first, second and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the spatial coordinate of the i-th matched feature point;
and to calculate the three-dimensional coordinate value of the i-th matched feature point X_i from these solution equations.
With reference to the first embodiment of the second aspect, in a fifth embodiment of the second aspect, the second data processing unit comprises:
a modeling module, configured to construct the face mesh model according to the spatial coordinates of all the real feature points;
a fourth calculation submodule, configured to calculate the normal vector of the face of each triangular mesh element in the face mesh model, and to average all the normal vectors to obtain an orientation vector;
a fifth calculation submodule, configured to calculate the Gaussian curvature of each real feature point, to divide the face mesh model into multiple feature regions, and to accumulate the Gaussian curvatures of all the real feature points in each feature region to obtain the feature metric of each feature region;
a sixth calculation submodule, configured to calculate the geodesic distances between adjacent real feature points in the face mesh model, and to select all the geodesic distances within a preset region of the face mesh model to form the characteristic distance vector of the face mesh model;
a vector composition module, configured to obtain the face feature vector from all the feature metrics and the characteristic distance vector, the orientation vector being the direction of the face feature vector.
With reference to the second aspect or any one of the first through fifth embodiments of the second aspect, in a sixth embodiment of the second aspect, the third data processing unit comprises:
a third obtaining module, configured to obtain, from the pre-stored reference vectors of each user in different orientations, the reference vector whose orientation matches the orientation of the face feature vector, as the comparison vector;
a seventh calculation submodule, configured to calculate the vector variance between the face feature vector and the comparison vector;
a judgment module, configured to compare the vector variance with a preset threshold; when the vector variance is less than the preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
In a third aspect, an embodiment of the present invention provides an identity authentication system, comprising: a memory, a processor, and at least one computer program stored in the memory and configured to be executed by the processor, the computer program being configured to execute the identity authentication method provided in any one of the embodiments of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored, the computer program being executable by a processor to implement the identity authentication method provided in any one of the embodiments of the first aspect.
Compared with the prior art, the above technical solution of the present invention has the following advantages: the embodiments of the present invention obtain the spatial coordinates of the feature points of the user's face, derive the corresponding face feature vector from the spatial coordinates of each feature point, and compare the face feature vector with pre-stored comparison vectors representing different users' faces, thereby authenticating the user's face quickly and with high recognition accuracy, ensuring a reliable recognition result.
Brief description of the drawings
Fig. 1 is a flowchart of an identity authentication method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of an identity authentication method provided by another embodiment of the present invention;
Fig. 3 is a first flowchart of an identity authentication method provided by a further embodiment of the present invention;
Fig. 4 is a second flowchart of an identity authentication method provided by a further embodiment of the present invention;
Fig. 5 is a third flowchart of an identity authentication method provided by a further embodiment of the present invention;
Fig. 6 is a fourth flowchart of an identity authentication method provided by a further embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an identity authentication device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an identity authentication system provided by an embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides an identity authentication method, comprising:
S11. Obtain the spatial coordinates of all the real feature points of a user's face.
Specifically, the spatial coordinates are the coordinates of the points in real space, and "real feature point" in this step is merely one designation of a feature point, consistent with designations such as "first feature point" and "second feature point"; that is, the coordinates of all feature points of the user's face that satisfy a certain requirement are obtained. The reference for the spatial coordinates may be an arbitrary object at an arbitrary position; for example, the device that acquires the user face images may serve as the origin of the spatial coordinate system.
The spatial coordinates may take different values depending on the chosen reference; however, with respect to the user, the distances between different real feature points do not change with the reference.
The real feature points themselves are not limited, i.e. the specific facial locations they occupy are not limited, and those skilled in the art can choose them reasonably. For example, feature points may be selected at salient facial locations such as the eyes, nose and mouth.
In this step, any feasible manner may be used to obtain the real feature points of the user's face and the corresponding spatial coordinates, which is not limited by the present invention. For example, the spatial coordinates may be entered by the user or other personnel and received as input in this step. Naturally, the spatial coordinates may also be obtained in real time, by computation or similar means.
For example, as shown in Fig. 2, one manner of obtaining the spatial coordinates of all the feature points of the user's face comprises:
S21. Obtain several user face images acquired by a multi-camera assembly.
Each user face image corresponds to one camera of the multi-camera assembly, i.e. each camera of the assembly acquires one user image. The number of cameras in the assembly can be configured reasonably according to the scale of the device and specific requirements, which is not limited by the present invention.
For example, in this step, obtaining the several user face images acquired by the multi-camera assembly specifically comprises:
obtaining the intrinsic and extrinsic parameters of the multi-camera assembly through offline dual-camera calibration.
The camera extrinsic parameters determine the relative pose between the camera coordinate system and the world coordinate system; the camera intrinsic parameters determine the projection of the camera from three-dimensional space to the two-dimensional image. Offline dual-camera calibration is the process by which the cameras are calibrated; this process produces a set of parameters, namely the camera intrinsics and extrinsics, which are mainly used for image distortion removal and view alignment.
The image obtained by each camera of the multi-camera assembly is then remapped according to the intrinsic and extrinsic parameters, yielding several remapped images.
Remapping means mapping each pixel of a captured image into the world coordinates of three-dimensional space according to the camera extrinsics and intrinsics; once the mapping is complete, the remapped image is obtained.
The remapped images are cropped respectively to obtain several stereo images as the user face images; that is, the face image in each remapped image is cropped out to obtain the corresponding stereo image.
Specifically, the multi-camera assembly is, for example, a binocular camera, and each camera of the binocular camera acquires one user face image.
S22. Obtain, for each user face image, the pixel coordinates of all the matched feature points.
"Matched feature point" in this step is merely one designation of a feature point, consistent with designations such as "first feature point" and "second feature point". The matched feature points and the pixel coordinates of each matched feature point are obtained for each user face image. Specifically, in order to calculate the spatial coordinates of all the real feature points of the user's face, this step obtains the pixel coordinates of all the matched feature points in each user face image for subsequent data processing.
Specifically, this step further includes: performing triangulation on all the matched feature points of any one user face image to obtain the topological relation of all the matched feature points, and storing it.
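The triangulation that produces this topology can be sketched with a Delaunay triangulation (one common choice; the patent does not name a specific scheme), here using scipy on a few made-up 2-D feature points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical pixel coordinates of matched feature points (illustrative).
points = np.array([[120.0, 80.0],    # left eye
                   [200.0, 82.0],    # right eye
                   [160.0, 130.0],   # nose tip
                   [125.0, 180.0],   # left mouth corner
                   [195.0, 178.0]])  # right mouth corner

tri = Delaunay(points)
# tri.simplices lists the vertex indices of each triangle; this index
# structure is the topological relation that gets stored for later meshing.
print(tri.simplices.shape[1])   # each simplex has 3 vertices
```

Because the topology is stored as vertex indices rather than coordinates, the same triangle list can later be applied to the 3-D spatial coordinates of the real feature points to build the face mesh model.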
Specifically, this step further handles situations such as closed eyes or an open mouth: because the user's expression at enrollment may deviate from the expression at the time of use, the feature points are regularized using the AAM model, i.e. the spacing of the eye feature points is unified, and positions such as the mouth and eyebrows are likewise unified, so that subsequent comparison can exclude the interference of a differing expression at unlock time.
As shown in Fig. 3, one way of obtaining the specific pixel coordinates comprises:
S31. Select reference feature points in each user face image by means of at least two feature point selection algorithms.
Specifically, the at least two feature point selection algorithms include the AAM algorithm, the ASM algorithm and the SIFT algorithm. The AAM and ASM algorithms are both based on the Point Distribution Model (PDM), in which the shape of a particular category of objects with similar geometry is represented by a number of key feature points. The SIFT algorithm, i.e. the Scale-Invariant Feature Transform algorithm, is a descriptor used in the field of image processing; it is scale-invariant, can detect key points in an image, and is a local feature descriptor. These algorithms belong to the prior art and are not described in detail here.
S32: retaining, in each user face image, the reference feature points that indicate the same location on the user's face, as matching feature points.
The reference feature points obtained in the above step are screened, and only the reference feature points that indicate the same location on the user's face in all user face images are retained. That is, each user face image contains reference points indicating the same facial location, and these reference points are taken as the matching feature points. Specifically, the screening can be performed by the RANSAC method, i.e. the random sample consensus method, which is frequently used in computer vision, for example to solve the matching-point problem of a camera pair and the computation of the fundamental matrix in the field of stereo vision, so that the feature points selected in the different user face images are on the same level.
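The RANSAC screening described above can be sketched as follows. This is a minimal illustration in plain Python, not the patent's implementation: it keeps the matches consistent with a dominant 2D translation, whereas a real stereo pipeline would hypothesize a fundamental matrix; the function name, tolerance and iteration count are assumptions.

```python
import random

def ransac_filter(matches, n_iters=200, tol=2.0, seed=0):
    """Keep only matches consistent with the dominant 2D translation.

    `matches` is a list of ((x1, y1), (x2, y2)) putative correspondences
    between two face images. A minimal sample here is a single match,
    which hypothesizes a translation; inliers agree with it within `tol`.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)  # minimal sample
        dx, dy = x2 - x1, y2 - y1                 # hypothesized translation
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# three consistent matches (translation (5, 3)) plus one gross outlier
matches = [((0, 0), (5, 3)), ((1, 1), (6, 4)),
           ((2, 0), (7, 3)), ((0, 2), (50, 60))]
kept = ransac_filter(matches)
```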
S33: obtaining the pixel coordinates of each matching feature point in the different user face images.
Specifically, the pixel coordinate value of each matching feature point in the corresponding user face image is obtained directly from each user face image.
S23: calculating the space coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera in the multi-camera rig and the pixel coordinates of all matching feature points in each user face image.
Specifically, the space coordinates of all real feature points on the user's face are calculated from the pixel coordinates of all matching feature points in each user face image and the intrinsic and extrinsic parameters of the cameras that captured the user face images. In this step, "real feature point" is merely one way of denoting a feature point, consistent with expressions such as "first feature point" and "second feature point".
Optionally, as shown in Figure 4, in this embodiment, the manner of calculating the space coordinates of all real feature points on the user's face in step S23 includes:
S41: obtaining the essential matrix and the fundamental matrix of the multi-camera rig.
Specifically, as is known to those skilled in the art, when shooting with a multi-camera rig, the essential matrix and the fundamental matrix of the rig can be obtained through offline stereo calibration parameters. These matrices belong to the prior art and are not described again here.
S42: constructing the projection matrix of each individual camera according to its intrinsic parameters, extrinsic parameters, the essential matrix and the fundamental matrix.
Specifically, by transformation of the essential matrix, the fundamental matrix and the intrinsic and extrinsic parameters of the camera, the projection matrix relating the user's face mesh model to the user face image can be obtained for subsequent processing; the projection matrix is a 3 × 4 matrix.
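A 3 × 4 projection matrix of this kind is conventionally built from the camera's intrinsic matrix K and extrinsic pose (R, t) as P = K·[R | t]. A minimal numpy sketch with hypothetical calibration values (the focal length, principal point and baseline below are assumptions for illustration):

```python
import numpy as np

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera aligned with world axes
t = np.array([[0.1], [0.0], [0.0]])  # hypothetical 10 cm baseline offset

# 3x4 projection matrix P = K [R | t]
P = K @ np.hstack([R, t])
```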
S43: constructing the space-coordinate calculation formula of each matching feature point according to the projection matrix of each individual camera and the pixel coordinates of each matching feature point of the corresponding user face image.
For example, the space-coordinate calculation formula of each matching feature point is as follows:
S · [u_i, v_i, 1]^T = P · X_i
where S is a constant scale factor, u_i is the pixel abscissa of the i-th matching feature point of the user face image, v_i is the pixel ordinate of the i-th matching feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the space coordinate of the i-th matching feature point.
Expanding the above space-coordinate calculation formula gives:
S·u_i = P_1 · X_i,  S·v_i = P_2 · X_i,  S = P_3 · X_i
Eliminating the parameter S and rearranging yields:
u_i · (P_3 · X_i) = P_1 · X_i,  v_i · (P_3 · X_i) = P_2 · X_i
so that:
(u_i·P_3 − P_1) · X_i = 0,  (v_i·P_3 − P_2) · X_i = 0
Specifically, the space coordinate above contains three unknowns, i.e. the three components of the three-dimensional coordinate, while eliminating the parameter S leaves only two groups of formulas, which is not enough to solve for the specific value of the space coordinate. The space-coordinate calculation formulas indicating the same facial location therefore need to be combined, so that the three unknowns contained in the space coordinate can be calculated from multiple groups of calculation formulas.
S44: obtaining the space coordinate of the matching feature point according to all space-coordinate calculation formulas indicating the same facial location, as the space coordinate of the real feature point on the user's face.
Specifically, the following solving formula is obtained according to all space-coordinate calculation formulas indicating the same facial location:
[ u_i·P_3 − P_1  ]
[ v_i·P_3 − P_2  ]  · X_i = 0
[ u_i'·P_3' − P_1' ]
[ v_i'·P_3' − P_2' ]
where u_i is the pixel abscissa of the i-th matching feature point of one user face image, v_i is the pixel ordinate of the i-th matching feature point of that user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the corresponding projection matrix, u_i' is the pixel abscissa of the i-th matching feature point of another user face image, v_i' is the pixel ordinate of the i-th matching feature point of the other user face image, P_1', P_2' and P_3' are the first, second and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the space coordinate of the i-th matching feature point;
The three-dimensional coordinate value of the i-th matching feature point X_i is calculated according to the solving formula.
Specifically, the space-coordinate calculation formulas obtained from multiple camera pairs yield corresponding solving formulas, from which the space coordinate value of the matching feature point is solved. Hence, in the above embodiment, the multi-camera rig is at least a binocular camera, and the more cameras the rig contains, the more accurate the final result: the rig can yield multiple groups of space-coordinate calculation formulas indicating the same facial location, and every three formulas yield one space-coordinate result, so multiple calculated results are available. Results with a large error are discarded from the calculated results and the remaining results are averaged, making the obtained result more accurate and reducing the error.
S12: constructing the corresponding face mesh model according to the space coordinates of all real feature points, and calculating a face feature vector through the face mesh model.
In a specific application, the most common face mesh model is a face 3D model, designed to simulate a real human face and used to determine the three-dimensional space coordinates of the face. In this embodiment, the face mesh model is constructed from the space coordinates of all the obtained real feature points. The mesh model can be composed of meshes of different or identical sizes formed by connecting an unlimited number of real feature points, and finally represents the three-dimensional space coordinates of the face; a face feature vector that can stand in for the face mesh model in calculation is computed from the face mesh model.
For example, as shown in Figure 5, the manner of calculating the face feature vector through the face mesh model includes:
S51: obtaining the topological relation of all real feature points, and constructing the face mesh model according to the space coordinates of the real feature points.
Specifically, the manner of obtaining the topological relation of the real feature points is not limited. For example, all real feature points on the user's face are triangulated by the Delaunay triangulation algorithm, which has the features of maximizing the minimum angle, being the triangulation "closest to regularized", and uniqueness (no four points are concyclic). The topological relation of all real feature points is thus obtained, and the corresponding face mesh model can be obtained from this topological relation and the space coordinates of the real feature points.
S52: calculating the normal vector of the face of each triangular mesh in the face mesh model, and averaging all the normal vectors to obtain an orientation vector.
Specifically, three points define a plane. In this step, the face mesh model is composed of triangular meshes formed by the space coordinates of the real feature points; the normal vector of the face of each triangular mesh, i.e. the direction that triangular mesh faces, is computed, and the normal vectors of all triangular meshes are averaged. The resulting orientation vector is the facing direction of the face mesh model.
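The per-triangle normal and averaged orientation vector of S52 can be sketched as follows; the toy mesh below (a flat square) stands in for the triangulated real feature points, whose normals all agree:

```python
import numpy as np

def mesh_orientation(vertices, triangles):
    """Average the unit face normals of a triangle mesh to obtain a
    single orientation vector for the whole model."""
    normals = []
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)          # face normal (right-hand rule)
        normals.append(n / np.linalg.norm(n))
    return np.mean(normals, axis=0)

# toy mesh: a flat square in the z = 0 plane, split into two triangles
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = [(0, 1, 2), (0, 2, 3)]
orient = mesh_orientation(verts, tris)      # both normals point along +z
```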
S53: calculating the Gaussian curvature of each real feature point.
Specifically, in differential geometry the Gaussian curvature at a point on a surface is the product of the principal curvatures; it is an intrinsic measure of curvature, i.e. its value depends only on how distances are measured on the surface, not on how the surface is embedded in space. Differences in Gaussian curvature can therefore be used as a value for evaluating the degree of similarity of different feature points.
S54: dividing the face mesh model into multiple feature regions, and accumulating the Gaussian curvatures of all real feature points in each feature region to obtain a feature metric of each feature region.
Specifically, in this step the face mesh model is divided into different feature regions, for example the more distinctive regions such as the eye region, the nose region and the mouth region; the Gaussian curvatures of the real feature points in each region are accumulated to obtain the feature metric of that feature region, which characterizes the state of the Gaussian curvature in that region.
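The per-region accumulation of S54 reduces to summing the point curvatures over a partition of the points. A toy sketch; the curvature values and region assignments below are hypothetical, not from the patent:

```python
# Hypothetical per-point Gaussian curvatures, indexed by feature-point id
curvatures = {0: 0.8, 1: 1.1, 2: 0.3, 3: 0.2, 4: 2.5}

# Hypothetical partition of the points into feature regions
regions = {"eye": [0, 1], "nose": [4], "mouth": [2, 3]}

# Feature metric of each region = sum of its points' Gaussian curvatures
metrics = {name: sum(curvatures[p] for p in pts)
           for name, pts in regions.items()}
```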
S55: calculating the geodesic distances between adjacent real feature points in the face mesh model, and selecting all geodesic distances within a preset region of the face mesh model to form a characteristic distance vector of the face mesh model.
Specifically, the geodesic distance is the shortest distance between two points along the surface of the object they lie on, not the length of the line segment between them. In this step, the geodesic distance between two real feature points is their distance along the surface of the face mesh model; it can serve as a metric of the difference between different people, and the characteristic distance vector formed from the geodesic distances in the preset region can serve as a metric representing the global shape of the model in that region.
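A geodesic distance along a mesh surface is commonly approximated as a shortest path over the mesh's edge graph. A minimal Dijkstra sketch; the adjacency data below is a hypothetical toy graph, not a real face mesh:

```python
import heapq

def geodesic(adj, src, dst):
    """Approximate the geodesic distance as the shortest path along mesh
    edges (Dijkstra); `adj` maps vertex -> [(neighbor, edge_length)]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for w, length in adj.get(v, []):
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

# toy mesh graph: the path through vertex 1 (2.0) beats the direct edge (5.0)
adj = {0: [(1, 1.0), (2, 5.0)],
       1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (0, 5.0)]}
d = geodesic(adj, 0, 2)
```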
S56: obtaining the face feature vector according to all feature metrics and the characteristic distance vector; the orientation vector is the direction of the face feature vector.
Specifically, in this step the feature metrics of the feature regions and the characteristic distance vector are spliced together to obtain the face feature vector representing the face mesh model, and the orientation vector representing the facing direction of the face mesh model is taken as the direction of the face feature vector. Optionally, since the face feature vector obtained in this way has many elements, all geodesic distances in the characteristic distance vector may also be accumulated to obtain a geodesic distance metric; the value of the face feature vector can then be obtained by scaling all feature metrics and the geodesic distance metric with weights and summing them, used as the characteristic value of the face mesh model, with the orientation vector as the direction of the face mesh model.
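The two variants of S56 — splicing into a vector, or weighted reduction to a single characteristic value — can be sketched as follows. The metric values and weights are illustrative assumptions, not from the patent:

```python
import numpy as np

# Hypothetical per-region feature metrics and geodesic distances
feature_metrics = np.array([1.9, 2.5, 0.5])   # e.g. eye, nose, mouth regions
distance_vector = np.array([3.2, 2.8, 4.1])   # geodesic distances in region

# Variant 1: splice metrics and distances into the face feature vector
face_feature = np.concatenate([feature_metrics, distance_vector])

# Variant 2: weighted scaling and summation into one characteristic value
w_metric, w_geo = 0.6, 0.4                    # assumed weights
characteristic_value = (w_metric * feature_metrics.sum()
                        + w_geo * distance_vector.sum())
```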
S13: comparing the face feature vector with pre-stored comparison vectors in turn, and confirming the user identity according to the comparison result.
Specifically, the pre-stored comparison vectors are obtained. It should be understood that a pre-stored comparison vector is a face feature vector obtained in advance by a different user through the above steps and stored. In this step, the face feature vector obtained in real time is compared with the pre-stored comparison vectors, and determining the comparison vector that matches the face feature vector confirms the identity of the user. Based on the above step S56, if the value of the face feature vector is obtained by weight-scaling and summing all feature metrics and the geodesic distance metric as the characteristic value of the face mesh model, then the comparison vector is a single value: when the difference between the characteristic value and that of the comparison vector is within a preset range, the face feature vector is judged to match the comparison vector. This manner of calculation improves computational efficiency, but its error rate is larger.
Specifically, in this step the face feature vector and the comparison vector are unit vectors after normalization. This is mainly to prevent the front-to-back position of the face at capture from differing from that at enrollment, which would bias the actual values of the converted space coordinates. Normalization is a way of simplifying the calculation, i.e. turning a dimensional expression into a dimensionless one. In this embodiment the face feature vector and the comparison vector contain many elements, so normalizing the vectors effectively reduces the amount of numerical comparison and improves computational efficiency.
Optionally, as shown in Figure 6, in this embodiment, the manner of comparing the face feature vector with the pre-stored comparison vectors in turn in step S13 includes:
S61: obtaining, from the pre-stored reference vectors of different directions of each user, the reference vector whose direction matches the direction of the face feature vector, as the comparison vector.
Specifically, as can be seen from the above embodiments, it should be understood that the pre-stored reference vectors are face feature vectors obtained in advance by different users through the above steps, and each user has one face feature vector stored in advance for each of several directions. In this step, the reference vector whose direction is consistent with that of the face feature vector acquired in real time is obtained as the comparison vector; it should be understood that each user should have one reference vector in this direction.
S62: calculating the vector variance of the face feature vector and the comparison vector.
S63: when the vector variance is less than a preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
Specifically, the variance of the face feature vector and the comparison vector, i.e. the overall error between them, is computed. When the vector variance is less than the preset threshold, the face feature vector matches the comparison vector, i.e. the user corresponding to the face feature vector and the user corresponding to the comparison vector match. The value of the preset threshold is any fractional value within the range 1.000-10.000, retaining three digits after the decimal point.
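The variance comparison of S62/S63 can be sketched as the mean squared difference between the two vectors tested against the threshold; interpreting "vector variance" this way is an assumption, and the threshold and vector values below are illustrative:

```python
import numpy as np

def vectors_match(feat, ref, threshold=2.500):
    """Compare two feature vectors by the mean squared difference
    ("vector variance"); below the threshold they are treated as the
    same user. Threshold is illustrative, within the 1.000-10.000 range."""
    diff = np.asarray(feat) - np.asarray(ref)
    return float(np.mean(diff ** 2)) < threshold

# small deviation: same user; gross deviation: different user
same = vectors_match([0.6, 0.8, 0.0], [0.58, 0.79, 0.05])
imposter = vectors_match([0.6, 0.8, 0.0], [90.0, -40.0, 10.0], threshold=1.000)
```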
Specifically, before step S11, face detection is performed on the acquired user face image. The detection method provided with OpenCV is adopted: the haarcascade of OpenCV is first used to perform a rough detection and localization of the face position in the input image. If no face is detected, no three-dimensional reconstruction is performed.
As shown in Figure 7, an embodiment of the present invention provides an identity authentication device, comprising:
a first data processing unit, a second data processing unit and a third data processing unit;
the first data processing unit is configured to obtain the space coordinates of all real feature points on the user's face;
the second data processing unit is configured to construct the corresponding face mesh model according to the space coordinates of all real feature points, and to calculate the face feature vector through the face mesh model;
the third data processing unit is configured to compare the face feature vector with pre-stored comparison vectors in turn, and to confirm the user identity according to the comparison result.
In this embodiment, the identity authentication device further comprises a multi-camera rig;
the first data processing unit includes:
a first obtaining module, configured to obtain several user face images acquired by the multi-camera rig;
a second obtaining module, configured to respectively obtain the pixel coordinates of all matching feature points of each user face image;
a calculation module, configured to calculate the space coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera in the multi-camera rig and the pixel coordinates of all matching feature points in each user face image.
In this embodiment, the calculation module includes:
an obtaining submodule, configured to obtain the essential matrix and the fundamental matrix of the multi-camera rig;
a first calculation submodule, configured to construct the projection matrix of each individual camera according to its intrinsic parameters, extrinsic parameters, the essential matrix and the fundamental matrix;
a second calculation submodule, configured to construct the space-coordinate calculation formula of each matching feature point according to the projection matrix of each individual camera and the pixel coordinates of each matching feature point of the corresponding user face image;
a third calculation submodule, configured to obtain the space coordinate of the matching feature point according to all space-coordinate calculation formulas indicating the same facial location, as the space coordinate of the real feature point on the user's face.
With reference to the second embodiment of the first aspect, in the third embodiment of the first aspect, the second calculation submodule is specifically configured to construct the following space-coordinate calculation formula according to the projection matrix of an individual camera and the pixel coordinates of each matching feature point of the user face image acquired by that camera:
S · [u_i, v_i, 1]^T = P · X_i
where S is a constant scale factor, u_i is the pixel abscissa of the i-th matching feature point of the user face image, v_i is the pixel ordinate of the i-th matching feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the space coordinate of the i-th matching feature point.
In this embodiment, the third calculation submodule is specifically configured to obtain the following solving formula according to all space-coordinate calculation formulas indicating the same facial location:
[ u_i·P_3 − P_1  ]
[ v_i·P_3 − P_2  ]  · X_i = 0
[ u_i'·P_3' − P_1' ]
[ v_i'·P_3' − P_2' ]
where u_i is the pixel abscissa of the i-th matching feature point of one user face image, v_i is the pixel ordinate of the i-th matching feature point of that user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the corresponding projection matrix, u_i' is the pixel abscissa of the i-th matching feature point of another user face image, v_i' is the pixel ordinate of the i-th matching feature point of the other user face image, P_1', P_2' and P_3' are the first, second and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the space coordinate of the i-th matching feature point;
and to calculate the three-dimensional coordinate value of the i-th matching feature point X_i according to the solving formula.
In this embodiment, the second data processing unit includes:
a modeling module, configured to construct the face mesh model according to the space coordinates of all real feature points;
a fourth calculation submodule, configured to calculate the normal vector of the face of each triangular mesh in the face mesh model, and to average all normal vectors to obtain the orientation vector;
a fifth calculation submodule, configured to calculate the Gaussian curvature of each real feature point, divide the face mesh model into multiple feature regions, and accumulate the Gaussian curvatures of all real feature points in each feature region to obtain the feature metric of each feature region;
a sixth calculation submodule, configured to calculate the geodesic distances between adjacent real feature points in the face mesh model, and to select all geodesic distances within the preset region of the face mesh model to form the characteristic distance vector of the face mesh model;
a vector composition module, configured to obtain the face feature vector according to all feature metrics and the characteristic distance vector, the orientation vector being the direction of the face feature vector.
In this embodiment, the third data processing unit includes:
a third obtaining module, configured to obtain, from the pre-stored reference vectors of different directions of each user, the reference vector whose direction matches the direction of the face feature vector, as the comparison vector;
a seventh calculation submodule, configured to calculate the vector variance of the face feature vector and the comparison vector;
a judgment module, configured to compare the vector variance with the preset threshold, where, when the vector variance is less than the preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
As shown in Figure 8, an embodiment of the present invention provides an identity authentication system, comprising: a memory, a processor and at least one computer program stored in the memory and configured to be executed by the processor, the processor executing the computer program stored in the memory to implement the following steps: obtaining the space coordinates of all real feature points on the user's face; constructing the corresponding face mesh model according to the space coordinates of all real feature points, and calculating the face feature vector through the face mesh model; comparing the face feature vector with pre-stored comparison vectors in turn, and confirming the user identity according to the comparison result.
In this embodiment, the processor executes the computer program stored in the memory specifically to implement the following steps: obtaining several user face images acquired by the multi-camera rig; respectively obtaining the pixel coordinates of all matching feature points of each user face image; and calculating the space coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera in the multi-camera rig and the pixel coordinates of all matching feature points in each user face image.
Preferably, obtaining the several user face images acquired by the multi-camera rig specifically includes: obtaining the intrinsic and extrinsic parameters of the multi-camera rig through offline stereo calibration parameters; remapping the image obtained by each camera of the rig through the intrinsic and extrinsic parameters to obtain several remapped images; and cropping the remapped images respectively to obtain several stereo images as the user face images.
Preferably, respectively obtaining the pixel coordinates of all matching feature points of each user face image specifically includes: selecting reference feature points in each user face image by at least two feature-point selection algorithms; retaining, in each user face image, the reference feature points indicating the same location on the user's face, as matching feature points; and obtaining the pixel coordinates of each matching feature point in the different user face images.
Preferably, calculating the space coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera in the multi-camera rig and the pixel coordinates of all matching feature points in each user face image specifically includes: obtaining the essential matrix and the fundamental matrix of the multi-camera rig; constructing the projection matrix of each individual camera according to its intrinsic parameters, extrinsic parameters, the essential matrix and the fundamental matrix; constructing the space-coordinate calculation formula of each matching feature point according to the projection matrix of each individual camera and the pixel coordinates of each matching feature point of the corresponding user face image; and obtaining the space coordinate of the matching feature point according to all space-coordinate calculation formulas indicating the same facial location, as the space coordinate of the real feature point on the user's face.
Preferably, constructing the space-coordinate calculation formula of each matching feature point according to the projection matrix of each individual camera and the pixel coordinates of each matching feature point of the corresponding user face image specifically includes:
constructing the following space-coordinate calculation formula according to the projection matrix of an individual camera and the pixel coordinates of each matching feature point of the user face image acquired by that camera:
S · [u_i, v_i, 1]^T = P · X_i
where S is a constant scale factor, u_i is the pixel abscissa of the i-th matching feature point of the user face image, v_i is the pixel ordinate of the i-th matching feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the space coordinate of the i-th matching feature point.
Preferably, obtaining the space coordinate of the matching feature point according to all space-coordinate calculation formulas indicating the same facial location, as the space coordinate of the real feature point on the user's face, specifically includes:
obtaining the following solving formula according to all space-coordinate calculation formulas indicating the same facial location:
[ u_i·P_3 − P_1  ]
[ v_i·P_3 − P_2  ]  · X_i = 0
[ u_i'·P_3' − P_1' ]
[ v_i'·P_3' − P_2' ]
where u_i is the pixel abscissa of the i-th matching feature point of one user face image, v_i is the pixel ordinate of the i-th matching feature point of that user face image, P_1, P_2 and P_3 are the first, second and third row vectors of the corresponding projection matrix, u_i' is the pixel abscissa of the i-th matching feature point of another user face image, v_i' is the pixel ordinate of the i-th matching feature point of the other user face image, P_1', P_2' and P_3' are the first, second and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the space coordinate of the i-th matching feature point;
and calculating the three-dimensional coordinate value of the i-th matching feature point X_i according to the solving formula.
In this embodiment, the processor executes the computer program stored in the memory also to implement the following steps: triangulating all matching feature points of any one user face image to obtain the topological relation of all matching feature points, and storing it; or triangulating all real feature points on the user's face to obtain the topological relation of all real feature points, and storing it.
Preferably, constructing the corresponding face mesh model according to the space coordinates of all real feature points and calculating the face feature vector through the face mesh model specifically includes: constructing the face mesh model according to the topological relation and the space coordinates of all real feature points; calculating the normal vector of the face of each triangular mesh in the face mesh model; averaging all normal vectors to obtain the orientation vector; calculating the Gaussian curvature of each real feature point; dividing the face mesh model into multiple feature regions, and accumulating the Gaussian curvatures of all real feature points in each feature region to obtain the feature metric of each feature region; calculating the geodesic distances between adjacent real feature points in the face mesh model, and selecting all geodesic distances within the preset region of the face mesh model to form the characteristic distance vector of the face mesh model; and obtaining the face feature vector according to all feature metrics and the characteristic distance vector, the orientation vector being the direction of the face feature vector.
In this embodiment, the processor executes the computer program stored in the memory specifically to implement the following steps: obtaining, from the pre-stored reference vectors of different directions of each user, the reference vector whose direction matches the direction of the face feature vector, as the comparison vector; and calculating the vector variance of the face feature vector and the comparison vector, where, when the vector variance is less than the preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
A storage medium recording the program code of software capable of implementing the functions of the above embodiments may be provided to the system or device of the above embodiments, and the computer (or CPU or MPU) of the system or device reads and executes the program code stored in the storage medium.
In this case, the program code itself read from the storage medium implements the functions of the above embodiments, and the storage medium storing the program code constitutes an embodiment of the present invention.
As the storage medium for providing the program code, for example, a floppy disk, a hard disk, an optical disc, a magneto-optical disk, a CD-ROM, a CD-R, a magnetic tape, a non-volatile memory card, a ROM and the like can be used.
The functions of the above embodiments can be realized not only by executing the program code read by the computer, but also through some or all of the actual processing performed by the OS (operating system) running on the computer according to the instructions of the program code.
In addition, the embodiments of the present invention also include the case where, after the program code read from the storage medium is written into a function expansion card inserted into the computer, or into a memory provided in a function expansion unit connected to the computer, a CPU or the like included in the function expansion card or function expansion unit performs part or all of the processing according to the instructions of the program code, thereby realizing the functions of the above embodiments.
An embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored; the computer program can be executed by a processor to implement the identity authentication method provided in any one of the embodiments of the aforementioned first aspect.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (20)
1. An identity authentication method, comprising:
obtaining spatial coordinates of all real feature points on a user's face;
constructing a corresponding face mesh model according to the spatial coordinates of all the real feature points, and computing a face feature vector from the face mesh model; and
comparing the face feature vector with pre-stored comparison vectors one by one, and confirming the user's identity according to the comparison result.
2. The identity authentication method according to claim 1, wherein obtaining the spatial coordinates of all feature points on the user's face specifically comprises:
obtaining a plurality of user face images captured by a multi-view camera;
obtaining, for each user face image, the pixel coordinates of all matched feature points; and
calculating the spatial coordinates of all real feature points on the user's face according to the intrinsic parameters and extrinsic parameters of each individual camera of the multi-view camera and the pixel coordinates of all matched feature points in each user face image.
3. The identity authentication method according to claim 2, wherein obtaining the plurality of user face images captured by the multi-view camera specifically comprises:
obtaining the intrinsic and extrinsic parameters of the multi-view camera from offline binocular calibration parameters;
remapping the image captured by each camera of the multi-view camera using the intrinsic and extrinsic parameters to obtain a plurality of remapped images; and
cropping each remapped image to obtain a plurality of rectified stereo images as the user face images.
4. The identity authentication method according to claim 3, wherein obtaining the pixel coordinates of all matched feature points of each user face image specifically comprises:
selecting reference feature points in each user face image using at least two feature point selection algorithms;
retaining, in each user face image, the reference feature points that indicate the same facial location, as the matched feature points; and
obtaining the pixel coordinates of each matched feature point in the different user face images.
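As an illustration (not part of the claims), the selection-then-retention step of claim 4 — run at least two detectors, keep only the points on which they agree — can be sketched in NumPy. The pixel tolerance `tol` and the function name are assumptions introduced here; the patent does not specify how agreement between algorithms is decided.

```python
import numpy as np

def retain_consensus_points(points_a, points_b, tol=2.0):
    """Keep points from detector A that lie within `tol` pixels of some
    point from detector B, i.e. both algorithms indicate the same facial
    location (one illustrative reading of the claim)."""
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    # Pairwise distances between the two detectors' keypoints.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    keep = d.min(axis=1) <= tol
    return points_a[keep]

# Detector A found three points; detector B confirms only two of them.
a = [(10.0, 10.0), (50.0, 40.0), (200.0, 120.0)]
b = [(10.5, 9.8), (51.0, 40.5)]
print(retain_consensus_points(a, b))  # the third point is discarded
```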
5. The identity authentication method according to claim 4, wherein calculating the spatial coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera of the multi-view camera and the pixel coordinates of all matched feature points in each user face image specifically comprises:
obtaining the essential matrix and the fundamental matrix of the multi-view camera;
constructing the projection matrix of each individual camera according to its intrinsic parameters, its extrinsic parameters, the essential matrix, and the fundamental matrix;
constructing a spatial coordinate calculation formula for each matched feature point according to the projection matrix of each individual camera and the pixel coordinates of each matched feature point in the corresponding user face image; and
obtaining the spatial coordinates of each matched feature point from all the spatial coordinate calculation formulas that refer to the same facial location, as the spatial coordinates of the real feature points on the user's face.
6. The identity authentication method according to claim 5, wherein constructing the spatial coordinate calculation formula for each matched feature point specifically comprises:
constructing, from the projection matrix of an individual camera and the pixel coordinates of each matched feature point of the user face image captured by that camera, the following spatial coordinate calculation formula:
S * [u_i, v_i, 1]^T = [P_1; P_2; P_3] * X_i
wherein S is a constant scale factor, u_i is the pixel abscissa of the i-th matched feature point of the user face image, v_i is the pixel ordinate of the i-th matched feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the spatial coordinate of the i-th matched feature point.
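As an illustration (not part of the claims), the symbols of claim 6 describe the standard pinhole projection equation S * [u_i, v_i, 1]^T = P * X_i, with P's rows being P_1, P_2, P_3. A minimal NumPy check, using made-up intrinsic parameters and point coordinates:

```python
import numpy as np

# An assumed projection matrix P (3x4): identity rotation, zero
# translation, focal length 800, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

X = np.array([0.1, -0.2, 2.0, 1.0])   # homogeneous 3D point X_i
s_uv1 = P @ X                          # equals S * [u_i, v_i, 1]^T
S = s_uv1[2]                           # the constant scale factor
u, v = s_uv1[:2] / S                   # recover the pixel coordinates
print(S, u, v)
```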
7. The identity authentication method according to claim 6, wherein obtaining the spatial coordinates of each matched feature point from all the spatial coordinate calculation formulas that refer to the same facial location specifically comprises:
obtaining, from all the spatial coordinate calculation formulas that refer to the same facial location, the following solving formula:
[u_i * P_3 - P_1; v_i * P_3 - P_2; u_i' * P_3' - P_1'; v_i' * P_3' - P_2'] * X_i = 0
wherein u_i is the pixel abscissa of the i-th matched feature point of one user face image, v_i is the pixel ordinate of the i-th matched feature point of that user face image, P_1, P_2, and P_3 are the first, second, and third row vectors of the corresponding projection matrix, u_i' is the pixel abscissa of the i-th matched feature point of another user face image, v_i' is the pixel ordinate of the i-th matched feature point of the other user face image, P_1', P_2', and P_3' are the first, second, and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the spatial coordinate of the i-th matched feature point; and
calculating the three-dimensional coordinate value of the i-th matched feature point X_i from the solving formula.
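As an illustration (not part of the claims), stacking the four constraints of claim 7 (u_i*P_3 - P_1 and v_i*P_3 - P_2 from one image, and their primed counterparts from the other) gives a homogeneous system A*X_i = 0 that is commonly solved by SVD. A sketch with two synthetic cameras; the camera parameters are made up:

```python
import numpy as np

def triangulate(P, Pp, uv, uvp):
    """Solve for the 3D point X_i from one pixel match.
    P, Pp: 3x4 projection matrices; uv, uvp: (u, v) in each image."""
    u, v = uv
    up, vp = uvp
    A = np.array([u * P[2] - P[0],
                  v * P[2] - P[1],
                  up * Pp[2] - Pp[0],
                  vp * Pp[2] - Pp[1]])
    # The least-squares solution of A X = 0 is the right singular
    # vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenise

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: one at the origin, one translated along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
P  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
Pp = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.05, -0.02, 1.5])
X_hat = triangulate(P, Pp, project(P, X_true), project(Pp, X_true))
print(np.round(X_hat, 6))
```

With noise-free pixel coordinates, the point is recovered exactly up to numerical precision; with real detections, the SVD solution is the least-squares estimate.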
8. The identity authentication method according to claim 2, wherein after obtaining the pixel coordinates of all matched feature points of each user face image, the identity authentication method further comprises:
performing triangulation on all matched feature points of any one user face image to obtain the topological relationship of all the matched feature points, and storing it;
or, after obtaining the spatial coordinates of all real feature points on the user's face, the identity authentication method further comprises:
performing triangulation on all real feature points on the user's face to obtain the topological relationship of all the real feature points, and storing it.
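As an illustration (not part of the claims), the triangulation step of claim 8 can be sketched with SciPy's Delaunay triangulation; Delaunay is one common choice, but the patent does not name a specific algorithm, and the point coordinates below are made up:

```python
import numpy as np
from scipy.spatial import Delaunay

# 2D pixel coordinates of matched feature points (illustrative values).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])

tri = Delaunay(pts)
# `simplices` is the stored topology: each row is one triangle given as
# three indices into `pts`.
print(tri.simplices)
```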
9. The identity authentication method according to claim 8, wherein constructing the corresponding face mesh model according to the spatial coordinates of all the real feature points, and computing the face feature vector from the face mesh model, specifically comprises:
constructing the face mesh model according to the topological relationship and the spatial coordinates of all the real feature points;
calculating the normal vector of the face of each triangular mesh in the face mesh model, and averaging all the normal vectors to obtain an orientation vector;
calculating the Gaussian curvature of each real feature point;
dividing the face mesh model into a plurality of feature regions, and summing the Gaussian curvatures of all real feature points in each feature region to obtain a feature metric of each feature region;
calculating the geodesic distances between adjacent real feature points in the face mesh model, and selecting all geodesic distances within a preset region of the face mesh model to form a feature distance vector of the face mesh model; and
obtaining the face feature vector from all the feature metrics and the feature distance vector, the orientation vector being the direction of the face feature vector.
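As an illustration (not part of the claims), two of the mesh quantities in claim 9 — the orientation vector and the per-vertex Gaussian curvature — can be sketched in NumPy. The angle-deficit formula is one standard discretisation of Gaussian curvature; the patent does not specify which discretisation it uses, and the unit-length normalisation of the orientation vector is likewise an assumption.

```python
import numpy as np

def orientation_vector(verts, tris):
    """Average of the per-triangle face normals (the claim's
    'orientation vector'), normalised to unit length (assumption)."""
    v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    mean = normals.mean(axis=0)
    return mean / np.linalg.norm(mean)

def angle_deficit_curvature(verts, tris, vertex):
    """Discrete Gaussian curvature at an interior vertex via the angle
    deficit 2*pi - sum(incident triangle angles at the vertex)."""
    total = 0.0
    for t in tris:
        if vertex not in t:
            continue
        i = list(t).index(vertex)
        p = verts[t[i]]
        a = verts[t[(i + 1) % 3]] - p
        b = verts[t[(i + 2) % 3]] - p
        total += np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 2 * np.pi - total

# A flat square fan around a centre vertex: the orientation vector is
# +z and the curvature at the centre is zero.
verts = np.array([[0.5, 0.5, 0.0], [0, 0, 0], [1, 0, 0],
                  [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3], [0, 3, 4], [0, 4, 1]])
print(orientation_vector(verts, tris))
print(angle_deficit_curvature(verts, tris, 0))
```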
10. The identity authentication method according to any one of claims 1 to 9, wherein comparing the face feature vector with the pre-stored comparison vectors one by one and confirming the user's identity according to the comparison result specifically comprises:
obtaining, from pre-stored reference vectors of each user in different directions, the reference vector whose direction matches that of the face feature vector, as the comparison vector; and
calculating the vector variance between the face feature vector and the comparison vector; when the vector variance is less than a preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
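As an illustration (not part of the claims), the threshold comparison of claim 10 can be sketched as follows. The patent does not define "vector variance"; it is read here as the mean squared component difference between the two unit vectors, and the threshold value is made up:

```python
import numpy as np

def same_user(face_vec, comp_vec, threshold=0.05):
    """Decide identity by thresholding the 'vector variance', read here
    as the mean squared component difference (assumption: the patent
    does not define this quantity)."""
    face_vec = np.asarray(face_vec, dtype=float)
    comp_vec = np.asarray(comp_vec, dtype=float)
    variance = np.mean((face_vec - comp_vec) ** 2)
    return bool(variance < threshold)

enrolled = np.array([0.6, 0.8, 0.0])        # pre-stored comparison vector
probe    = np.array([0.58, 0.81, 0.02])      # freshly computed feature vector
print(same_user(probe, enrolled))            # small variance: same user
print(same_user(np.array([0.0, 0.0, 1.0]), enrolled))  # large variance
```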
11. The identity authentication method according to claim 10, wherein the face feature vector and the comparison vector are unit vectors after normalization.
12. An identity authentication apparatus, comprising:
a first data processing unit, a second data processing unit, and a third data processing unit;
the first data processing unit being configured to obtain spatial coordinates of all real feature points on a user's face;
the second data processing unit being configured to construct a corresponding face mesh model according to the spatial coordinates of all the real feature points, and to compute a face feature vector from the face mesh model; and
the third data processing unit being configured to compare the face feature vector with pre-stored comparison vectors one by one, and to confirm the user's identity according to the comparison result.
13. The identity authentication apparatus according to claim 12, further comprising a multi-view camera;
wherein the first data processing unit comprises:
a first obtaining module, configured to obtain a plurality of user face images captured by the multi-view camera;
a second obtaining module, configured to obtain, for each user face image, the pixel coordinates of all matched feature points; and
a computing module, configured to calculate the spatial coordinates of all real feature points on the user's face according to the intrinsic and extrinsic parameters of each individual camera of the multi-view camera and the pixel coordinates of all matched feature points in each user face image.
14. The identity authentication apparatus according to claim 13, wherein the computing module comprises:
an obtaining submodule, configured to obtain the essential matrix and the fundamental matrix of the multi-view camera;
a first computing submodule, configured to construct the projection matrix of each individual camera according to its intrinsic parameters, its extrinsic parameters, the essential matrix, and the fundamental matrix;
a second computing submodule, configured to construct a spatial coordinate calculation formula for each matched feature point according to the projection matrix of each individual camera and the pixel coordinates of each matched feature point in the corresponding user face image; and
a third computing submodule, configured to obtain the spatial coordinates of each matched feature point from all the spatial coordinate calculation formulas that refer to the same facial location, as the spatial coordinates of the real feature points on the user's face.
15. The identity authentication apparatus according to claim 14, wherein the second computing submodule is specifically configured to:
construct, from the projection matrix of an individual camera and the pixel coordinates of each matched feature point of the user face image captured by that camera, the following spatial coordinate calculation formula:
S * [u_i, v_i, 1]^T = [P_1; P_2; P_3] * X_i
wherein S is a constant scale factor, u_i is the pixel abscissa of the i-th matched feature point of the user face image, v_i is the pixel ordinate of the i-th matched feature point of the user face image, P_1 is the first row vector of the projection matrix, P_2 is the second row vector of the projection matrix, P_3 is the third row vector of the projection matrix, and X_i is the spatial coordinate of the i-th matched feature point.
16. The identity authentication apparatus according to claim 15, wherein the third computing submodule is specifically configured to:
obtain, from all the spatial coordinate calculation formulas that refer to the same facial location, the following solving formula:
[u_i * P_3 - P_1; v_i * P_3 - P_2; u_i' * P_3' - P_1'; v_i' * P_3' - P_2'] * X_i = 0
wherein u_i is the pixel abscissa of the i-th matched feature point of one user face image, v_i is the pixel ordinate of the i-th matched feature point of that user face image, P_1, P_2, and P_3 are the first, second, and third row vectors of the corresponding projection matrix, u_i' is the pixel abscissa of the i-th matched feature point of another user face image, v_i' is the pixel ordinate of the i-th matched feature point of the other user face image, P_1', P_2', and P_3' are the first, second, and third row vectors of the projection matrix corresponding to the other user face image, and X_i is the spatial coordinate of the i-th matched feature point; and
calculate the three-dimensional coordinate value of the i-th matched feature point X_i from the solving formula.
17. The identity authentication apparatus according to claim 13, wherein the second data processing unit comprises:
a modeling module, configured to construct the face mesh model according to the spatial coordinates of all the real feature points;
a fourth computing submodule, configured to calculate the normal vector of the face of each triangular mesh in the face mesh model, and to average all the normal vectors to obtain an orientation vector;
a fifth computing submodule, configured to calculate the Gaussian curvature of each real feature point, to divide the face mesh model into a plurality of feature regions, and to sum the Gaussian curvatures of all real feature points in each feature region to obtain a feature metric of each feature region;
a sixth computing submodule, configured to calculate the geodesic distances between adjacent real feature points in the face mesh model, and to select all geodesic distances within a preset region of the face mesh model to form a feature distance vector of the face mesh model; and
a vector composition module, configured to obtain the face feature vector from all the feature metrics and the feature distance vector, the orientation vector being the direction of the face feature vector.
18. The identity authentication apparatus according to any one of claims 12 to 17, wherein the third data processing unit comprises:
a third obtaining module, configured to obtain, from pre-stored reference vectors of each user in different directions, the reference vector whose direction matches that of the face feature vector, as the comparison vector;
a seventh computing submodule, configured to calculate the vector variance between the face feature vector and the comparison vector; and
a judgment module, configured to compare the vector variance with a preset threshold; when the vector variance is less than the preset threshold, the user corresponding to the face feature vector and the user corresponding to the comparison vector are the same user.
19. An identity authentication system, comprising: a memory, a processor, and at least one computer program stored in the memory and configured to be executed by the processor, the computer program being configured to perform the identity authentication method according to any one of claims 1 to 11.
20. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program can be executed by a processor to implement the identity authentication method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810751498.3A CN109117726A (en) | 2018-07-10 | 2018-07-10 | A kind of identification authentication method, device, system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810751498.3A CN109117726A (en) | 2018-07-10 | 2018-07-10 | A kind of identification authentication method, device, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109117726A true CN109117726A (en) | 2019-01-01 |
Family
ID=64862483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810751498.3A Withdrawn CN109117726A (en) | 2018-07-10 | 2018-07-10 | A kind of identification authentication method, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109117726A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840486A (en) * | 2019-01-23 | 2019-06-04 | 深圳先进技术研究院 | Detection method, computer storage medium and the computer equipment of focus |
WO2020181900A1 (en) * | 2019-01-18 | 2020-09-17 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device and storage medium |
CN114743253A (en) * | 2022-06-13 | 2022-07-12 | 四川迪晟新达类脑智能技术有限公司 | Living body detection method and system based on distance characteristics of key points of adjacent faces |
CN117894059A (en) * | 2024-03-15 | 2024-04-16 | 国网江西省电力有限公司信息通信分公司 | 3D face recognition method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070104362A1 (en) * | 2005-11-08 | 2007-05-10 | Samsung Electronics Co., Ltd. | Face recognition method, and system using gender information |
CN103065289A (en) * | 2013-01-22 | 2013-04-24 | 清华大学 | Four-ocular video camera front face reconstruction method based on binocular stereo vision |
CN103198292A (en) * | 2011-12-20 | 2013-07-10 | 苹果公司 | Face feature vector construction |
CN104091162A (en) * | 2014-07-17 | 2014-10-08 | 东南大学 | Three-dimensional face recognition method based on feature points |
CN105701455A (en) * | 2016-01-05 | 2016-06-22 | 安阳师范学院 | Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method |
CN106910222A (en) * | 2017-02-15 | 2017-06-30 | 中国科学院半导体研究所 | Face three-dimensional rebuilding method based on binocular stereo vision |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070104362A1 (en) * | 2005-11-08 | 2007-05-10 | Samsung Electronics Co., Ltd. | Face recognition method, and system using gender information |
CN103198292A (en) * | 2011-12-20 | 2013-07-10 | 苹果公司 | Face feature vector construction |
CN103065289A (en) * | 2013-01-22 | 2013-04-24 | 清华大学 | Four-ocular video camera front face reconstruction method based on binocular stereo vision |
CN104091162A (en) * | 2014-07-17 | 2014-10-08 | 东南大学 | Three-dimensional face recognition method based on feature points |
CN105701455A (en) * | 2016-01-05 | 2016-06-22 | 安阳师范学院 | Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method |
CN106910222A (en) * | 2017-02-15 | 2017-06-30 | 中国科学院半导体研究所 | Face three-dimensional rebuilding method based on binocular stereo vision |
Non-Patent Citations (4)
Title |
---|
ANA BELEN MORENO et al.: "Face recognition using 3D surface-extracted descriptors", Irish Machine Vision and Image Processing Conference * |
G. G. GORDON: "Face recognition based on depth and curvature features", Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition * |
ZHANG Li: "Research on 3D building reconstruction technology based on stereo vision", Wanfang Online * |
GU Yiran: "Research on 3D face pose correction algorithms", Chinese Journal of Scientific Instrument * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020181900A1 (en) * | 2019-01-18 | 2020-09-17 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device and storage medium |
US11538207B2 (en) | 2019-01-18 | 2022-12-27 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and apparatus, image device, and storage medium |
CN109840486A (en) * | 2019-01-23 | 2019-06-04 | 深圳先进技术研究院 | Detection method, computer storage medium and the computer equipment of focus |
CN114743253A (en) * | 2022-06-13 | 2022-07-12 | 四川迪晟新达类脑智能技术有限公司 | Living body detection method and system based on distance characteristics of key points of adjacent faces |
CN114743253B (en) * | 2022-06-13 | 2022-08-09 | 四川迪晟新达类脑智能技术有限公司 | Living body detection method and system based on distance characteristics of key points of adjacent faces |
CN117894059A (en) * | 2024-03-15 | 2024-04-16 | 国网江西省电力有限公司信息通信分公司 | 3D face recognition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108334816B (en) | Multi-pose face recognition method based on contour symmetric constraint generation type countermeasure network | |
JP6681729B2 (en) | Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object | |
Spreeuwers | Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers | |
CN108717531B (en) | Human body posture estimation method based on Faster R-CNN | |
CN105005755B (en) | Three-dimensional face identification method and system | |
CN109117726A (en) | A kind of identification authentication method, device, system and storage medium | |
US8711210B2 (en) | Facial recognition using a sphericity metric | |
CN106651942A (en) | Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points | |
JP4780198B2 (en) | Authentication system and authentication method | |
CN110363047A (en) | Method, apparatus, electronic equipment and the storage medium of recognition of face | |
CN110175558A (en) | A kind of detection method of face key point, calculates equipment and storage medium at device | |
JPWO2005027048A1 (en) | Object posture estimation and verification system using weight information | |
Lee et al. | A SfM-based 3D face reconstruction method robust to self-occlusion by using a shape conversion matrix | |
CN105740781A (en) | Three-dimensional human face in-vivo detection method and device | |
CN113011401B (en) | Face image posture estimation and correction method, system, medium and electronic equipment | |
CN110349152A (en) | Method for detecting quality of human face image and device | |
Yin et al. | Towards accurate reconstruction of 3d scene shape from a single monocular image | |
CN111815768B (en) | Three-dimensional face reconstruction method and device | |
JP2008176645A (en) | Three-dimensional shape processing apparatus, control method of three-dimensional shape processing apparatus, and control program of three-dimensional shape processing apparatus | |
Zhang et al. | Natural image stitching with layered warping constraint | |
Zhou et al. | Feature-preserving tensor voting model for mesh steganalysis | |
US10440350B2 (en) | Constructing a user's face model using particle filters | |
JP2014038566A (en) | Image processor | |
Zhu et al. | An occlusion compensation learning framework for improving the rendering quality of light field | |
CN113705393A (en) | 3D face model-based depression angle face recognition method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20190101 |