CN110532979A - Three-dimensional image face recognition method and system - Google Patents

Three-dimensional image face recognition method and system

Info

Publication number
CN110532979A
CN110532979A
Authority
CN
China
Prior art keywords
image
face
depth
texture
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910827741.XA
Other languages
Chinese (zh)
Inventor
姜珂
于大明
冯超
吴松
刘贵朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huaxi Technology Research & Technology Co Ltd
Original Assignee
Shenzhen Huaxi Technology Research & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huaxi Technology Research & Technology Co Ltd filed Critical Shenzhen Huaxi Technology Research & Technology Co Ltd
Priority to CN201910827741.XA priority Critical patent/CN110532979A/en
Publication of CN110532979A publication Critical patent/CN110532979A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a three-dimensional image face recognition method and system, specifically comprising: pre-building a canonical reference face model and a registry of the face database under the frontal pose; aligning the depth data of the face image to be recognized with the depth image of the canonical reference face model and obtaining the pose parameters; aligning the texture image according to those pose parameters; extracting features from the aligned depth image and texture image respectively, computing a depth similarity S_depth with a depth classifier and a texture similarity S_texture with a texture classifier selected according to the deflection pose angle; and finally performing RGB-D face recognition using the finally aligned face image and the obtained similarity data. By performing 3D texture matching against a pre-stored standard 3D face database, the occluded face texture can be predicted, which significantly improves recognition performance under large pose variation and severe occlusion.

Description

Three-dimensional image face recognition method and system
Technical field
The present invention relates to the technical field of face recognition, and in particular to a three-dimensional image face recognition method and system.
Background art
Face recognition has become a widely deployed intelligent biometric identification technology used in many fields. Three-dimensional face recognition achieves higher recognition rates than two-dimensional face recognition and is receiving increasing attention. In most cases, however, a three-dimensional face image is not a frontal view: different poses occur.
To improve recognition accuracy across different poses, image data for each person must be stored in the database at many poses in advance. Given the database size and the feasibility of the actual computation, the question is how many poses to project. For example, suppose the projection angles are quantized at 5-degree intervals; in the three-dimensional pose space, roll is ignored and only yaw and pitch are considered (pitch is rotation about the X axis, also called the pitch angle; yaw is rotation about the Y axis, also called the yaw angle; roll is rotation about the Z axis, also called the roll angle). At 5-degree intervals, a single standard frontal face image must then be projected into 37 × 37 = 1369 face images. If the projection interval is made larger, recognition under arbitrary pose becomes a serious challenge; if it is made smaller, the algorithm consumes a large amount of storage space and time.
A further problem is that facial feature point localization directly affects face alignment and, in turn, the final recognition rate. In the prior art, when the face pose deflection is large, feature point localization is itself highly challenging and localization errors are often substantial, which limits practical application.
Billy et al. were the first to handle pose variation in 3D face recognition using RGB-D face data acquired with a high-speed, low-precision 3D acquisition device (Microsoft Kinect). Both the texture and depth information of the face data are transformed to the frontal pose, the similarities of the texture and depth maps are computed separately with a sparse-representation classification algorithm, and the two similarities are then simply fused as the final recognition result. The recognition accuracy ultimately achieved by this method, however, is still not ideal.
Main abbreviations:
2D: two-dimensional; denotes two-dimensional imaging in this method;
3D: three-dimensional; denotes three-dimensional imaging in this method;
RGB-D: RGB-Depth; denotes the color image data with depth information produced by a three-dimensional imaging device;
ICP: Iterative Closest Point; denotes the iterative closest point algorithm in this method;
HOG: Histogram of Oriented Gradients; a feature descriptor used for object detection in computer vision and image processing.
Summary of the invention
In view of the above shortcomings, the present invention aims to improve the accuracy of three-dimensional image face recognition under different poses.
To solve the above problems, the invention proposes a three-dimensional image face recognition method, characterized by:
Step 1.1: pre-building a canonical reference face model and a registry G of the face database under the frontal pose;
Step 1.2: letting the face image data to be recognized be Q = (I, D), where I is the image data of a face in an arbitrary pose and D is its corresponding depth data; aligning the depth data of the face image to be recognized with the canonical reference face model to achieve depth image alignment, and obtaining the pose parameters;
Step 1.3: rotating the frontal face images in the registry, according to the pose parameters obtained in step 1.2, to the same pose as the face image to be recognized, achieving texture image alignment;
Step 1.4: extracting features from the aligned depth image and texture image respectively; computing a depth similarity S_depth with the depth classifier, and computing a texture similarity S_texture with the texture classifier selected according to the deflection pose angle of the face image to be recognized;
Step 1.5: computing the weighted similarity of the face image to be recognized from the depth similarity S_depth and the texture similarity S_texture;
Step 1.6: performing the final RGB-D face recognition using the finally aligned face image and the obtained similarity data.
In the three-dimensional image face recognition method, the depth image alignment specifically comprises the following steps:
Step 2.1: cropping the face region from the depth map of the face image to be recognized in an arbitrary pose;
Step 2.2: computing the match between the face image to be recognized and the canonical reference face model using the ICP (iterative closest point) algorithm, obtaining the rotation matrix Rt and translation matrix Tt that best match the canonical reference face model;
Step 2.3: regularizing the depth image of the face image to be recognized according to the rotation matrix Rt and translation matrix Tt, i.e. generating the frontal depth face image after pose correction.
In the three-dimensional image face recognition method, the face region cropping is specifically implemented as:
Step 3.1: detecting the nose tip, which serves as the center point;
Step 3.2: centered on the nose tip, for an arbitrary point (x, y, z), computing its Euclidean distance d = sqrt((x - x0)^2 + (y - y0)^2 + (z - z0)^2) to the nose tip (x0, y0, z0); retaining the point in the face region if d ≤ 80 mm, and discarding it if d > 80 mm; the entire face region is cropped out in this way.
In the three-dimensional image face recognition method, the nose tip serves as the initial point for ICP alignment; the canonical reference face model is computed from some or all frontal-pose, expression-free face images in the face database by aligning them at the nose tip, summing all samples, and averaging.
In the three-dimensional image face recognition method, computing the match between the face image to be recognized and the canonical reference face model with the ICP algorithm specifically comprises: aligning the face image Q to be recognized and the canonical reference face model P at the nose tip position, then finely aligning them with ICP, where P = RQ + T, with R and T denoting the rotation and translation matrices between the face data model to be recognized and the canonical reference face model; after iteration, the rotation matrix Rt and translation matrix Tt that best match the canonical reference face model are obtained.
In the three-dimensional image face recognition method, the depth image regularization additionally includes a depth-information filling operation and face-information smoothing, specifically comprising the following steps:
Step 6.1: replacing the x coordinate of the depth information of the pose-corrected frontal depth face image with (-x) to generate the mirrored depth information;
Step 6.2: filling the corrected depth information with the mirrored depth information; concretely, computing the smallest Euclidean distance between each mirrored point and the original depth point cloud; if that distance exceeds a fixed threshold, point information is missing in that neighborhood of the original coordinates, so the mirrored point is retained in the original point cloud; this is repeated until all mirrored points have been traversed;
Step 6.3: applying face-information smoothing to the image obtained from the depth-information filling operation.
In the three-dimensional image face recognition method, the texture image of the frontal-pose face data in the registry G is transformed, according to the rotation matrix Rt and translation matrix Tt, into a face image G' with the same pose as the face to be recognized: G' = Rt^(-1)·G - Rt^(-1)·Tt, where Rt^(-1) denotes the inverse of the rotation matrix Rt; the 2D face texture images of all images with the identical pose are finally obtained by weak perspective projection.
In the three-dimensional image face recognition method, a joint Bayesian classifier is trained on training samples of various poses to generate multiple texture classifiers corresponding to different poses, and a single depth classifier is trained on the frontal views of all training samples.
Compared with the prior art, whose recognition rate declines markedly after large pose deflection, the recognition rate of the three-dimensional image face recognition method and system provided by the present invention does not decrease noticeably as the pose deflection angle increases. This reflects the algorithm's ability to handle large pose deflection: its performance is not unduly degraded by excessive pose deflection. Compared with the prior art, in which occlusion severely impairs or prevents recognition, this method performs 3D texture matching against a pre-stored standard 3D face database to predict the occluded face texture, significantly improving recognition performance under large pose variation and severe occlusion.
Description of the drawings
Fig. 1 is the framework diagram of the three-dimensional image face recognition method;
Fig. 2 is an example diagram of the depth alignment process;
Fig. 3 illustrates the definition of human head pose in a three-dimensional coordinate system.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, shall fall within the protection scope of the present invention.
Unlike the approach of Billy et al., which regularizes face images in different poses to the standard frontal pose, an alternative is to rotate the standard frontal-pose face images in the registry into many different poses and compare the face image to be recognized with each of these rotated images. Results show that, relative to transforming the face data to be recognized from its pose to the frontal pose before comparing similarities, rotating the registry faces to the same or a similar pose as the face data to be recognized and then computing similarity yields better recognition performance. Using the depth information in the RGB-D data, faces in different poses are aligned to the standard frontal face via the general reference face model, which solves the self-occlusion and deformation of the depth information caused by pose variation. The input image may be at any angle; an ICP-based three-dimensional face pose estimation method is proposed, and the face images are then aligned under the rotated pose. A joint Bayesian classifier computes the similarity between frontal depth images under the standard pose, and the similarity between the test texture image and the registry texture images under a given pose. Finally, the depth and texture similarities are fused to obtain the face most similar to the test image. The method therefore effectively solves face recognition under large pose deflection, and also obtains good results on databases that are challenging in terms of feature point localization, expression, and occlusion.
Let Q = (I, D) be the face image data to be recognized, where I is the texture image data of the input face in an arbitrary pose and D is its corresponding depth data; I and D are already semantically aligned. G denotes the registry, which stores one frontal RGB-D record per person. Finding the face most similar to the face image to be recognized mainly involves the following operations:
1) computing the similarity between the face data Q to be recognized and each entry of the registry G;
2) selecting a suitably trained classifier and, from the computed similarities and related information, finally determining which face is the most similar.
The similarity computation specifically comprises (1) the texture image similarity and (2) the depth image similarity.
However, for both texture and depth images, before computing the similarity between them, the texture image and depth image must each be aligned, i.e. the computed image is aligned with the images in the registry.
One: for depth image alignment, the input face image in an arbitrary pose is aligned with the canonical reference face model of the standard frontal face.
Two: for texture image alignment, the face images in the registry are rotated to the same pose as the input face image.
Three: the gradient orientation histogram features of the aligned depth and texture images are then extracted for the subsequent similarity computation.
Four: different classifiers are then selected for the aligned depth image and texture image respectively.
Five: the final recognition result is obtained by weighting the depth image and texture image similarities.
A three-dimensional image face recognition method specifically comprises:
Step 1.1: pre-building a canonical reference face model and a registry G of the face database under the frontal pose;
Step 1.2: letting the face image data to be recognized be Q = (I, D), where I is the image data of a face in an arbitrary pose and D is its corresponding depth data; aligning the depth data of the face image to be recognized with the canonical reference face model to achieve depth image alignment, and obtaining the pose parameters;
Step 1.3: rotating the frontal face images in the registry, according to the pose parameters obtained in step 1.2, to the same pose as the face image to be recognized, achieving texture image alignment;
Step 1.4: extracting features from the aligned depth image and texture image respectively; computing a depth similarity S_depth with the depth classifier, and computing a texture similarity S_texture with the texture classifier selected according to the deflection pose angle of the face image to be recognized;
Step 1.5: computing the weighted similarity of the face image to be recognized from the depth similarity S_depth and the texture similarity S_texture;
Step 1.6: performing the final RGB-D face recognition using the finally aligned face image and the obtained similarity data.
Fig. 1 is the framework diagram of the three-dimensional image face recognition method. First, the depth information of the face image to be recognized is registered with the canonical reference face model, then transformed to the standard frontal pose using symmetry filling and smoothing. For the RGB texture image part, the standard frontal-pose faces in the registry are rotated, according to the pose parameters of the current image, to the same pose as the test image.
Finally, separate Bayesian classifiers are obtained for the depth image and the RGB texture image by training on a large number of training face images, and the classification results are finally fused at the score level. (A Bayes classifier is the classifier that, among all classifiers, minimizes the probability of classification error, or minimizes the average risk under previously given costs. Its design method is one of the most basic statistical classification methods. Its classification principle is to compute, from an object's prior probability via the Bayes formula, its posterior probability, i.e. the probability that the object belongs to a given class, and to assign the object to the class with the maximum posterior probability.)
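The max-posterior rule described in the parenthesis above can be sketched as follows. The two-class setup, the Gaussian class-conditional densities, and all function names are illustrative assumptions, not part of the patent:

```python
import math

def gaussian_pdf(mu, sigma):
    """Return a 1-D Gaussian density function (illustrative likelihood model)."""
    return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_classify(x, priors, likelihoods):
    """Pick the class with maximum posterior P(c|x) ∝ P(x|c)·P(c)."""
    unnorm = {c: priors[c] * likelihoods[c](x) for c in priors}
    z = sum(unnorm.values())
    posteriors = {c: p / z for c, p in unnorm.items()}
    return max(posteriors, key=posteriors.get), posteriors

# Two hypothetical identity classes with equal priors:
label, post = bayes_classify(
    0.2,
    priors={"person_a": 0.5, "person_b": 0.5},
    likelihoods={"person_a": gaussian_pdf(0.0, 1.0),
                 "person_b": gaussian_pdf(5.0, 1.0)},
)
```

A score x = 0.2 lies near person_a's density peak, so the maximum-posterior rule selects person_a.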
How depth alignment is realized is described in detail below:
Step 1: face region cropping.
For the face data Q = (I, D) to be recognized in an arbitrary pose, the purpose of depth alignment is to obtain its pose-invariant frontal depth map.
Given face data to be recognized in an arbitrary pose, the first thing to do is crop out the face region. Compared with face region detection and cropping based on 2D face images, 3D-based face detection and cropping is relatively easy.
Cropping the face region from 3D face data usually requires only two steps: (1) detect the nose tip; (2) crop the face centered on the nose tip. There are many algorithms for automatically detecting the nose tip of a three-dimensional face. For example, one is based on a frontal 3D face without expression variation and generally assumes that the point nearest the camera, i.e. the highest point on the face, is the nose tip. Another method proposes automatic nose tip detection based on curvature; the 3D face it handles may be in an arbitrary pose. A premise of that method which cannot be ignored, however, is that the 3D face data must be acquired by a high-precision three-dimensional acquisition instrument.
Once the position of the nose tip is known, face cropping based on 3D data is straightforward. Centered on the nose tip, an arbitrary point (x, y, z) with Euclidean distance d = sqrt((x - x0)^2 + (y - y0)^2 + (z - z0)^2) to the nose tip (x0, y0, z0) is retained in the face region if d ≤ 80 mm and discarded if d > 80 mm; the entire face region is cropped out in this way.
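The cropping rule above can be sketched minimally as follows, assuming (as the text suggests for roughly frontal clouds) that the nose tip is simply the point nearest the camera, modeled here as the maximum-z point; the function name and synthetic point cloud are illustrative:

```python
import numpy as np

def crop_face_region(points, radius_mm=80.0):
    """Crop the face region from a 3D point cloud (units: mm).

    Assumes the nose tip is the point nearest the camera (maximum z here).
    Keeps every point within `radius_mm` Euclidean distance of the nose tip.
    """
    points = np.asarray(points, dtype=float)
    nose = points[np.argmax(points[:, 2])]           # assumed nose-tip heuristic
    d = np.linalg.norm(points - nose, axis=1)        # Euclidean distance to nose
    return points[d <= radius_mm], nose

# Tiny synthetic cloud: nose tip, a cheek point, and a distant shoulder point.
cloud = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 95.0], [200.0, 0.0, 0.0]])
face, nose = crop_face_region(cloud)
```

The shoulder point at 200 mm from the nose tip falls outside the 80 mm sphere and is discarded.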
Step 2: iterative closest point alignment.
This method uses the iterative closest point algorithm (Iterative Closest Point, ICP) to frontalize non-frontal face images. ICP is a classic, widely used, and very effective 3D registration algorithm.
This method uses ICP to realize the alignment between face depth data. Because ICP is sensitive to the initial point and computationally expensive, the following strategies are used:
(1) use the nose tip as the initial point for ICP alignment (for every face image to be aligned, the coordinate origin is moved to the nose tip to achieve initial nose tip alignment);
(2) to simplify the recognition stage, instead of aligning the face data to be recognized with each registered object in the registry, this method only needs to match the face data to be recognized with one general canonical reference face model.
Since different subjects have different face shapes, and each test subject is to be aligned with this canonical reference face model, the canonical reference face model must be composed of reliable, high-precision 3D face data without expression variation.
This method takes all frontal-pose, expression-free face images from the 3D face database in the USF database, aligns them at the nose tip, sums all samples, and averages. The USF database, from the University of South Florida (USF), includes 1870 sequences of 122 people. Each person walks around an elliptical path in front of the camera, with five varying conditions: shoe type A/B, with/without a bag, grass/concrete ground, left/right viewing angle, and two different time periods.
The face model obtained by resampling and equalizing these data is 128 × 128 in size.
The face data model Q to be recognized and the canonical reference face model P are first aligned at the nose tip position. Fine alignment is then performed with ICP. Let R and T denote the rotation and translation matrices between the face data model to be recognized and the canonical reference face model. The relationship between the two is formulated as follows:
P=RQ+T, (1)
When the ICP iteration terminates, the rotation and translation matrices (Rt and Tt) that best match the face data model to be recognized to the canonical reference face model have been found.
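The coarse-to-fine alignment of Eq. (1) can be sketched as a toy ICP loop. This sketch uses brute-force nearest-neighbour correspondences and an SVD (Kabsch) rigid fit; a production system would use a k-d tree and outlier rejection, and all names here are illustrative:

```python
import numpy as np

def best_rigid_transform(Q, P):
    """Least-squares rotation R and translation T with P ≈ R·Q + T (Kabsch)."""
    cq, cp = Q.mean(axis=0), P.mean(axis=0)
    H = (Q - cq).T @ (P - cp)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cp - R @ cq

def icp(Q, P, iters=5):
    """Toy ICP aligning probe cloud Q to reference P; returns (Rt, Tt)."""
    R_t, T_t, X = np.eye(3), np.zeros(3), Q.copy()
    for _ in range(iters):
        # brute-force correspondences: nearest P point for every X point
        d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
        R, T = best_rigid_transform(X, P[d.argmin(axis=1)])
        X = X @ R.T + T
        R_t, T_t = R @ R_t, R @ T_t + T      # compose incremental transforms
    return R_t, T_t

# Demo: recover a known 3-degree rotation about z and a small translation.
P = np.array([[x, y, 0.0] for x in (0, 10, 20) for y in (0, 10, 20)])
th = np.deg2rad(3.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([1.0, 2.0, 0.5])
Q = (P - T_true) @ R_true                    # i.e. P = R_true·Q + T_true
Rt, Tt = icp(Q, P)
```

Because the synthetic displacement is much smaller than the point spacing, the nearest-neighbour correspondences are exact and the loop converges in one iteration.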
Step 3: regularization to the frontal depth face image.
After the face data model to be recognized is finely aligned with the standard frontal reference face model, it is corrected to the standard frontal pose.
However, if the face data to be recognized has a certain pose deflection, part of the face information may be missing due to occlusion caused by the pose variation. The missing information is approximately filled in according to the symmetry of the face.
The specific steps are:
1) first, replace the x coordinate of the pose-corrected depth information with (-x) to generate the mirrored depth information;
2) fill the corrected depth information with the mirrored depth information; if the face data to be recognized is in the frontal pose, no filling is needed; if the face data to be recognized is a full profile, all of the mirrored information must be filled in;
3) whether a mirrored point needs to be filled into the original depth information is judged as follows:
compute the smallest Euclidean distance between the mirrored point and the original depth point cloud; if that distance exceeds a fixed threshold, point information is missing in that neighborhood of the original coordinates, so the mirrored point is retained in the original point cloud; this is repeated until all mirrored points have been traversed.
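The mirror-filling test can be sketched as below. The translated text is ambiguous about the direction of the threshold comparison; this sketch keeps a mirrored point when its nearest original point is *farther* than the threshold, which is the reading under which mirroring fills occluded holes. The function name and threshold value are illustrative:

```python
import numpy as np

def symmetry_fill(points, threshold=2.0):
    """Fill gaps in a frontalized face cloud by mirroring across x = 0.

    A mirrored point is kept only when no original point lies within
    `threshold` of it, i.e. when that neighbourhood is missing data.
    """
    points = np.asarray(points, dtype=float)
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect x -> -x
    kept = [m for m in mirrored
            if np.linalg.norm(points - m, axis=1).min() > threshold]
    if not kept:
        return points
    return np.vstack([points, np.array(kept)])

# A cloud whose left half (x < 0) is occluded, plus one near-symmetric point:
cloud = np.array([[5.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
filled = symmetry_fill(cloud)
```

The mirror of (5, 0, 0) lands in the empty left half and is kept; the mirror of (0.5, 0, 0) lies within the threshold of an existing point and is discarded.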
Fig. 2 is an example diagram of the depth alignment process, showing how two specific images and the registry entry 00 each pass through face cropping 11, pose correction 22, symmetry filling 33, and smoothing 44 to obtain frontal-pose depth information.
Step 4: depth-information filling and face-information smoothing.
To remove the noise introduced by acquisition and by the symmetry filling process, and to further obtain reliable 3D face information, a publicly available smoothing code [6] is used to smooth the obtained face information. Finally, the face region is resampled to 128 × 128, the x and y coordinates are aligned, and the z depth values are retained; at this point the frontal-pose depth image of the face data to be recognized, aligned with the reference face, is obtained.
How texture image alignment is realized is described in detail below:
Step 1: compute the projection of the 2D texture image.
In principle, this algorithm only needs to store a single frontal face model for each subject in the registry, so the data storage requirement is relatively modest. The rotation and translation matrices needed to rotate the face data model to be recognized to the standard frontal canonical reference face model are known; in turn, the face models under the standard frontal pose in the registry can easily be rotated to the pose of the face data to be recognized.
Any face model G under the standard frontal pose in the registry can be transformed, by the following transformation, into the model G' with the same pose as the face data model to be recognized:
G' = R^(-1)·G - R^(-1)·T, (2)
where R^(-1) denotes the inverse of the rotation matrix R. The standard frontal pose images of all subjects in the registry are rotated according to formula (2) to the same pose as the face data to be recognized, and the face images of the different subjects with the same pose as the face data image to be recognized are then obtained by weak perspective projection.
Note that all face models, including the face data model to be recognized, the frontal-pose face models of all subjects in the registry, and the face models in the training set, undergo fine ICP-based matching with the canonical reference face model under the standard frontal pose. In other words, all of these face models are aligned with the canonical reference face model of the standard frontal pose; therefore, the 2D texture images projected from these aligned face models remain aligned.
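Formula (2) and the weak perspective projection can be sketched as follows; the point-per-row layout, the scale factor, and the names are assumptions. Note that R^(-1)·G - R^(-1)·T = R^(-1)(G - T), and that the inverse of a rotation matrix is its transpose:

```python
import numpy as np

def rotate_gallery_to_probe(G, R, T):
    """Apply G' = R^(-1)·G - R^(-1)·T (Eq. 2) to each row of G.

    Row form: for each point g, g' = R.T @ (g - T), written as (G - T) @ R.
    """
    return (np.asarray(G) - T) @ R

def weak_perspective(points3d, scale=1.0):
    """Weak perspective projection: uniform scaling, then drop depth."""
    return scale * np.asarray(points3d)[:, :2]

# Demo: 90-degree rotation about z and a small translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.0, 0.0, 0.0])
G = np.array([[2.0, 0.0, 0.0]])
Gp = rotate_gallery_to_probe(G, R, T)
```

The round trip G = R·G' + T recovers the original frontal point, confirming that Eq. (2) is exactly the inverse of the pose transform P = RQ + T of Eq. (1).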
Step 2: selecting a suitable joint Bayesian classifier.
Before classifying the face data to be identified with the Bayesian classifier, the texture and depth features of the face data model to be identified are extracted. This method uses the HOG algorithm for feature extraction on both the texture and depth images. HOG rests on the assumption that the texture information of an image can be described by the distribution of gradients and edge directions. HOG features are fast to compute, are robust to geometric and photometric variations, and are widely used in face detection, pedestrian detection and face recognition.
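The gradient-and-edge-distribution idea behind HOG can be illustrated with a stripped-down descriptor: per-cell histograms of gradient orientations weighted by gradient magnitude. Block normalization and the other refinements of the full algorithm (as in, e.g., OpenCV's HOGDescriptor) are omitted, and the cell size and bin count here are arbitrary choices.

```python
import numpy as np

def hog_like_descriptor(img, cell=16, bins=9):
    """HOG-style descriptor sketch: for each cell x cell patch, build a
    histogram of unsigned gradient orientations weighted by magnitude,
    then L2-normalize per cell and concatenate."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)
```

For a 128 × 128 depth or texture image this yields 8 × 8 cells × 9 bins = 576 features.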
Compared with classification algorithms such as support vector machines (SVM), neural networks (NN), k-nearest neighbours and classification trees, joint Bayesian classification [7] is easier to train, and when a new subject is added the classifier does not need to be retrained, giving it stronger applicability.
In particular, in order to cope with the various pose variations that the face data to be identified may exhibit, multiple texture classifiers are trained. For poses in the range of −90 degrees to +90 degrees, one texture classifier is trained from the projections at every 10-degree interval. For depth images, a single depth classifier is trained on the frontal images in all training libraries.
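Selecting the classifier nearest to an estimated pose then reduces to a binning computation. The index convention below (19 classifiers, index 0 at −90°) is an illustrative assumption:

```python
def classifier_index(yaw_deg, bin_width=10, lo=-90, hi=90):
    """Map an estimated yaw angle to the index of the nearest texture
    classifier, assuming one classifier per 10-degree interval over
    [-90, +90] degrees as described above."""
    yaw = max(lo, min(hi, yaw_deg))       # clamp to the trained range
    return round((yaw - lo) / bin_width)  # 0 .. 18
```

For example, a frontal face (yaw 0°) maps to the middle classifier, index 9.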
For any given face data model to be identified, the trained texture and depth classifiers are used to compute its similarity to each subject in the library. Since there are multiple texture classifiers, we select the one closest to the pose of the face data to be identified. The texture and depth similarities predicted by the classifiers are then combined by a simple fusion:
S = λ₁ × S_depth + λ₂ × S_texture  (3)
where S_depth and S_texture respectively denote the depth similarity and the texture similarity. The parameters λ₁ and λ₂ are weighting coefficients that can be set according to the reliability of the texture and depth images. For example, when the face model is acquired from a low-precision three-dimensional acquisition device, the depth information obtained is often reliable while the texture image is not, so in this case the parameter λ₁ is generally larger than λ₂. Finally, the class with the maximum similarity is taken as the final classification of the test sample.
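Equation (3) and the final maximum-similarity decision over the library can be sketched as below; the weight values are placeholders, since the patent only specifies that they reflect the reliability of each modality.

```python
def fuse_scores(s_depth, s_texture, lam1=0.4, lam2=0.6):
    """Eq. (3): S = lam1 * S_depth + lam2 * S_texture, computed per
    library subject. Returns the index of the best-scoring subject
    together with the fused score list."""
    scores = [lam1 * d + lam2 * t for d, t in zip(s_depth, s_texture)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores
```

With per-subject similarity lists from the depth classifier and the selected texture classifier, the returned index is the predicted identity.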
In order to obtain the face image of each standard frontal face image in the registry under the pose of the test image, the deflection pose angle of the face data to be identified must be computed. The pose of the face data to be identified must also be known in advance when the texture classifiers are later used for similarity calculation, so that the appropriate texture classifier can be selected.
ICP-based three-dimensional face pose estimation.
Fig. 3 illustrates the definition of the human head pose in the three-dimensional coordinate system. Assume the coordinate origin is at the nose tip, the x-axis points in the horizontal direction, the y-axis points in the vertical direction, and the z-axis is perpendicular to the plane formed by x and y. The three-dimensional pose of the face is defined as (ψ, θ, φ), denoting the rotation angles about the coordinate z-axis, y-axis and x-axis, respectively. The relationship between the pose angles (ψ, θ, φ) and the rotation matrix R can then be expressed as:
R = R(ψ)R(θ)R(φ),  (4)
From the rotation matrix R, the formulas for computing (ψ, θ, φ) can be derived:
ψ = arctan(R(2,1)/R(1,1)),  (8)
θ = −arcsin(R(3,1)),  (9)
φ = arctan(R(3,2)/R(3,3)),  (10)
where R(i,j) denotes the element in the i-th row and j-th column of the rotation matrix R.
For any face data model to be identified, matching it against the canonical frontal reference face model yields the rotation and translation matrices (R and T) that rotate the face data model to be identified to the standard frontal pose. With the rotation matrix R known, the three-dimensional pose of the face data to be identified can easily be computed according to formulas (8) to (10).
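Since the patent's formulas (8) to (10) appear as images and are not reproduced in the text, the sketch below uses the standard Euler-angle extraction for the factorization R = Rz(ψ) Ry(θ) Rx(φ); that convention, and the assumption that |θ| < 90° (no gimbal lock), are ours.

```python
import numpy as np

def pose_from_rotation(R):
    """Recover (psi, theta, phi), the rotations about z, y and x, from
    a rotation matrix R = Rz(psi) @ Ry(theta) @ Rx(phi)."""
    psi = np.arctan2(R[1, 0], R[0, 0])                # yaw, about z
    theta = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))   # pitch, about y
    phi = np.arctan2(R[2, 1], R[2, 2])                # roll, about x
    return psi, theta, phi
```

Using arctan2 rather than a plain arctan ratio keeps the angles in the correct quadrant.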
Finally, RGB-D based face recognition is performed: the aligned face images generated by the above steps are fed into the openface face algorithm framework, which completes the entire identification process.
The above disclosure is merely one embodiment of the present invention, which certainly cannot be used to limit the scope of rights thereof. Those of ordinary skill in the art will understand that implementing all or part of the process of the above embodiment, and equivalent variations made according to the claims of the present invention, still fall within the scope covered by the present invention.

Claims (10)

1. A 3-D image face identification method, characterised in that it comprises:
Step 1.1: pre-establishing a canonical reference face model and a registry G of a face database under frontal pose;
Step 1.2: representing the face image data to be identified as Q = (I, D), where I represents the image data of a face in an arbitrary pose and D represents its corresponding depth data; aligning the pose of the depth data of the face image data to be identified with the canonical reference face model, thereby achieving depth image alignment and obtaining pose parameters;
Step 1.3: rotating the frontal face images in the registry to the same pose as the face image to be identified according to the pose parameters obtained in step 1.2, thereby achieving texture image alignment;
Step 1.4: performing feature extraction on the aligned depth image and texture image respectively, obtaining the depth similarity S_depth through the depth classifier, and selecting the corresponding texture classifier according to the deflection pose angle of the face image to be identified to compute the texture similarity S_texture;
Step 1.5: computing the weighted similarity of the face image to be identified from the depth similarity S_depth and the texture similarity S_texture;
Step 1.6: performing the final RGB-D face recognition using the finally aligned face image and the obtained similarity data.
2. The 3-D image face identification method according to claim 1, characterised in that the depth image alignment specifically comprises the following steps:
Step 2.1: cropping the face region from the depth map of the face image to be identified under an arbitrary pose;
Step 2.2: computing the match between the face image to be identified and the canonical reference face model using the ICP (iterative closest point) algorithm, obtaining the rotation matrix Rt and translation matrix Tt that best match the canonical reference face model;
Step 2.3: regularising the depth image of the face image to be identified according to the rotation matrix Rt and translation matrix Tt, that is, correcting the pose to generate a frontal depth face image.
3. The 3-D image face identification method according to claim 2, characterised in that the face region cropping is specifically implemented as:
Step 3.1: detecting the nose tip and taking it as the centre point;
Step 3.2: centred on the nose tip (x0, y0, z0), for an arbitrary point (x, y, z) with Euclidean distance d = √((x − x0)² + (y − y0)² + (z − z0)²), retaining the point in the face region if d ≤ 80 mm and discarding it if d > 80 mm; the entire face region is thereby cropped out.
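Claim 3's cropping rule amounts to a radius test around the nose tip. A minimal sketch, assuming the point cloud is an N × 3 array in millimetres; the vectorised form is an implementation choice:

```python
import numpy as np

def crop_face_region(points, nose_tip, radius_mm=80.0):
    """Keep every point whose Euclidean distance to the nose tip is at
    most radius_mm (80 mm per claim 3); discard the rest."""
    d = np.linalg.norm(points - np.asarray(nose_tip, dtype=float), axis=1)
    return points[d <= radius_mm]
```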
4. The 3-D image face identification method according to claim 3, characterised in that the nose tip is used as the initial point for ICP alignment; the canonical reference face model is obtained by taking some or all of the expression-free frontal-pose face images in the face database, aligning them by the nose tip, summing over all samples and then averaging.
5. The 3-D image face identification method according to claim 4, characterised in that computing the match between the face image to be identified and the canonical reference face model using the ICP iterative closest point algorithm specifically comprises: aligning the face image to be identified Q with the canonical reference face model P by the nose tip position, then using ICP for fine alignment, with P = RQ + T, where R and T respectively denote the rotation and translation matrices between the face data model to be identified and the canonical reference face model; after iteration, the rotation matrix Rt and translation matrix Tt that best match the canonical reference face model are obtained.
6. The 3-D image face identification method according to claim 5, characterised in that the depth image regularisation further includes a depth information filling operation and face information smoothing, specifically comprising the following steps:
Step 6.1: replacing the x coordinate of the depth information of the frontal depth face image generated by pose correction with (−x) to generate mirrored depth information;
Step 6.2: filling the corrected depth information with the mirrored depth information, specifically by computing the smallest Euclidean distance between each mirrored point and the original depth point cloud; if this distance is smaller than a fixed threshold, it indicates that information is missing in the small neighbourhood of the original point, and the mirrored point is then retained into the original point cloud; this is repeated until all mirrored points have been traversed;
Step 6.3: further applying face information smoothing to the image obtained from the depth information filling operation.
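The symmetry-based filling of claim 6 can be sketched as below. One interpretive choice to flag: here a mirrored point is kept only when it has no original neighbour within the threshold, the usual test for a hole, whereas the claim's translated wording states the comparison the other way; the brute-force nearest-neighbour search and the 2 mm threshold are likewise illustrative.

```python
import numpy as np

def mirror_fill(points, threshold_mm=2.0):
    """Mirror the face point cloud about the y-z plane (x -> -x); a
    mirrored point whose nearest original point is farther than the
    threshold is assumed to land in a hole and is added to the cloud."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])
    kept = []
    for p in mirrored:
        nearest = np.min(np.linalg.norm(points - p, axis=1))
        if nearest > threshold_mm:  # no original sample nearby: a hole
            kept.append(p)
    return np.vstack([points] + kept) if kept else points
```

An already symmetric cloud is returned unchanged, since every mirrored point falls onto existing data.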
7. The 3-D image face identification method according to claim 5, characterised in that the texture images of the face data under frontal pose in the registry G are transformed according to the rotation matrix Rt and translation matrix Tt to obtain face images G' with the same pose as the face to be identified, G' = Rt⁻¹G − Rt⁻¹Tt, where Rt⁻¹ denotes the inverse of the rotation matrix Rt; finally, the 2D face texture images with the same pose are obtained for all images by weak perspective projection.
8. The 3-D image face identification method according to claim 7, characterised in that joint Bayesian classifiers are trained on training samples of various poses to generate multiple texture classifiers corresponding to different poses, and a single depth classifier is trained on the frontal images of all training samples.
9. The 3-D image face identification method according to claim 8, characterised in that the deflection pose angle is computed based on ICP:
with the coordinate origin at the nose tip, the x-axis pointing in the horizontal direction, the y-axis pointing in the vertical direction, and the z-axis perpendicular to the plane formed by x and y, the three-dimensional pose of the face is (ψ, θ, φ), denoting the rotation angles about the coordinate z-axis, y-axis and x-axis respectively; the relationship between the pose angles (ψ, θ, φ) and the rotation matrix R can be expressed as:
R = R(ψ)R(θ)R(φ),
from the rotation matrix R, the formulas for computing (ψ, θ, φ) can be derived: ψ = arctan(R(2,1)/R(1,1)), θ = −arcsin(R(3,1)), φ = arctan(R(3,2)/R(3,3)), where R(i,j) denotes the element in the i-th row and j-th column of the rotation matrix R; the three-dimensional pose of the face image to be identified is computed from the rotation matrix Rt and translation matrix Tt, and the deflection pose angle is finally obtained.
10. A 3-D image face identification system, characterised in that it uses the 3-D image face identification method according to any one of claims 1 to 9.
CN201910827741.XA 2019-09-03 2019-09-03 A kind of 3-D image face identification method and system Pending CN110532979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910827741.XA CN110532979A (en) 2019-09-03 2019-09-03 A kind of 3-D image face identification method and system


Publications (1)

Publication Number Publication Date
CN110532979A true CN110532979A (en) 2019-12-03

Family

ID=68666447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910827741.XA Pending CN110532979A (en) 2019-09-03 2019-09-03 A kind of 3-D image face identification method and system

Country Status (1)

Country Link
CN (1) CN110532979A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105881A (en) * 2019-12-26 2020-05-05 昆山杜克大学 Database system for 3D measurement of human phenotype
CN111814603A (en) * 2020-06-23 2020-10-23 汇纳科技股份有限公司 Face recognition method, medium and electronic device
CN111881770A (en) * 2020-07-06 2020-11-03 上海序言泽网络科技有限公司 Face recognition method and system
CN112101247A (en) * 2020-09-18 2020-12-18 济南博观智能科技有限公司 Face pose estimation method, device, equipment and storage medium
CN112364711A (en) * 2020-10-20 2021-02-12 盛视科技股份有限公司 3D face recognition method, device and system
CN113189601A (en) * 2020-01-13 2021-07-30 奇景光电股份有限公司 Hybrid depth estimation system
CN113743191A (en) * 2021-07-16 2021-12-03 深圳云天励飞技术股份有限公司 Face image alignment detection method and device, electronic equipment and storage medium
CN113837105A (en) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN113837106A (en) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN113963426A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Model training method, mask wearing face recognition method, electronic device and storage medium
CN114972634A (en) * 2022-05-06 2022-08-30 清华大学 Multi-view three-dimensional deformable human face reconstruction method based on feature voxel fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545252A (en) * 2017-08-31 2018-01-05 北京图铭视界科技有限公司 Face identification method and device in video based on multi-pose Face model
CN107729875A (en) * 2017-11-09 2018-02-23 上海快视信息技术有限公司 Three-dimensional face identification method and device
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Run: "Multi-pose virtual face recognition algorithm based on 3D models", China Master's Theses Full-text Database, Information Science and Technology *
Ye Jianhua: "Research on three-dimensional and multimodal face recognition", Wanfang Data *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191203