CN106778491B - Method and device for acquiring 3D feature information of a human face - Google Patents
Method and device for acquiring 3D feature information of a human face
- Publication number
- CN106778491B (application number CN201611036376.3A; published as CN106778491A)
- Authority
- CN
- China
- Prior art keywords: face, characteristic points, information, feature
- Prior art date: 2016-11-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention provides a method and a device for acquiring 3D feature information of a human face. The method comprises the following steps: obtaining an RGBD face image; collecting feature points of the face from the RGBD face image; establishing a face color 3D mesh according to the feature points; measuring feature values of the feature points according to the face color 3D mesh and calculating the connection relations between the feature points; and analyzing the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points. The device comprises an image acquisition module, a collection module, a mesh establishing module, a calculation module and an analysis module. The invention can obtain the 3D spatial distribution feature information of the face feature points, including the color information and depth information of the face, so that the acquired face information is more comprehensive and face recognition is more accurate.
Description
Technical Field
The invention relates to the technical field of face 3D feature information acquisition, in particular to a method and equipment for acquiring face 3D feature information.
Background
Information security issues have attracted widespread attention in all sectors of society. The main approach to ensuring information security is to accurately identify the user of the information and then, based on the identification result, judge whether the user is authorized to obtain it, thereby ensuring that information is not leaked and protecting the legitimate rights and interests of users. Reliable identity recognition is therefore both important and indispensable.
Face recognition is a biometric technology that identifies a person based on facial feature information, and it is receiving more and more attention as a safer and more convenient identification technology. Traditional face recognition is 2D face recognition, which carries no depth information and is easily affected by non-geometric appearance changes such as pose, expression, illumination and facial makeup, making accurate face recognition difficult.
Disclosure of Invention
The invention provides a method and equipment for acquiring 3D characteristic information of a human face, which can solve the problem that the prior art is difficult to accurately recognize the human face.
In order to solve the technical problems, the invention adopts a technical scheme that: the method for acquiring the 3D characteristic information of the human face comprises the following steps: obtaining an RGBD face image; collecting characteristic points of a human face through the RGBD human face image; establishing a face color 3D grid according to the feature points; measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points; and analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
In the step of measuring the feature values of the feature points according to the face color 3D mesh and calculating the connection relations between the feature points, the connection relations are the topological connection relations and spatial geometric distances between the feature points;
and in the step of acquiring the 3D space distribution characteristic information of the characteristic points according to the characteristic values and the connection relation, performing surface deformation on the face color 3D mesh to acquire the 3D space distribution characteristic information of the face characteristic points.
In the step of measuring the feature values of the feature points according to the face color 3D mesh and calculating the connection relations between the feature points, the connection relations are dynamic connection relation information of various combinations of the feature points;
and in the step of acquiring the 3D space distribution characteristic information of the characteristic points according to the characteristic values and the connection relation, acquiring the 3D space distribution characteristic information of the characteristic points by acquiring the face shape information.
In the step of collecting the feature points of the human face through the RGBD human face image, the feature points are collected by collecting human face elements, where the human face elements include: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
Wherein the characteristic values include one or more of position, distance, shape, size, angle, arc, and curvature.
In order to solve the technical problem, the invention adopts another technical scheme that: the equipment for acquiring the 3D characteristic information of the human face comprises an image acquisition module, an acquisition module, a grid establishment module, a calculation module and an analysis module; the image acquisition module is used for acquiring an RGBD face image; the acquisition module is connected with the image acquisition module and used for acquiring the characteristic points of the human face through the RGBD human face image; the grid establishing module is connected with the acquisition module and used for establishing a face color 3D grid according to the feature points; the calculation module is connected with the grid establishment module and used for measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation among the characteristic points; and the analysis module is connected with the calculation module and is used for analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
Wherein the connection relation is a topological connection relation and a space geometric distance between the feature points; the analysis module obtains the 3D space distribution characteristic information of the human face characteristic points by carrying out surface deformation on the human face color 3D mesh.
Wherein, the connection relation is dynamic connection relation information of various combinations of the characteristic points; the analysis module obtains 3D space distribution characteristic information of the characteristic points by obtaining face shape information.
The acquisition module acquires the feature points by acquiring face elements, wherein the face elements comprise: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
Wherein the characteristic values include one or more of position, distance, shape, size, angle, arc, and curvature.
The invention has the beneficial effects that: different from the prior art, the face color 3D grid is established through the characteristic points collected on the face RGBD atlas, and the characteristic values and the connection relations of the characteristic points are obtained through the face color 3D grid, so that the 3D space distribution characteristic information of the face characteristic points is obtained, and the face identification method is applied to face identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for acquiring 3D feature information of a human face according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for acquiring 3D feature information of a human face according to another embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for acquiring 3D feature information of a human face according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for acquiring 3D feature information of a human face according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus entity apparatus for acquiring 3D feature information of a human face according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for acquiring 3D feature information of a human face according to an embodiment of the present invention.
The method for acquiring the 3D characteristic information of the human face comprises the following steps:
s101: and acquiring an RGBD face image.
Specifically, the RGBD face image includes color information (RGB) and depth information (Depth) of the face, and may be obtained with a Kinect sensor. The RGBD face image is in fact an image set containing a plurality of RGBD images of the same person taken from multiple angles.
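As a rough sketch of what such an image looks like in code (the file names and array layout here are illustrative assumptions, not part of the patent), a registered color frame and depth frame can be stacked into a single RGBD array:

```python
import cv2
import numpy as np

# Hypothetical file names; in practice the two frames come from an RGBD
# sensor such as a Kinect, already registered to each other.
rgb = cv2.imread("face_rgb.png")                            # H x W x 3, uint8
depth = cv2.imread("face_depth.png", cv2.IMREAD_UNCHANGED)  # H x W, uint16 (mm)

# Stack color and depth into one H x W x 4 RGBD array.
rgbd = np.dstack([rgb.astype(np.float32),
                  depth.astype(np.float32)[..., None]])
print(rgbd.shape)  # (H, W, 4): three color channels plus depth
```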
S102: and collecting the characteristic points of the human face through the RGBD human face image.
In step S102, after acquiring the RGBD face image, collecting feature points on the RGBD face image by collecting face elements, where the face elements include: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
The feature points may be obtained in various ways, for example by manually marking feature points on the facial organs such as the eyes and nose, the cheeks, the mandible and their edges, or by determining the face feature points with a feature point marking method compatible with RGB (2D) images.
For example, a method for locating the key feature points of the human face is as follows: nine feature points are selected whose distribution is invariant to rotation angle, namely the two eyeball center points, the four eye corner points, the midpoint between the two nostrils and the two mouth corner points. On this basis, the organ features of the face and the extended positions of other feature points can easily be obtained for use in further recognition algorithms.
When extracting face features, traditional edge detection operators cannot reliably extract features such as the outlines of the eyes or mouth, because they cannot effectively organize local edge information. Guided by the characteristics of human vision, making full use of edge and corner features to locate the key feature points of the face greatly improves the reliability of face feature extraction.
A SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is selected to extract the edge and corner features of local areas. By its nature the SUSAN operator can both detect edges and extract corners. Compared with edge detection operators such as Sobel and Canny, the SUSAN operator is therefore better suited to extracting features such as eyes and mouths, and in particular to automatically locating eye corner points and mouth corner points.
The SUSAN operator is introduced below:
The image is traversed with a circular template. If the difference between the gray value of any pixel in the template and that of the pixel at the template center (the nucleus) is less than a given threshold, that pixel is considered to have the same (or a similar) gray value as the nucleus, and the region composed of such pixels is called the univalue segment assimilating nucleus (USAN) area. Associating each pixel in the image with a local area of similar gray values is the basis of the SUSAN criterion.
During detection, the circular template scans the whole image, the gray value of each pixel in the template is compared with that of the central pixel, and a threshold decides whether a pixel belongs to the USAN area:

c(r, r₀) = 1, if |I(r) − I(r₀)| ≤ t; c(r, r₀) = 0, otherwise

where c(r, r₀) is the discriminant function for pixels in the template belonging to the USAN area, I(r₀) is the gray value of the template center (the nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold, which affects the number of detected corner points. Reducing t picks up more subtle changes in the image and yields a relatively larger number of detections; t must therefore be chosen according to factors such as the contrast and noise of the image. The size of the USAN area at a point in the image is then

n(r₀) = Σ_r c(r, r₀)

and the initial corner response is

R(r₀) = g − n(r₀), if n(r₀) < g; R(r₀) = 0, otherwise

where g is a geometric threshold that affects the shape of the detected corners: the smaller g is, the sharper the detected corners. The two thresholds play different roles. g determines the maximum USAN area for a corner to be output; that is, a point is declared a corner whenever its USAN area is smaller than g. The size of g thus determines not only how many corners can be extracted from the image but also, as noted, how sharp the detected corners are, so once the desired corner quality (sharpness) is fixed, g can take a constant value. The threshold t represents the minimum contrast of detectable corners and the maximum tolerance for negligible noise; it mainly determines how many features can be extracted: the smaller t, the lower the contrast at which features can still be extracted and the more features are found. Different values of t should therefore be used for images with different contrast and noise conditions. An outstanding advantage of the SUSAN operator is its insensitivity to local noise and strong noise immunity, because it does not rely on the results of earlier image segmentation and avoids gradient calculations; in addition, the USAN area is accumulated from template pixels with gray values similar to the nucleus, which is in effect an integration process with good suppression of Gaussian noise.
The final stage of SUSAN two-dimensional feature detection finds the local maxima of the initial corner response, i.e. non-maximum suppression, to obtain the final corner positions. As the name implies, non-maximum suppression keeps the value of the central pixel only if its initial response is the maximum in its local neighborhood and deletes it otherwise, leaving the maxima of each local area.
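For illustration, a minimal NumPy sketch of the USAN computation and initial corner response described above; the template radius and the thresholds t and g are illustrative choices, not values from the patent:

```python
import numpy as np

def susan_corners(img, radius=3, t=27, g_ratio=0.5):
    """Toy SUSAN corner detector; img is a 2D grayscale float array."""
    # Build the circular template offsets around the nucleus.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2) <= radius**2
    offsets = np.argwhere(mask) - radius
    g = g_ratio * len(offsets)            # geometric threshold g

    h, w = img.shape
    response = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = img[y, x]
            # c(r, r0): count pixels whose gray value is within t of the nucleus
            usan = sum(1 for dy, dx in offsets
                       if abs(img[y + dy, x + dx] - nucleus) <= t)
            # Initial corner response: R = g - n(r0) when n(r0) < g
            if usan < g:
                response[y, x] = g - usan
    return response  # non-maximum suppression would follow
```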
(1) Automatic positioning of the eyeballs and eye corners. In the automatic positioning of the eyeballs and eye corners, the face is first coarsely located with a normalized template matching method, determining the approximate face area in the whole image. Typical eye-locating algorithms rely on the valley-point property of the eyes; here valley-point search is combined with directional projection and the symmetry of the eyeballs, and the correlation between the two eyes is used to improve the accuracy of eye positioning. Integral projections of the gradient map are taken over the upper-left and upper-right parts of the face area and their histograms are normalized; the approximate y-position of the eyes is determined from the valley points of the horizontal projection, then x is varied over a large range to search for valley points in that area, and the detected points are taken as the eyeball center points of the two eyes.
On the basis of the two eyeball positions, the eye region is processed: an adaptive binarization method first determines a threshold to obtain an automatically binarized image of the eye region, and then, combined with the SUSAN operator, an edge and corner detection algorithm accurately locates the inner and outer eye corner points within the eye region.
Corner extraction is then performed on the edge curves of the eye-region edge image obtained by this algorithm, giving the accurate positions of the inner and outer corners of both eyes.
(2) Automatic positioning of nose-area feature points. The key feature point of the nose area is taken to be the midpoint of the line connecting the centers of the two nostrils, i.e. the nose-lip center point. Its position is relatively stable, and it can also serve as a reference point when the face image is normalized during preprocessing.
Based on the found eyeball positions, the positions of the two nostrils are determined by a regional gray-level integral projection method.
First, a strip-shaped area as wide as the distance between the two pupils is intercepted and integrally projected in the Y direction, and the projection curve is analyzed. Searching downward along the curve from the Y coordinate of the eyeballs, the first valley point is found (choosing a suitable peak-valley delta so that burrs possibly caused by scars, glasses and the like are ignored) and taken as the Y-coordinate reference of the nostril position. Second, an area whose width spans the X coordinates of the two eyeballs and whose height extends delta pixels above and below the nostril Y coordinate (for example, delta = [nostril Y coordinate − eyeball Y coordinate] × 0.06) is selected for X-direction integral projection; the projection curve is analyzed, and starting from the X coordinate of the midpoint between the two pupils the curve is searched toward the left and right, the first valley points giving the X coordinates of the centers of the left and right nostrils. The midpoint of the two nostrils is then computed as the nose-lip center point, giving its accurate position and delimiting the nose area.
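A minimal sketch of the valley search on an integral-projection curve used above for the eyes and nostrils; the normalization and the peak-valley delta handling are simplified assumptions:

```python
import numpy as np

def first_valley(profile, start, min_depth=0.02):
    """Return the index of the first local valley after `start` whose depth
    relative to its neighbors exceeds `min_depth` (filters out 'burrs')."""
    for i in range(start + 1, len(profile) - 1):
        if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
            if min(profile[i - 1], profile[i + 1]) - profile[i] >= min_depth:
                return i
    return None

def nostril_y(gray, eye_y, x_left, x_right):
    """Y-direction integral projection of the strip between the two pupils,
    searched downward from the eyeball height eye_y."""
    proj = gray[:, x_left:x_right].astype(np.float64).sum(axis=1)
    proj = (proj - proj.min()) / (np.ptp(proj) + 1e-9)  # normalized projection
    return first_valley(proj, eye_y)
```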
(3) Automatic positioning of the mouth corners. Different facial expressions may greatly change the mouth shape, and the mouth area is easily disturbed by beards and similar factors, so the accuracy of mouth feature point extraction strongly affects recognition. Because the positions of the mouth corner points change comparatively little under expressions and the corner positions are precise, the two mouth corner points are adopted as the key feature points of the mouth region.
On the basis of the feature points of the eye and nose regions, the regional gray-level integral projection method first determines the first valley point of the Y-coordinate projection curve below the nostrils (again eliminating burrs caused by beards, nevi and other factors with a suitable peak-valley delta) as the Y-coordinate position of the mouth; a mouth region is then selected and processed with the SUSAN operator to obtain the mouth edge image; finally, corner extraction yields the accurate positions of the two mouth corners.
S103: and establishing a face color 3D grid according to the feature points.
S104: and measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points.
Specifically, relevant feature values can be measured from the color information for the feature points of each facial feature. The feature values include one or more of the position, distance, shape, size, angle, arc and curvature of the facial features in the 2D plane, as well as measures of color, brightness, texture and the like. For example, extending outward from the central pixel of the iris yields all the pixel positions of the eye, the shape of the eye, the inclination arc of the eye corners, the color of the eye, and so on.
By combining the color information and the depth information, the connection relations between the feature points can be calculated; a connection relation may be the topological connection relation and spatial geometric distance between feature points, or dynamic connection relation information of various combinations of feature points, and so on.
From the measurements and calculations on the face color 3D mesh, both local information (the planar information of each facial element and the spatial position relations of the feature points on each element) and overall information (the spatial position relations between the elements) can be obtained. The local and overall information reflect, locally and globally respectively, the information and structural relations implicit in the RGBD face image.
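As a toy example of the two kinds of relation (the coordinates, adjacency and all numbers below are made up for illustration), spatial geometric distances and a topological connection relation over a handful of 3D feature points can be represented as:

```python
import numpy as np

# Hypothetical 3D feature points recovered from the color 3D mesh,
# e.g. eye corners, nose-lip center, mouth corners (units: mm).
points = np.array([
    [-30.0,  40.0, 10.0],   # left outer eye corner
    [ 30.0,  40.0, 10.0],   # right outer eye corner
    [  0.0,   0.0, 25.0],   # nose-lip center point
    [-20.0, -30.0, 12.0],   # left mouth corner
    [ 20.0, -30.0, 12.0],   # right mouth corner
])

# Spatial geometric distance between every pair of feature points.
diff = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))

# A topological connection relation can be kept as an adjacency list
# over the same indices (which points are joined by mesh edges).
edges = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(dist[0, 1])  # inter-eye-corner distance
```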
S105: and analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
In step S105, by analyzing the feature values and the connection relationship, the three-dimensional face shape information can be obtained, so as to obtain the 3D spatial distribution feature information of each feature point of the face, so that the face can be identified by the 3D spatial distribution feature information of the face in the later stage of face identification.
Different from the prior art, the face color 3D grid is established through the characteristic points collected on the face RGBD atlas, and the characteristic values and the connection relations of the characteristic points are obtained through the face color 3D grid, so that the 3D space distribution characteristic information of the face characteristic points is obtained to be applied to face recognition.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for acquiring 3D feature information of a human face according to another embodiment of the present invention.
S201: and acquiring an RGBD face image.
S202: and collecting the characteristic points of the human face through the RGBD human face image.
S203: and establishing a face color 3D grid according to the feature points.
S204: and measuring the characteristic values of the characteristic points according to the face color 3D grid, and calculating the topological connection relation and the space geometric distance between the characteristic points.
S205: and analyzing the characteristic values, the topological connection relation among the characteristic points and the space geometric distance by adopting a finite element analysis method to obtain the 3D space distribution characteristic information of the characteristic points.
In particular, finite element analysis may be used to apply surface deformation to the face color 3D mesh. Finite element analysis (FEA) simulates a real physical system (its geometry and load conditions) with a mathematical approximation: using simple interacting elements (units), a finite number of unknowns approximates a real system with infinitely many unknowns.
For example, after deformation-energy analysis of each line element of the face color 3D mesh, the element stiffness equation of the line element can be established. Constraint elements are then introduced, such as point, line, tangent-vector and normal-vector constraint types: because the curved surface must satisfy requirements on shape, position, size and continuity with adjacent surfaces imposed by the design, these requirements are realized as constraints. In this embodiment the constraints are handled with a penalty function method, finally yielding the stiffness matrix and equivalent load array of each constraint element.
The data structure of the deformable curve/surface is extended so that it contains not only geometric parameters such as order, control vertices and knot vectors, but also parameters describing physical characteristics and external loads. A deformable curve/surface can therefore represent some complicated solid representations as a whole, greatly simplifying the geometric model of the face; moreover, the physical and constraint parameters in the data structure uniquely determine the geometric configuration parameters of the face.
the deformation curve curved surface is solved by finite elements through program design, and the unit inlet program is set for different constraint units, so that any constraint unit stiffness matrix and any constraint unit load array can be calculated. And calculating the overall stiffness matrix by adopting a variable bandwidth one-dimensional array storage method according to the symmetry, banding and sparsity of the overall stiffness matrix. When the linear algebraic equation set is assembled, not only the linear unit or surface unit stiffness matrix but also the constraint unit stiffness matrix are added into the overall stiffness matrix in a 'number matching seating' mode, meanwhile, the constraint unit equivalent load array is added into the overall load array, and finally, the linear algebraic equation set is solved by adopting a Gaussian elimination method.
For example, the face-surface modeling method can be described by the following mathematical model: the sought deformation curve c(μ) or deformation surface S(μ, v) is a solution of the extremum problem

min E(S), subject to:
(1) S(μ, v) = f1(μ, v) on ∂Ω (boundary interpolation constraints);
(2) continuity constraints f2 with the adjacent surface across the boundary;
(3) S|Γ′ = f3 (constraints along feature lines inside the surface);
(4) S(μ₀, v₀) = f4 (constraints at interior points of the surface).

Here E(·) is the energy functional of the curve or surface; it reflects the deformation behavior of the surface to a certain extent and endows the surface with physical characteristics. f1, f2, f3, f4 are functions of the variables in (·), ∂Ω is the boundary of the parameter domain, Γ′ is a curve within the surface parameter domain, and (μ₀, v₀) is a parameter value in the parameter domain. In application, the energy functional takes the following forms.

For a curve:

E(c) = ∫ [α|c′(μ)|² + β|c″(μ)|² + γ|c‴(μ)|²] dμ

For a surface:

E(S) = ∫∫ Σ_{i,j∈{μ,v}} [α_ij S_i S_j + β_ij S_ij²] dμ dv

where α, β, γ respectively denote the stretching, bending and twisting coefficients of the curve, and α_ij and β_ij respectively denote the local stretching and bending coefficients of the surface in the μ and v directions at (μ, v).

It can be seen from this mathematical model that the deformable curve/surface modeling method treats the various constraints uniformly and in a coordinated way: it satisfies local control while keeping the whole fair and smooth. Using the variational principle, solving the above extremum problem can be converted into solving the equation

δE(S) = 0 (5)

where δ denotes the first-order variation. Equation (5) is a differential equation; it is complicated and an exact analytic solution is difficult to obtain, so it is solved numerically, for example with the finite element method.
The finite element method can be viewed as first choosing a suitable interpolation form as required and then solving for the combination parameters; the solution obtained is thus in continuous form, and the mesh generated in preprocessing lays the foundation for the finite element analysis.
In the recognition stage, the similarity between an unknown face image and a known face template is measured with a matching energy of the form

E = Σ ||X_j − C_i||² + λ Σ ||(v_{k₁} − v_{k₂}) − (v′_{k₁} − v′_{k₂})||²

where C_i and X_j are respectively the features of the face to be recognized and the features of a face in the face library, and i₁, i₂, j₁, j₂, k₁, k₂ index the 3D mesh vertex features. The first term selects the corresponding local features X_j and C_i in the two vector fields and measures their distance; the second term evaluates the local position relations and the matching order. The best match is therefore the one that minimizes the energy function.
Surface deformation of the face color 3D mesh by this finite element method brings each point of the mesh ever closer to the feature points of the real face, yielding the three-dimensional face shape information and hence the 3D spatial distribution feature information of the face feature points.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for acquiring 3D feature information of a human face according to another embodiment of the present invention.
S301: and acquiring an RGBD face image.
S302: and collecting the characteristic points of the human face through the RGBD human face image.
In this embodiment, the discrete wavelet transform is used for face feature positioning to collect the feature points of the face. First the face area is located: since the shape of a face approximates an ellipse, the face area can be determined with an ellipse detection algorithm, which also gives the rotation angle of the face. Detecting the ellipse yields parameters including the center coordinates, the lengths of the major and minor axes and the rotation angle of the ellipse, and the rotation angle determines the rotation angle of the face.
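A minimal OpenCV sketch of the ellipse-fitting step, assuming the face is the largest contour after Otsu binarization (a simplification of a full ellipse detection algorithm):

```python
import cv2

def face_ellipse(gray):
    """Fit an ellipse to the largest contour of a binarized face image;
    the returned angle approximates the in-plane rotation of the face."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    biggest = max(contours, key=cv2.contourArea)   # assume this is the face
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(biggest)
    major, minor = max(d1, d2), min(d1, d2)        # full axis lengths
    return (cx, cy), (major, minor), angle
```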
Next the facial features are located. The eyes, eyebrows, nose tip and mouth appear as horizontal features; the eyes and mouth are located with the LH component, which is low-frequency in the x direction and high-frequency in the y direction. The iris, one of the most important features of the face and rich in information, is located on the original image.
(1) Eye and iris localization. The eye is located as an eye feature combined with the eyebrow; if the obtained eye region completely contains the iris, the positioning is correct. After the eye region is located, the iris is located. The iris is a standard circle in shape but, owing to the structure of the eye, is often partially occluded, so it is located with the strongly interference-resistant HOUGH transform.
(2) Mouth and nose positioning. The mouth and nose tip appear as horizontal features, shown in the wavelet-transform LH component as line or arc segments parallel to the minor axis of the ellipse. The nose wings are vertical features and relatively stable; in the wavelet-transform HL component they appear as line segments parallel to the major axis of the ellipse. Using the HOUGH transform together with the geometric relations among the facial features, the line segments of the mouth, nose tip and nose wings are detected and marked in the image.
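For illustration, a sketch of the wavelet decomposition and HOUGH circle localization using PyWavelets and OpenCV; the file name, the subband naming convention and all Hough parameters are illustrative guesses, not values from the patent:

```python
import cv2
import numpy as np
import pywt

# Hypothetical input file; any 8-bit grayscale face image works.
gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

# One level of 2D discrete wavelet transform: the approximation (LL) plus
# the horizontal (LH), vertical (HL) and diagonal (HH) detail subbands.
LL, (LH, HL, HH) = pywt.dwt2(gray.astype(np.float32), "haar")

# Iris localization on the original image with the Hough circle transform.
blur = cv2.GaussianBlur(gray, (5, 5), 1.5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=20, minRadius=5, maxRadius=20)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (cx, cy), r, 255, 1)  # mark detected iris candidates
```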
S303: and establishing a face color 3D grid according to the feature points.
S304: and measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the dynamic connection relation information among the characteristic points.
S305: and analyzing the dynamic connection relation between the characteristic values and the characteristic points by adopting a wavelet transformation texture analysis method to obtain the 3D space distribution characteristic information of the characteristic points.
Specifically, the dynamic connection relations are the dynamic connection relations of various combinations of feature points. The wavelet transform is a local transform in time and frequency with multi-resolution analysis characteristics, able to characterize the local features of a signal in both the time and frequency domains. In this embodiment, wavelet-transform texture analysis extracts, classifies and analyzes texture features and combines them with the face feature values and the dynamic connection relation information, specifically including the color information and depth information, to finally obtain the stereoscopic face shape information. From this, face shape information that is invariant under subtle changes of facial expression is analyzed and extracted to encode the parameters of a face shape model; these model parameters can serve as geometric features of the face, giving the 3D spatial distribution feature information of the face feature points.
The acquisition methods provided in some other embodiments are also compatible with acquiring 2D feature information of the face, for which any method conventional in the art may be used. In these embodiments both the 3D and the 2D feature information of the face are obtained, so 3D and 2D recognition are performed simultaneously, further improving the accuracy of face recognition.
For example, the basis of the three-dimensional wavelet transform is

f(x, y, z) ≈ A_{J₁} f + Σ_n Q_n f

where A_{J₁} is the projection operator of the function f(x, y, z) onto the space V³_{J₁}, and each Q_n is a combination of H_x, H_y, H_z and G_x, G_y, G_z. Let the matrices H = (H_{m,k}) and G = (G_{m,k}); then H_x, H_y, H_z denote H acting on the three-dimensional signal in the x, y and z directions respectively, and G_x, G_y, G_z denote G acting in the x, y and z directions respectively.
In the recognition stage, after wavelet transformation of an unknown face image, its low-frequency low-resolution subimage is mapped into the face space to obtain feature coefficients. The Euclidean distance between the coefficients to be classified and the feature coefficients of each person is then compared, combined with the PCA algorithm, according to

K = arg min_k ||Y − Y_k||, k = 1, 2, …, N

where K is the person best matching the unknown face, N is the number of people in the database, Y is the m-dimensional vector obtained by mapping the unknown face onto the subspace spanned by the eigenfaces, and Y_k is the m-dimensional vector obtained by mapping the k-th known face in the database onto that subspace.
It is understood that in another embodiment a 3D face recognition method based on two-dimensional wavelet features may also be used for recognition. Two-dimensional wavelet features are extracted first; the two-dimensional wavelet basis functions are defined as

g_mn(x, y) = a^(−m) g(x′, y′), a > 1, m, n ∈ Z

where σ is the size of the Gaussian window of the mother function g, and the self-similar family of filter functions g_mn(x, y) is obtained by suitably dilating and rotating g(x, y). Based on these functions, the wavelet feature of an image I(x, y) can be defined as the magnitude of its convolution with each filter,

W_mn(x, y) = |I(x, y) ∗ g_mn(x, y)|.
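As an illustration of building such a self-similar filter bank, a sketch using OpenCV's Gabor kernels; the kernel sizes, scale factors and the crude mean-magnitude feature are assumptions, not the patent's parameters:

```python
import cv2
import numpy as np

def gabor_features(gray, scales=4, orientations=6):
    """Filter the image with a small Gabor bank and keep the mean response
    magnitude per filter as a crude wavelet feature vector (illustration)."""
    feats = []
    for m in range(scales):
        for n in range(orientations):
            kern = cv2.getGaborKernel(ksize=(21, 21),
                                      sigma=3.0 * (1.5 ** m),  # Gaussian window
                                      theta=np.pi * n / orientations,
                                      lambd=6.0 * (1.5 ** m),  # wavelength
                                      gamma=0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())
    return np.array(feats)   # F in R^m, m = scales * orientations
```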
The two-dimensional wavelet extraction algorithm of the face image comprises the following implementation steps:
(1) A wavelet representation of the face is obtained by wavelet analysis, converting the corresponding features of the original image I(x, y) into a wavelet feature vector F (F ∈ R^m).
(2) A fractional power polynomial (FPP) model k(x, y) = (x · y)^d (0 < d < 1) projects the m-dimensional wavelet feature space R^m into a higher n-dimensional space R^n.
(3) Based on the kernel Fisher discriminant analysis (KFDA) algorithm, the between-class scatter matrix S_b and the within-class scatter matrix S_w are built in the space R^n, and the orthonormal eigenvectors α₁, α₂, …, α_n of S_w are calculated.
(4) The significant discriminant feature vector of the face image is extracted. Let P₁ = (α₁, α₂, …, α_q), where α₁, α₂, …, α_q are the q eigenvectors of S_w with positive eigenvalues and q = rank(S_w). Compute the eigenvectors β₁, β₂, …, β_L (L ≤ c − 1) corresponding to the L largest eigenvalues of P₁ᵀ S_b P₁, where c is the number of face classes. The significant discriminant feature vector is f_regular = Bᵀ P₁ᵀ y, where y ∈ R^n and B = (β₁, β₂, …, β_L).
(5) The non-significant discriminant feature vector of the face image is extracted. Let P₂ = (α_{q+1}, α_{q+2}, …, α_m), and compute the eigenvectors γ₁, γ₂, …, γ_L (L ≤ c − 1) corresponding to the largest eigenvalues of P₂ᵀ S_b P₂; the non-significant discriminant feature vector is then f_irregular = Γᵀ P₂ᵀ y with Γ = (γ₁, γ₂, …, γ_L).
The steps included in the 3D face recognition stage are as follows:
(1) The frontal face is detected, and the key facial feature points in the frontal-face image are located, such as the contour feature points of the face, the left and right eyes, the mouth and the nose.
(2) A three-dimensional face model is reconstructed from the extracted two-dimensional Gabor feature vectors and a common 3D face database. For the reconstruction, a three-dimensional face database containing 100 detected faces is used; each face model in the database has approximately 70,000 vertices. A feature transformation matrix P is determined: in the original three-dimensional face recognition method this matrix is usually the subspace-analysis projection matrix, composed of the eigenvectors of the sample covariance matrix corresponding to the first m largest eigenvalues. Here the extracted wavelet discriminant feature vectors corresponding to the m largest eigenvalues form a principal feature transformation matrix P′, which is more robust to factors such as illumination, pose and expression than the original matrix P; that is, the represented features are more accurate and stable.
(3) The newly generated face model is processed with template matching and Fisher linear discriminant analysis (FLDA), extracting the intra-class and inter-class differences of the model and further optimizing the final recognition result.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an apparatus for acquiring 3D feature information of a human face according to an embodiment of the present invention.
The invention also provides equipment for acquiring the 3D characteristic information of the human face, and particularly the equipment comprises an image acquisition module 10, an acquisition module 20, a grid establishment module 30, a calculation module 40 and an analysis module 50.
The image obtaining module 10 is configured to obtain an RGBD face image.
The acquisition module 20 is connected to the image acquisition module 10, and is configured to acquire feature points of a human face through an RGBD human face image. Specifically, the acquisition module 20 acquires the feature points by acquiring face elements, wherein the face elements include: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
The grid establishing module 30 is connected to the collecting module 20, and is configured to establish a face color 3D grid according to the feature points.
The calculation module 40 is connected to the mesh establishing module 30, and is configured to measure feature values of the feature points according to the face color 3D mesh and calculate a connection relationship between the feature points. Wherein the characteristic value comprises one or more of a position, a distance, a shape, a size, an angle, an arc, and a curvature.
The analysis module 50 is connected to the calculation module 40, and is configured to analyze the feature values and the connection relationship to obtain 3D spatial distribution feature information of the feature points.
In one embodiment, the connection relationships are topological connection relationships and spatial geometric distances between feature points. The analysis module carries out surface deformation on the face color 3D mesh by a finite element analysis method to obtain 3D space distribution characteristic information of the characteristic points.
In another embodiment, the connection relationships are dynamic connection relationship information for various combinations of feature points. The analysis module combines wavelet transformation texture analysis to obtain face shape information, and then face shape model parameters are coded by extracting face shape information with invariance under the slight expression change of the face to obtain 3D space distribution characteristic information of characteristic points.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an apparatus entity device for acquiring 3D feature information of a human face according to an embodiment of the present invention. The apparatus of this embodiment can execute the steps in the method, and for related content, please refer to the detailed description in the method, which is not described herein again.
The intelligent electronic device comprises a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 is used for storing an operating system, a set program, an acquired RGBD face image, and 3D spatial distribution feature information of the calculated feature points.
The processor 61 is used for acquiring an RGBD face image; collecting characteristic points of a human face through the RGBD human face image; establishing a face color 3D grid according to the feature points; measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points; and analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be substantially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, the 3D spatial distribution feature information of the face feature points is obtained and applied to face recognition. It includes color information and depth information, making the face information more comprehensive, and a face skeleton can be established from it, so that changes in the non-geometric appearance of the face, such as pose, expression, illumination and facial makeup, as well as changes such as face slimming, do not affect recognition, making face recognition more accurate.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A method for acquiring 3D feature information of a human face is characterized by comprising the following steps:
obtaining an RGBD face image;
collecting characteristic points of a human face through the RGBD human face image;
establishing a face color 3D grid according to the feature points;
measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points;
and analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
2. The method according to claim 1, wherein in the step of measuring the feature values of the feature points according to the face color 3D mesh and calculating the connection relation between the feature points, the connection relation is a topological connection relation and a spatial geometric distance between the feature points;
and in the step of acquiring the 3D space distribution characteristic information of the characteristic points according to the characteristic values and the connection relation, performing surface deformation on the face color 3D mesh to acquire the 3D space distribution characteristic information of the face characteristic points.
3. The method according to claim 1, wherein in the step of measuring the feature values of the feature points according to the face color 3D mesh and calculating the connection relation between the feature points, the connection relation is dynamic connection relation information of various combinations of the feature points;
and in the step of acquiring the 3D space distribution characteristic information of the characteristic points according to the characteristic values and the connection relation, acquiring the 3D space distribution characteristic information of the face characteristic points by acquiring the face shape information.
4. The method according to claim 2 or 3, wherein in the step of collecting the feature points of the human face through the RGBD human face image, the feature points are collected by collecting face elements, wherein the face elements comprise: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
5. The method of claim 4, wherein the characteristic values include one or more of position, distance, shape, size, angle, arc, and curvature.
6. An apparatus for acquiring 3D feature information of a human face, comprising:
the image acquisition module is used for acquiring an RGBD face image;
the acquisition module is connected with the image acquisition module and used for acquiring the characteristic points of the human face through the RGBD human face image;
the grid establishing module is connected with the acquisition module and used for establishing a face color 3D grid according to the characteristic points;
the calculation module is connected with the grid establishment module and used for measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation among the characteristic points;
and the analysis module is connected with the calculation module and used for analyzing the characteristic values and the connection relation to acquire the 3D space distribution characteristic information of the characteristic points.
7. The apparatus according to claim 6, wherein the connection relationship is a topological connection relationship and a spatial geometric distance between the feature points;
the analysis module obtains the 3D space distribution characteristic information of the human face characteristic points by carrying out surface deformation on the human face color 3D mesh.
8. The apparatus according to claim 6, wherein the connection relation is dynamic connection relation information of various combinations of the feature points;
the analysis module obtains 3D space distribution characteristic information of the human face characteristic points by obtaining human face shape information.
9. The apparatus according to claim 7 or 8, wherein the acquisition module performs the acquisition of the feature points by acquiring face elements, wherein the face elements comprise: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin.
10. The apparatus of claim 9, wherein the characteristic values comprise one or more of position, distance, shape, size, angle, arc, and curvature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611036376.3A CN106778491B (en) | 2016-11-14 | 2016-11-14 | The acquisition methods and equipment of face 3D characteristic information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106778491A CN106778491A (en) | 2017-05-31 |
CN106778491B true CN106778491B (en) | 2019-07-02 |
Family
ID=58971120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611036376.3A Active CN106778491B (en) | 2016-11-14 | 2016-11-14 | The acquisition methods and equipment of face 3D characteristic information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106778491B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481186B (en) * | 2017-08-24 | 2020-12-01 | Oppo广东移动通信有限公司 | Image processing method, image processing device, computer-readable storage medium and computer equipment |
CN108629278B (en) * | 2018-03-26 | 2021-02-26 | 奥比中光科技集团股份有限公司 | System and method for realizing information safety display based on depth camera |
CN108888487A (en) * | 2018-05-22 | 2018-11-27 | 深圳奥比中光科技有限公司 | A kind of eyeball training system and method |
CN113033387A (en) * | 2021-03-23 | 2021-06-25 | 金哲 | Intelligent assessment method and system for automatically identifying chronic pain degree of old people |
Non-Patent Citations (1)
Title |
---|
"3D Face Modeling and Standardization Based on RGB-D Data"; Fu Zehua; China Excellent Master's Theses Full-text Database, Information Science and Technology Series, 2016, No. 01, I138-516; 2016-01-15; pages 2-3, 17-21, 23, 31-33
Also Published As
Publication number | Publication date |
---|---|
CN106778491A (en) | 2017-05-31 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| CP01 | Change in the name or title of a patent holder | Patentee after: Obi Zhongguang Technology Group Co., Ltd. Patentee before: SHENZHEN ORBBEC Co., Ltd. Address (unchanged): 518057 Guangdong, Shenzhen, Nanshan District, Hing Road three No. 8, China University of Geosciences research base, building A808