WO2013091304A1 - Three-dimensional face recognition method based on intermediate frequency information in geometric images - Google Patents

Three-dimensional face recognition method based on intermediate frequency information in geometric images

Info

Publication number
WO2013091304A1
WO2013091304A1 (PCT/CN2012/071728)
Authority
WO
WIPO (PCT)
Prior art keywords
intermediate frequency
face
frequency information
information image
model
Prior art date
Application number
PCT/CN2012/071728
Other languages
English (en)
French (fr)
Inventor
达飞鹏
王朝阳
Original Assignee
东南大学 (Southeast University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东南大学 (Southeast University)
Priority to KR1020137001007A (granted as KR101314131B1)
Priority to US14/364,280 (granted as US9117105B2)
Publication of WO2013091304A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • The invention relates to a three-dimensional face recognition method based on intermediate frequency information in a geometric image. Mesh parameterization and linear interpolation are performed on any preprocessed three-dimensional face model to obtain a geometric image, and a multi-scale Haar wavelet filter is used to extract an intermediate frequency (IF) information image with identity discrimination from the geometric image as the expression-invariant feature of the 3D face model.
  • The similarity between the intermediate frequency information images of the test model and the library set model is calculated by the wavelet-domain structural similarity algorithm to determine the identity of the test model. The intermediate frequency information image of the three-dimensional face model proposed by the invention has good identity representation and effectively reduces the influence of expression changes on three-dimensional face recognition.
  • The wavelet-domain structural similarity algorithm accurately calculates the structural-information similarity of the intermediate frequency information images of the test model and the library set model, and significantly improves the recognition rate of the 3D face recognition method.
  • Biometrics has important applications in the security field. Compared with feature recognition technologies such as fingerprints and irises, automatic face recognition is increasingly used for its advantages of being non-contact, highly acceptable and unobtrusive, and has huge room for development.
  • 2D face recognition technology is limited by factors such as illumination, posture and makeup, whereas 3D face recognition technology can overcome or mitigate the adverse effects of these factors.
  • The 3D face model carries more information than the 2D image and is a more accurate description of the true form of the face.
  • However, the 3D face model has a large amount of data, many interference areas and a heavy computational load, and the non-rigid deformation caused by facial expression affects the performance of 3D face recognition methods based on geometric information. How to reduce the amount of calculation and the influence of facial expression has therefore become the bottleneck of 3D face recognition technology and a key research issue.
  • the invention provides a three-dimensional face recognition method based on intermediate frequency information in a geometric image, which can improve the recognition rate.
  • the invention adopts the following technical solutions:
  • A three-dimensional face recognition method based on intermediate frequency information in geometric images, characterized in that multi-scale Haar wavelet filtering is performed on the geometric images of the test model and the library set model respectively, and the horizontal, vertical and diagonal intermediate frequency information images of the test model and the library set model are obtained. The similarity of each pair of corresponding intermediate frequency information images is calculated by the wavelet-domain structural similarity algorithm, and the three similarities are added as the total similarity of the test model and the library set model. Finally, according to the similarities between the test face and each library set face in the 3D face database, the library set model with the largest similarity is determined as the recognition result.
  • the processing includes a pre-processing step, an intermediate frequency information image extraction step, a wavelet domain structured similarity calculation step, and an identification step.
  • Step 1 Preprocess the test model and the library set model respectively. The preprocessing is:
  • Step 1.1 Cut the face area: the position of the tip of the nose is determined. With the nose tip as the center of a sphere of radius 90 mm, points falling outside the sphere are discarded, and points inside the sphere are retained for subsequent processing.
  • Step 1.2 Smooth the face surface and correct the pose: the pose of the cut face point cloud is corrected by Principal Component Analysis (PCA). Three principal axis directions are obtained by PCA; with the nose tip as the origin, the eigenvector corresponding to the largest eigenvalue is taken as one in-plane axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, and a right-handed coordinate system is established as the spatial three-dimensional coordinate system. Each point in the face point cloud is then uniquely represented by its x, y and z coordinates in this system.
  • The face point cloud is uniformly sampled according to spatial distance with a sampling interval of 1 mm to obtain a thinned point cloud; the thinned point cloud is triangulated, and the side lengths of each spatial triangular patch in the generated three-dimensional face mesh are calculated and saved as ΔL_{i1}, ΔL_{i2}, ΔL_{i3}, i = 1, 2, …, n, where n is the number of triangular patches. With l̄ the average side length over all patches, any patch having an edge longer than 4·l̄ is discarded while its vertices are retained.
  • Step 2.1 Map the point cloud coordinate information of the test model and the library set model faces to the plane respectively, forming the geometric images of the test model and the library set model. The geometric image is obtained as follows:
  • Step 2.1.1 Grid parameterization: map the boundary points of the preprocessed 3D face mesh to the four sides of a regular quadrilateral of 512 × 512 pixels on the plane, and map the remaining points of the 3D face mesh into the quadrilateral region through grid parameterization to obtain a plane mesh. Taking any vertex of the quadrilateral as the origin and the directions of the two sides meeting at the origin as positive directions, a counterclockwise coordinate system MON is established in which any point on the plane is uniquely represented by its m and n coordinates. On the four sides of the quadrilateral, b points are uniformly sampled starting from the origin in the counterclockwise direction.
  • Step 2.2 Separately filter the geometric image G of the test model and the library set model to obtain their intermediate frequency information. The filtering of the geometric image is as follows:
  • Step 2.2.1 Perform multi-scale Haar wavelet filtering on the geometric image G. Step 2.2.1.1 Using the Haar transformation matrix H = (1/√2) [1 1; 1 −1], the geometric image G is subjected to row transformation and then column transformation to obtain a set of low-frequency coefficients and sets of horizontal, vertical and diagonal high-frequency coefficients, recorded as LL1, HL1, LH1 and HH1 respectively. Step 2.2.1.2 Following the procedure of Step 2.2.1.1, the low-frequency coefficient set is Haar-wavelet filtered again, and the low-frequency coefficient set and the horizontal, vertical and diagonal high-frequency coefficient sets of the second filtering are recorded as LL2, HL2, LH2 and HH2. The filtering loop is performed 5 times in total, each time taking the low-frequency coefficient set of the previous pass as input and outputting new low-frequency and horizontal, vertical and diagonal high-frequency coefficient sets.
  • Step 2.2.2 Extracting the intermediate frequency information image
  • Step 3 uses the wavelet-domain structural similarity algorithm to calculate the similarity between the test model and the library set model. The calculation is as follows:
  • Step 3.1 Calculate the similarity S_HL between the horizontal intermediate frequency information images of the test model and the library set model, the similarity S_LH between the vertical intermediate frequency information images, and the similarity S_HH between the diagonal intermediate frequency information images; S_HL, S_LH and S_HH are added as the similarity between the test model and the library set model. S_HL, S_LH and S_HH are each computed from the pair of horizontal, vertical or diagonal intermediate frequency information images to be matched by the wavelet-domain structural similarity algorithm, which is:
  • Step 3.1.1 According to the three attributes x, y, z of each pixel of the horizontal, vertical and diagonal intermediate frequency information images, the x attributes of all pixels of each image are arranged in the order of the pixels to which they belong, forming the x channel of that image; the y and z channels are constructed in the same way. Each channel is a 16 × 16 matrix whose element c_{1,1} lies in row 1, column 1, c_{1,2} in row 1, column 2, …, c_{2,1} in row 2, column 1, …, and c_{16,16} in row 16, column 16. Below, a horizontal, vertical or diagonal intermediate frequency information image is referred to simply as an intermediate frequency information image.
  • Let C denote the x, y or z channel of an intermediate frequency information image of the test model and C̃ the same channel of the corresponding intermediate frequency information image of the library set model, where the subscripts α and β denote the row and column of an element. For each 3 × 3 pixel neighborhood whose central element is c_{α,β}, the structural similarity of c_{α,β} and c̃_{α,β} is calculated.
  • The invention proposes a three-dimensional face recognition method that uses the intermediate frequency information image as the expression-invariant feature and calculates the similarity between the test model and the library set model with the wavelet-domain structural similarity algorithm.
  • The face information described by a 3D face model comprises three parts: overall information representing the appearance of the face contour, detail information representing the features of the face itself, and noise information representing the fine texture of the face surface.
  • When the expression changes, the contour appearance, i.e. the overall information, deforms greatly, while the features of the face itself, i.e. the detail information, change little. The details of the 3D face are therefore chosen as the expression-invariant feature, and the matching of the test model against the library set models is transformed into a match between expression-invariant features.
  • The present invention maps a three-dimensional face model to a geometric image and then uses a multi-scale Haar wavelet filter to decompose the geometric image into sub-images containing information of different frequency bands; the information in the middle frequency band corresponds to the detail information, and the sub-image containing it is extracted as an invariant feature of the three-dimensional face, called the intermediate frequency information image.
  • The similarity between the intermediate frequency information images of the test model and the library set model is calculated by the wavelet-domain structural similarity algorithm; the similarities between the test model and each library set model of the 3D face database are compared, and the library set model with the largest similarity is determined to belong to the same individual as the test model.
  • The invention converts the shape information of the three-dimensional face to the frequency domain and extracts the intermediate frequency information image as an expression-invariant feature.
  • The invention converts the face information into the frequency domain with the multi-scale Haar wavelet filter and decomposes it into mutually non-overlapping frequency bands: the low-frequency band corresponds to the overall information of the face, the middle band to the details of the face, and the high-frequency band to noise. Since the detail information of the face has a high degree of identity discrimination and is invariant to expression, the present invention extracts the sub-image containing the intermediate frequency information, called the intermediate frequency information image, as the feature for three-dimensional face recognition.
  • The multi-scale Haar wavelet filter generates intermediate frequency information images in the horizontal, vertical and diagonal directions. The horizontal intermediate frequency information image reflects the edge information of the face in the horizontal direction, embodying horizontal features such as the eyes and the mouth; the vertical intermediate frequency information image contains the edge information of the face in the vertical direction, embodying vertical features such as the nose; the diagonal intermediate frequency information image maintains the edge information of the face in the diagonal directions.
  • the horizontal, vertical and diagonal intermediate frequency information images are used together as the invariant features of the three-dimensional human face, which can comprehensively capture and represent the detailed information of the three-dimensional human face, and has strong expression robustness.
  • the wavelet domain structural similarity algorithm is used to calculate the similarity.
  • the wavelet domain structure similarity algorithm is a generalization and improvement of the structural similarity algorithm in the wavelet domain. This algorithm inherits the advantages of the structural similarity algorithm and is more suitable for the similarity calculation of the intermediate frequency information image in the wavelet domain.
  • the structural similarity algorithm quantitatively calculates the structural information difference of the image to be matched according to the way the human visual system senses the image.
  • the invention calculates the structural similarity of the horizontal, vertical and diagonal intermediate frequency information images of the test model and the library set model in the wavelet domain, and judges the identity of the test face according to the sum of the similarities of the intermediate frequency information images.
  • The present invention separately calculates the local structural similarity between corresponding pixels of each intermediate frequency information image of the test model and of the library set model, and finally uses the average of the local similarities as the similarity of the corresponding intermediate frequency information images.
  • the wavelet domain structural similarity algorithm used in the present invention can obtain the recognition result conforming to the perception habit of the human visual system, and improves the recognition accuracy of the three-dimensional face recognition system to some extent.
  • FIG. 1 is a flow chart of a three-dimensional face recognition method according to the present invention
  • Figure 3 shows the smoothed upper half of the face
  • Figure 4 shows the upper half of the face after point-cloud thinning
  • Figure 5 is a parametric grid
  • Figure 6 is a grayscale image of a geometric image.
  • Figure 7 is a schematic diagram of multi-scale wavelet filtering
  • Figure 8 is a horizontal, vertical and diagonal intermediate frequency information image
  • Figure 9 is a schematic diagram of the identification method
  • Figure 10 is a color map of a geometric image
  • the programming implementation tool uses Matlab R2009a.
  • the experimental data comes from the FRGC v2.0 3D face database, which is collected by the University of Notre Dame.
  • the database includes 4,007 3D face models of 466 people, mainly collected in the fall of 2003 and spring 2004.
  • each person's first three-dimensional face is used as a library set model, and the rest are used as test models;
  • FIG. 1 is a flow chart of a three-dimensional face recognition method according to the present invention.
  • Figure 5 is a parametric grid.
  • The pre-processed 3D face mesh is parameterized and mapped to a 2D mesh of 512 × 512 pixels on the plane.
  • Figure 6 is the grayscale map of the geometric image: the three-dimensional coordinates of the face mesh vertices are attached to the corresponding vertices of the parametric mesh, and linear interpolation then determines the attributes of each pixel in the regular quadrilateral region, yielding a two-dimensional geometric image with three-dimensional coordinate attributes, i.e. the geometric image of the face; this figure shows the geometric image as a grayscale image.
  • Figure 7 is a schematic diagram of multi-scale Haar wavelet filtering.
  • The geometric image is subjected to row transformation and then column transformation with the Haar transformation matrix to obtain a set of low-frequency coefficients and sets of high-frequency coefficients in the horizontal, vertical and diagonal directions; the set of low-frequency coefficients is again Haar-wavelet filtered, outputting a new set of low-frequency coefficients and new horizontal, vertical and diagonal high-frequency coefficient sets.
  • Figure 8 is a horizontal, vertical, and diagonal intermediate frequency information image.
  • FIG. 9 is a schematic diagram of the identification method: the similarity between the test model and each library set model is calculated, and the library set model with the maximum similarity is judged to belong to the same individual as the test model.
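The decision rule of Figure 9 can be sketched in a few lines of Python; the helper name and the dictionary input are illustrative, not from the patent:

```python
def identify(similarities):
    """Return the id of the library set model with the maximum total
    similarity against the test model.

    `similarities` maps each library model id to its summed similarity
    S_HL + S_LH + S_HH (hypothetical input format).
    """
    return max(similarities, key=similarities.get)
```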
  • Figure 10 is a color map of the geometric image. Each pixel of the geometric image has three-dimensional coordinate attributes x, y and z; this figure uses the x, y and z attributes as the RGB channels of a color image and displays the geometric image as a color map. It shows the same geometric image as Figure 6.
  • the processing steps for the test model and the library set model include a pre-processing step, an intermediate frequency information image extraction step, a wavelet domain structure similarity calculation step, and an identification step.
  • Step 1 preprocesses the test model and the library set model respectively, and the preprocessing is:
  • Step 1.1 Cut the face area: the position of the tip of the nose is determined according to the shape index and geometric constraints of the face point cloud.
  • The shape index S(w) of any point w on the face point cloud is determined by its maximum principal curvature κ1(w) and minimum principal curvature κ2(w).
  • The shape index feature indicates the degree of convexity or concavity in the neighborhood of a point; the more convex the shape, the larger the shape index value.
  • The centroid position of the face point cloud is calculated, and among the nose-tip candidate areas the connected area closest to the centroid is selected as the nose tip area; the center of mass of this area is selected as the nose tip.
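The shape index formula itself is garbled in this extraction; the standard definition in terms of the two principal curvatures, which the description appears to follow (possibly up to scaling), is:

```latex
S(w) = \frac{1}{2} - \frac{1}{\pi}\,\arctan\!\frac{\kappa_1(w) + \kappa_2(w)}{\kappa_1(w) - \kappa_2(w)}, \qquad \kappa_1(w) \ge \kappa_2(w)
```

Under this convention the index approaches 1 for convex, cap-like regions such as the nose tip and 0 for concave pits, consistent with the statement that more convex shapes correspond to larger shape index values.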
  • With the tip of the nose as the center, a sphere of radius 90 mm is constructed; points falling outside the sphere are discarded, and points inside the sphere are retained as the face area for subsequent processing.
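A minimal sketch of this spherical cropping step (function name and array layout are assumptions, not from the patent):

```python
import numpy as np

def crop_face(points, nose_tip, radius=90.0):
    """Keep only points within `radius` mm of the nose tip (sphere cut).

    `points` is an (N, 3) array of the raw face point cloud in mm;
    `nose_tip` is the detected nose-tip coordinate.
    """
    dist = np.linalg.norm(points - nose_tip, axis=1)
    return points[dist <= radius]
```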
  • Step 1.2 Smooth the face surface and correct the pose: the pose of the cut face point cloud is corrected by the Principal Component Analysis (PCA) method. Three principal axis directions are obtained by PCA; with the nose tip as the origin, the eigenvector corresponding to the largest eigenvalue is taken as one in-plane axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, and a right-handed coordinate system is established as the spatial three-dimensional coordinate system. Each point in the face point cloud is then uniquely represented by its x, y and z coordinates in this system.
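The PCA pose correction above can be sketched with NumPy; the exact mapping of eigenvectors to named axes is partly garbled in the source, so the ordering used here (largest-variance direction first, smallest as the Z axis) is an assumption:

```python
import numpy as np

def pca_pose_correct(points, nose_tip):
    """Align a face point cloud with the principal axes found by PCA.

    The nose tip becomes the origin; the eigenvector with the largest
    eigenvalue becomes the first axis and the eigenvector with the
    smallest eigenvalue becomes the Z axis, with a right-handed frame
    enforced. Illustrative sketch, not the patent's exact procedure.
    """
    centered = points - nose_tip
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    R = eigvecs[:, [2, 1, 0]]                # largest-variance axis first
    if np.linalg.det(R) < 0:                 # enforce a right-handed frame
        R[:, 2] = -R[:, 2]
    return centered @ R
```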
  • the face point cloud in the spatial 3D coordinate system is projected onto the XOY plane, and then the 2D meshing operation of the projected point cloud is performed, and the surface reconstruction is performed by the 2.5-dimensional meshing algorithm of the point cloud to obtain an approximate representation of the face surface.
  • Step 1.3 Cutting the upper half of the face
  • The spatial sampling method is used to thin the points of the upper half of the face. This thinning is simple and effective: the number of points is reduced without distortion, and a spatially fairly uniform point set is obtained. The spatial sampling interval in the present invention is 1 mm.
  • the specific dilution method is as follows:
  • The thinned upper-half face model is triangulated again, generating n triangular faces. The side lengths ΔL_{i1}, ΔL_{i2}, ΔL_{i3}, i = 1, 2, …, n, of each spatial triangle in the generated 3D face mesh are calculated and saved. With l̄ the average side length over all triangular patches, any patch having an edge longer than 4·l̄ is discarded while its vertices are retained.
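The 1 mm spatial thinning and the long-edge patch rejection can be sketched as follows; the helper names, the voxel-grid thinning strategy, and the index-array triangle format are assumptions, not details from the patent:

```python
import numpy as np

def thin_point_cloud(points, spacing=1.0):
    """Uniform spatial sampling: keep one point per `spacing`-mm cell."""
    keys = np.floor(points / spacing).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def drop_long_edge_triangles(vertices, triangles):
    """Discard triangles having any edge longer than 4x the mean edge
    length over all patches, keeping their vertices implicitly.

    `triangles` is an (n, 3) array of vertex indices into `vertices`.
    """
    v = vertices
    e = np.stack([
        np.linalg.norm(v[triangles[:, 0]] - v[triangles[:, 1]], axis=1),
        np.linalg.norm(v[triangles[:, 1]] - v[triangles[:, 2]], axis=1),
        np.linalg.norm(v[triangles[:, 2]] - v[triangles[:, 0]], axis=1),
    ], axis=1)
    mean_edge = e.mean()
    return triangles[(e <= 4.0 * mean_edge).all(axis=1)]
```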
  • The preprocessing transforms the test model and the library set model into three-dimensional face meshes with consistent smoothness and point density.
  • Step 2.1 Map the point cloud coordinate information of the test model and the library set model face to the plane respectively, and form the geometric image of the test model and the library set model respectively.
  • the method for obtaining the geometric image is as follows:
  • The plane mesh is obtained; any vertex of the regular quadrilateral on the plane is taken as the origin, the directions of the two sides meeting at the origin are the positive directions, a counterclockwise coordinate system MON is established, and any point on the plane is uniquely represented by its m and n coordinates.
  • b points are uniformly sampled from the origin in the counterclockwise direction, and the coordinates of the sampling points are recorded.
  • Step 2.1.2 Generate a geometric image
  • The three-dimensional coordinates (x, y, z) of each face mesh vertex are attached as attributes of the corresponding point (m, n) of the plane mesh; linear interpolation then determines the attribute of each pixel within the regular quadrilateral region, giving a two-dimensional image with three-dimensional coordinate attributes, called the geometric image G.
  • Step 2.2 Filter the geometric image G of the test model and the library set model respectively, and obtain the intermediate frequency information of the test model and the library set model.
  • the filtering of the geometric image adopts the following methods:
  • Step 2.2.1.1 Divide the geometric image G of size 512 × 512 pixels into blocks of 2 × 2 pixels; Haar wavelet filtering is applied to each block with the Haar transformation matrix H = (1/√2) [1 1; 1 −1]. Let A = [a11 a12; a21 a22] be a block, where a11, a12, a21 and a22 are its elements.
  • Row transformation and column transformation are performed on A in turn, giving the filtered block B = H·A·Hᵀ = [b11 b12; b21 b22], where b11 is the low-frequency approximation coefficient of block A, b12 the horizontal high-frequency component, b21 the vertical high-frequency component, and b22 the diagonal high-frequency component.
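The per-block filtering can be written directly from the transformation matrix; which off-diagonal entry is "horizontal" versus "vertical" is an assumption here, since the symbols are garbled in the source:

```python
import numpy as np

# Haar transformation matrix used for the 2x2 block filtering.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

def haar_block(A):
    """Row transform then column transform of one 2x2 block.

    Returns B = H @ A @ H.T: B[0, 0] is the low-frequency approximation
    coefficient, and the remaining entries are the horizontal, vertical
    and diagonal high-frequency components (labeling assumed).
    """
    return H @ np.asarray(A, dtype=float) @ H.T
```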
  • The low-frequency approximation coefficients of all blocks are arranged in the order of the block to which they belong, forming the low-frequency coefficient set LL1; the horizontal high-frequency components are arranged in the order of the block to which they belong, forming the horizontal high-frequency coefficient set HL1; the vertical high-frequency components are arranged likewise, forming the vertical high-frequency coefficient set LH1; and the diagonal high-frequency components are arranged likewise, forming the diagonal high-frequency coefficient set HH1.
  • Haar wavelet filtering is performed again on the low-frequency coefficient set, and the low-frequency coefficient set and the horizontal, vertical and diagonal high-frequency coefficient sets of the second filtering are output, recorded as LL2, HL2, LH2 and HH2 respectively.
  • The loop filtering is performed 5 times in this way, each time taking the low-frequency coefficient set output by the previous filtering as input and outputting a new low-frequency coefficient set and horizontal, vertical and diagonal high-frequency coefficient sets.
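The five-level loop can be sketched as follows; the closed-form 2 × 2 block expressions below are algebraically equivalent to applying H·A·Hᵀ per block, and the HL/LH band naming follows the convention used above (an assumption where the source is garbled):

```python
import numpy as np

def haar_level(img):
    """One level of 2x2 block Haar filtering of an even-sized image.

    Returns (LL, HL, LH, HH) coefficient sets, each half the input size.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0
    HL = (a - b + c - d) / 2.0   # horizontal high-frequency set
    LH = (a + b - c - d) / 2.0   # vertical high-frequency set
    HH = (a - b - c + d) / 2.0   # diagonal high-frequency set
    return LL, HL, LH, HH

def multiscale_haar(img, levels=5):
    """Repeat the filtering `levels` times, each pass feeding the
    previous low-frequency set back in, as described above."""
    bands = []
    low = np.asarray(img, dtype=float)
    for _ in range(levels):
        low, hl, lh, hh = haar_level(low)
        bands.append((hl, lh, hh))
    return low, bands
```

Starting from a 512 × 512 geometric image, five passes leave 16 × 16 coefficient sets, consistent with the 16 × 16 channels used in Step 3.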
  • Step 2.2.2 Extracting the intermediate frequency information image
  • The 3D face mesh is thus converted into horizontal, vertical and diagonal intermediate frequency information images, and the matching of the test model and the library set model is converted into a matching of the corresponding intermediate frequency information images.
  • Step 3 Calculate the similarity between the test model and the library set model by using the wavelet domain structured similarity algorithm.
  • the calculation method is as follows:
  • Step 3.1 Calculate the similarity S_HL between the horizontal intermediate frequency information images of the test model and the library set model, the similarity S_LH between the vertical intermediate frequency information images, and the similarity S_HH between the diagonal intermediate frequency information images; S_HL, S_LH and S_HH are added as the similarity between the test model and the library set model.
  • S_HL, S_LH and S_HH are each computed from the pair of horizontal, vertical or diagonal intermediate frequency information images to be matched by the wavelet-domain structural similarity algorithm, which is:
  • Step 3.1.1 According to the three attributes x, y, z of each pixel of the horizontal, vertical and diagonal intermediate frequency information images, the x attributes of all pixels of each image are arranged in the order of the pixels to which they belong, forming the x channel of that image; the y and z channels are constructed in the same way.
  • Each channel is recorded as a 16 × 16 matrix: c_{1,1} is the element in row 1, column 1; c_{1,2} the element in row 1, column 2; …; c_{2,1} the element in row 2, column 1; …; c_{16,16} the element in row 16, column 16. A horizontal, vertical or diagonal intermediate frequency information image is referred to below simply as an intermediate frequency information image.
  • The similarities S_HL, S_LH and S_HH are obtained as follows: let C denote the x, y or z channel of an intermediate frequency information image of the test model and C̃ the same channel of the corresponding intermediate frequency information image of the library set model, where α and β denote the row and column of an element in C and C̃. For each 3 × 3 pixel neighborhood whose central element is c_{α,β}, the structural similarity of c_{α,β} and the corresponding c̃_{α,β} is calculated, and the local similarities are combined into the channel similarity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D face recognition method based on intermediate frequency information in geometric images, comprising the following steps: (1) preprocess the gallery models and test models of 3D faces, including 3D face region cropping, smoothing and point cloud thinning, finally cutting away the point set near the mouth and keeping the upper half of the face; (2) map the upper half face onto a 2D grid by mesh parameterization, obtain the 3D coordinate attributes of each pixel by linear interpolation of the 3D coordinates of the grid vertices, and generate the geometric image of the 3D face model; (3) apply multi-scale Haar wavelet filtering to the geometric image and extract the horizontal, vertical and diagonal intermediate frequency information images as expression-invariant features of the face; (4) compute the similarity between the test model and the gallery models with a wavelet-domain structural similarity algorithm; (5) according to the similarities between the test model and the gallery models in the 3D face gallery, judge the gallery model with the greatest similarity to belong to the same individual as the test model.

Description

3D face recognition method based on intermediate frequency information in geometric images — Technical field
The present invention relates to a 3D face recognition method based on intermediate frequency information in geometric images. Any preprocessed 3D face model is mesh-parameterized and linearly interpolated to obtain a geometric image; a multi-scale Haar wavelet filter extracts from the geometric image identity-discriminative intermediate frequency (IF) information images as expression-invariant features of the 3D face model; a wavelet-domain structural similarity algorithm computes the similarity of the IF information images of the test model and the gallery models to determine the identity of the test model. The IF information images proposed by the invention characterize identity well and effectively reduce the influence of expression variation on 3D face recognition. The wavelet-domain structural similarity algorithm accurately measures the structural similarity of the IF information images of the test and gallery models and markedly improves the recognition rate of the 3D face recognition method.
Background art
Biometric recognition has important applications in the security field. Compared with fingerprint, iris and other biometric techniques in particular, automatic face recognition has attracted growing attention for its contactless operation, high acceptability and good unobtrusiveness, and has great room for development.
Traditional photograph-based face recognition is limited by illumination, pose, makeup and similar factors, whereas 3D face recognition can overcome or mitigate these adverse effects. A 3D face model carries richer information than a 2D image and describes the true spatial shape of the face more accurately. However, 3D face models involve large data volumes, many interfering regions and heavy computation, and the non-rigid deformation caused by facial expression degrades the performance of geometry-based 3D face recognition. How to reduce the computational load and the influence of expression has therefore become the bottleneck of 3D face recognition and a key research problem.
Summary of the invention
The invention provides a 3D face recognition method based on intermediate frequency information in geometric images that improves the recognition rate. The invention adopts the following technical scheme:
A 3D face recognition method based on intermediate frequency information in geometric images, characterized in that multi-scale Haar wavelet filtering is applied to the geometric images of the test model and of the gallery models to obtain their horizontal, vertical and diagonal intermediate frequency information images; the wavelet-domain structural similarity algorithm computes the similarities of the corresponding IF information images, which are summed as the total similarity of the test model and a gallery model; finally, according to the similarities of the test face to the gallery faces of the 3D face gallery, the gallery model with the greatest similarity is taken as the recognition result. The processing comprises a preprocessing step, an IF information image extraction step, a wavelet-domain structural similarity computation step and a recognition step.
Step 1 Preprocess the test model and the gallery models respectively, the preprocessing being:
Step 1.1 Face cropping
Determine the nose tip position from the Shape Index feature of the face point cloud together with geometric constraints; with that point as center, form a sphere of radius 90 mm, discard the points falling outside the sphere, and keep the points inside the sphere as the face region for subsequent processing;
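The sphere cropping of step 1.1 can be sketched as follows. This is a minimal illustration, not the patented implementation; the point cloud is assumed to be an N × 3 array and the nose tip already located.

```python
import numpy as np

def crop_face(points, nose_tip, radius=90.0):
    """Keep only the points within `radius` (mm) of the nose tip.

    points:   (N, 3) array of x, y, z coordinates
    nose_tip: (3,) coordinates of the detected nose tip
    """
    d = np.linalg.norm(points - nose_tip, axis=1)
    return points[d <= radius]
```

Points on the sphere boundary are kept, matching the claim's "keep the points inside the sphere".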
Step 1.2 Face surface smoothing
Pose-correct the cropped face point cloud with principal component analysis (PCA): PCA yields three mutually perpendicular principal axes; with the nose tip as origin, take the eigenvector corresponding to the largest eigenvalue as the Y axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, establishing a right-handed coordinate system used as the spatial 3D coordinate system, in which every point of the face point cloud is uniquely represented by its x, y, z coordinates;
Triangulate the face point cloud in this coordinate system to obtain a spatial triangular mesh, then smooth and denoise the face region with a mesh-based smoothing algorithm; after 10 iterations, a surface-smoothed 3D face mesh is obtained;
Step 1.3 Cropping the upper half face
Discard the points of the 3D face mesh lying below the plane y = −10, keeping the upper half face, which is less affected by expression;
Step 1.4 Point cloud thinning
Sample the face point cloud uniformly by spatial distance at an interval of 1 mm to obtain a thinned point cloud; triangulate the thinned cloud; compute and store the edge lengths γ_i1, γ_i2, γ_i3, i = 1, 2, …, η of every spatial triangular facet of the resulting 3D face mesh, where η is the number of facets in the mesh; let γ̄ be the mean of all facet edge lengths; if a facet has an edge longer than 4γ̄, discard the facet and keep its vertices;
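The PCA pose correction of step 1.2 can be sketched as below. This is an assumed reading of the step (the eigenvector of the largest eigenvalue as Y, the smallest as Z, X completing a right-handed system), not the exact patented code.

```python
import numpy as np

def pca_pose_correct(points, nose_tip):
    """Pose-correct a face point cloud with PCA, as in step 1.2.

    The nose tip becomes the origin; the eigenvector of the largest
    eigenvalue of the covariance matrix is used as the Y axis, the
    eigenvector of the smallest as the Z axis, and the X axis
    completes a right-handed coordinate system.
    """
    centered = points - nose_tip
    cov = np.cov(centered.T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    y_axis = v[:, 2]                    # largest eigenvalue -> Y axis
    z_axis = v[:, 0]                    # smallest eigenvalue -> Z axis
    x_axis = np.cross(y_axis, z_axis)   # right-handed completion
    R = np.stack([x_axis, y_axis, z_axis], axis=1)
    return centered @ R                 # coordinates in the new axes
```

After correction, the variance of the cloud is largest along Y and smallest along Z, as the step requires.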
Step 2.1 Map the point cloud coordinate information of the test model face and of each gallery model face onto the plane, forming the geometric images of the test model and the gallery models respectively; the geometric image is obtained as follows:
Step 2.1.1 Mesh parameterization
Map the boundary points of the preprocessed 3D face mesh onto the four sides of a 512 × 512-pixel square in the plane, and map the non-boundary points of the 3D face mesh into the square region by mesh parameterization, obtaining a planar mesh. Take any vertex of the square as the origin and the directions of the two sides meeting at the origin as positive directions, establishing a counterclockwise coordinate system MON in which any point of the plane is uniquely represented by its m, n coordinates. On the four sides of the square, sample b points uniformly counterclockwise from the origin, with coordinates (m_t^0, n_t^0), t = 1, 2, …, b, where b is the number of boundary points of the 3D face mesh. Denote by f_q, q = 1, 2, …, r the vertices of the 3D face mesh, r being the number of vertices, and by (m_q, n_q) the coordinates of the corresponding point in the square region; m_q and n_q are the solution of the linear system

L·m_q = L·n_q = 0,  for all f_q ∉ B,
(m_q, n_q) = (m_t^0, n_t^0),  for all f_q ∈ B,

where L is the Laplacian matrix of the 3D face mesh and B is the set of its boundary points;
Step 2.1.2 Generating the geometric image
Attach the 3D coordinates of each face mesh vertex f_q = (x_q, y_q, z_q) to the corresponding point (m_q, n_q) as the attributes of that point, then determine the attributes of every pixel inside the square region by linear interpolation, obtaining a 2D image whose pixels carry 3D coordinate attributes, called the geometric image G;
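The boundary-constrained Laplacian system of step 2.1.1 can be sketched as a dense linear solve. This is an illustrative sketch: the Laplacian `L`, boundary indices and target square coordinates are assumed given, and a direct solver stands in for whatever solver the implementation uses.

```python
import numpy as np

def parameterize(L, boundary_idx, boundary_uv):
    """Solve L m = L n = 0 for interior vertices while pinning the
    boundary vertices to their sampled positions on the square
    (step 2.1.1).

    L:            (r, r) mesh Laplacian matrix
    boundary_idx: indices of the boundary vertices (the set B)
    boundary_uv:  (b, 2) target (m, n) coordinates on the square
    """
    r = L.shape[0]
    A = L.astype(float).copy()
    rhs = np.zeros((r, 2))
    for k, i in enumerate(boundary_idx):
        A[i, :] = 0.0
        A[i, i] = 1.0            # pinned row enforces (m_i, n_i) = (m_i^0, n_i^0)
        rhs[i] = boundary_uv[k]
    return np.linalg.solve(A, rhs)   # (r, 2) planar coordinates
```

With a uniform Laplacian, an interior vertex lands at the average of its neighbors' planar positions, which is the harmonic behavior the step relies on.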
Step 2.2 Filter the geometric images G of the test model and of the gallery models to obtain their intermediate frequency information; the geometric image is filtered as follows:
Step 2.2.1 Multi-scale Haar wavelet filtering of the geometric image G
Step 2.2.1.1 Using the Haar transform matrix H = (1/√2)·[1 1; 1 −1], apply a row transform and then a column transform to the geometric image G, obtaining a set of low-frequency coefficients and sets of horizontal, vertical and diagonal high-frequency coefficients; denote the low-frequency set by LL1 and the horizontal, vertical and diagonal high-frequency sets by HL1, LH1 and HH1 respectively;
Step 2.2.1.2 Following step 2.2.1.1, filter the low-frequency coefficient set again with the Haar wavelet, outputting the second-pass low-frequency set and horizontal, vertical, diagonal high-frequency sets, denoted LL2, HL2, LH2 and HH2; repeat the filtering 5 times in all, each pass taking the low-frequency set produced by the previous pass as input and outputting new low-frequency and horizontal, vertical, diagonal high-frequency sets;
Step 2.2.2 Extracting the intermediate frequency information images
Extract and store the horizontal high-frequency set HL5, the vertical high-frequency set LH5 and the diagonal high-frequency set HH5 output by the last filtering pass; with the elements of HL5, LH5 and HH5 as pixel attributes, they form three 16 × 16-pixel images, called the horizontal, vertical and diagonal intermediate frequency information images;
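The five-level Haar decomposition of step 2.2 can be sketched directly from the 2 × 2 block form of the transform (a 512 × 512 channel becomes 16 × 16 after five passes). The normalization below matches Ā = HᵀAH with H = (1/√2)·[1 1; 1 −1]; it is a sketch per channel, not the full three-channel pipeline.

```python
import numpy as np

def haar_level(img):
    """One Haar filtering pass on an even-sized 2D array: returns the
    low-frequency set LL and the horizontal, vertical and diagonal
    high-frequency sets HL, LH, HH (step 2.2.1.1, block form)."""
    a = img[0::2, 0::2]          # top-left element of each 2x2 block
    b = img[0::2, 1::2]          # top-right
    c = img[1::2, 0::2]          # bottom-left
    d = img[1::2, 1::2]          # bottom-right
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    hl = (a - b + c - d) / 2.0   # horizontal high-frequency component
    lh = (a + b - c - d) / 2.0   # vertical high-frequency component
    hh = (a - b - c + d) / 2.0   # diagonal high-frequency component
    return ll, hl, lh, hh

def mid_frequency_images(geom_channel, levels=5):
    """Filter one 512x512 geometric-image channel `levels` times, each
    pass re-filtering the previous low-frequency output, and return the
    last pass's HL, LH, HH sets -- the 16x16 IF information images of
    step 2.2.2."""
    ll = geom_channel
    for _ in range(levels):
        ll, hl, lh, hh = haar_level(ll)
    return hl, lh, hh
```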
Step 3 Compute the similarity between the test model and each gallery model with the wavelet-domain structural similarity algorithm, as follows:
Step 3.1 Compute the similarity S_HL between the horizontal IF information image of the test model and that of the gallery model, the similarity S_LH between the vertical IF information images, and the similarity S_HH between the diagonal IF information images; the sum of S_HL, S_LH and S_HH is taken as the similarity between the test model and the gallery model. S_HL, S_LH and S_HH are each obtained by applying the wavelet-domain structural similarity algorithm to the pair of horizontal, vertical or diagonal IF information images to be matched, the algorithm being:
Step 3.1.1 Using the three attributes x, y, z of each pixel of the horizontal, vertical and diagonal IF information images, arrange the x attributes of all pixels of each image in pixel order to form that image's x channel; in the same way form the y and z channels of the horizontal, vertical and diagonal IF information images, written

C_t = [ c_1,1 c_1,2 … c_1,16 ; c_2,1 … c_2,16 ; … ; c_16,1 … c_16,16 ],

where t is x, y or z, denoting the x, y or z channel; c_1,1 is the element in row 1, column 1 of C_t, c_1,2 the element in row 1, column 2, …, c_2,1 the element in row 2, column 1, …, and c_16,16 the element in row 16, column 16. Each of the horizontal, vertical or diagonal IF information images is referred to as an IF information image. Compute the similarity S_x of the x channels of the two IF information images to be matched, the similarity S_y of the y channels and the similarity S_z of the z channels, and take S_x + S_y + S_z as the similarity S_HL, S_LH or S_HH of the corresponding pair of IF information images. S_x, S_y and S_z are obtained as follows:
Let C_p denote the x, y or z channel of an IF information image of the test model and C_g the same channel of the corresponding IF information image of the gallery model, where p indicates the test model and g the gallery model; let α and β index the rows and columns of the elements of C_p and C_g; let C_p(α, β) denote the 3 × 3 pixel neighborhood in C_p whose central element is c^p_α,β, and C_g(α, β) the 3 × 3 pixel neighborhood in C_g whose central element is c^g_α,β. The structural similarity of C_p(α, β) and C_g(α, β) is

s(α, β) = ( 2·| Σ_ε1,ε2 c^p_ε1,ε2 · (c^g_ε1,ε2)* | + 0.1 ) / ( Σ_ε1,ε2 |c^p_ε1,ε2|² + Σ_ε1,ε2 |c^g_ε1,ε2|² + 0.1 ),

where ε1 and ε2 run over the row and column subscripts of the elements of C_p(α, β) and C_g(α, β), and (c^g_ε1,ε2)* is the conjugate of c^g_ε1,ε2;
Letting α = 2, 3, …, 15 and β = 2, 3, …, 15, take the mean of s(α, β) as the structural similarity of C_p and C_g:

S = (1/196) · Σ_α=2..15 Σ_β=2..15 s(α, β);
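The wavelet-domain structural similarity of step 3.1.1 can be sketched for one channel as below: a local similarity on 3 × 3 neighborhoods with stabilizing constant 0.1, averaged over the 196 interior positions α, β = 2…15.

```python
import numpy as np

def wavelet_ssim_channel(Cp, Cg, K=0.1):
    """Wavelet-domain structural similarity of two 16x16 channels
    (step 3.1.1): local similarity over 3x3 neighborhoods, averaged
    over the 14x14 interior positions (alpha, beta = 2..15, 1-based)."""
    total = 0.0
    for a in range(1, 15):                 # 0-based rows 1..14 == rows 2..15
        for b in range(1, 15):
            p = Cp[a - 1:a + 2, b - 1:b + 2]
            g = Cg[a - 1:a + 2, b - 1:b + 2]
            num = 2.0 * abs(np.sum(p * np.conj(g))) + K
            den = np.sum(np.abs(p) ** 2) + np.sum(np.abs(g) ** 2) + K
            total += num / den
    return total / 196.0                   # mean over the 196 positions
```

For identical channels every local term equals 1, so the similarity is exactly 1; the image similarity S_HL, S_LH or S_HH is then the sum of this quantity over the x, y and z channels.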
Step 4 3D face identification
Repeat steps 1–3 to obtain the similarity between the test model and each gallery model; compare these similarities and judge the gallery model with the greatest similarity and the test model to be the same individual.
Aimed at the expression-variation problem in 3D face recognition, the present invention proposes a 3D face recognition method that takes IF information images as expression-invariant features and computes the similarity of the test and gallery models with a wavelet-domain structural similarity algorithm.
From a visual point of view, the face information described by a 3D face model comprises three parts: global information characterizing the facial contour and appearance, detail information representing the face's own features, and noise information representing fine surface texture. Under expression changes, especially an open mouth, the contour — the global information — deforms considerably, while the face's own features — the detail information — do not change with it. We therefore choose the detail information of the 3D face as the expression-invariant feature and convert the matching of test and gallery models into matching between expression-invariant features. The invention maps the 3D face model to a geometric image and decomposes the geometric image with a multi-scale Haar wavelet filter into sub-images containing information of different frequency bands, the intermediate band corresponding to the detail information; the sub-images containing intermediate-band information are extracted as the expression-invariant features of the 3D face, called IF information images. Finally, the wavelet-domain structural similarity algorithm computes the similarity of the IF information images of the test model and each gallery model, the similarities of the test model to the gallery models are compared, and the gallery model with the greatest similarity is judged to belong to the same individual as the test model.
The advantages and features of the invention are as follows:
1) The face point cloud is thinned. In general, a denser point cloud carries more information but takes longer to process, which has always required a trade-off. When generating the geometric image, however, mesh parameterization and linear interpolation do not lose much useful information through thinning. Conversely, without thinning, the computation required to obtain the geometric image of the 3D face model grows geometrically, seriously hurting recognition efficiency and real-time performance.
2) The shape information of the 3D face is transformed to the frequency domain, and IF information images are extracted as expression-invariant features. The multi-scale Haar wavelet filter transforms the face information to the frequency domain and decomposes it into non-overlapping bands: the low band corresponds to the global information of the face, the intermediate band to the detail information, and the high band to noise. Since facial detail information has high identity discriminability and expression invariance, the invention extracts the sub-images containing intermediate-band information as the features for 3D face recognition, called IF information images. Moreover, the filter produces IF information images in the horizontal, vertical and diagonal directions: the horizontal image reflects the edge information of the face along the horizontal direction, embodying the horizontal features of the eyes and mouth; the vertical image contains the vertical edge information, embodying vertical features such as the nose; the diagonal image preserves the diagonal edge information. Taking the horizontal, vertical and diagonal IF information images together as the expression-invariant feature captures and expresses the detail information of the 3D face comprehensively and is strongly robust to expression.
3) The wavelet-domain structural similarity algorithm is used to compute similarity. It is the generalization and refinement of the structural similarity algorithm in the wavelet domain; it inherits the advantages of structural similarity and is better suited to similarity computation between IF information images in the wavelet domain. The structural similarity algorithm quantifies the difference in structural information between the images to be matched in the way the human visual system perceives images. The invention computes, in the wavelet domain, the structural similarities of the horizontal, vertical and diagonal IF information images of the test and gallery models, and identifies the test face from the sum of the IF image similarities. Because local features of different face regions carry rich detail information, when computing the similarity of the test and gallery models the invention computes the local structural similarity between each pixel of the test model's IF information image and the corresponding pixel of the gallery model's corresponding image, and finally takes the mean of the local similarities as the similarity of the corresponding IF information images. Compared with traditional error-based similarity algorithms, the wavelet-domain structural similarity algorithm used here yields recognition results consistent with the perceptual habits of the human visual system and improves, to a certain extent, the recognition accuracy of the 3D face recognition system.
Brief description of the drawings
Fig. 1 is a flowchart of the 3D face recognition method of the invention
Fig. 2 is the original face
Fig. 3 is the smoothed upper half face
Fig. 4 is the upper half face after point cloud thinning
Fig. 5 is the parameterized mesh
Fig. 6 is the grayscale rendering of the geometric image
Fig. 7 is a diagram of multi-scale wavelet filtering
Fig. 8 shows the horizontal, vertical and diagonal IF information images
Fig. 9 is a diagram of the recognition method
Fig. 10 is the color rendering of the geometric image
Detailed description of the embodiments:
Specific embodiments of the invention are described below in more detail with reference to the drawings. Matlab R2009a was chosen as the implementation tool; the experimental data come from the FRGC v2.0 3D face database collected by the University of Notre Dame (USA), which contains 4007 3D face models of 466 subjects, acquired mainly in autumn 2003 and spring 2004. Here the first 3D face of each subject is used as the gallery model and the rest as test models;
Fig. 1 is the flowchart of the 3D face recognition method of the invention;
Fig. 5 is the parameterized mesh: the preprocessed 3D face mesh is parameterized and mapped onto a 512 × 512-pixel 2D grid in the plane; Fig. 6 is the grayscale rendering of the geometric image: the 3D coordinates of the face mesh vertices are attached to the corresponding vertices of the parameterized mesh, and linear interpolation determines the attributes of each pixel in the square region, giving the 2D geometric image of the face with 3D coordinate attributes, displayed here in grayscale;
Fig. 7 illustrates multi-scale Haar wavelet filtering: the Haar transform matrix is first applied to the geometric image as a row transform and then a column transform, producing a low-frequency coefficient set and horizontal, vertical and diagonal high-frequency coefficient sets; the low-frequency set is filtered again with the Haar wavelet, and so on cyclically, each pass taking the low-frequency set output by the previous pass as input and outputting new low-frequency and horizontal, vertical, diagonal high-frequency sets; Fig. 8 shows the horizontal, vertical and diagonal IF information images HL5, LH5, HH5 formed from the high-frequency coefficient sets output by the 5th Haar filtering pass; Fig. 9 illustrates the recognition method: for one test model and the gallery models, the similarity of the test model to each gallery model is computed, and the gallery model with the greatest similarity is judged to belong to the same individual as the test model;
Fig. 10 is the color rendering of the geometric image: each pixel of the geometric image has 3D coordinate attributes x, y, z, used here as the RGB attributes of a color image; Figs. 6 and 10 show the same geometric image.
The processing of the test and gallery models comprises a preprocessing step, an IF information image extraction step, a wavelet-domain structural similarity computation step and a recognition step.
Step 1 Preprocess the test model and the gallery models respectively, the preprocessing being:
Step 1.1 Face cropping
The nose tip position is determined from the Shape Index feature of the face point cloud and geometric constraints. The shape index SI(u) of any point u of the face point cloud is determined by its maximum principal curvature κ1(u) and minimum principal curvature κ2(u):

SI(u) = 1/2 − (1/π) · arctan( (κ1(u) + κ2(u)) / (κ1(u) − κ2(u)) ).

The shape index feature expresses the convexity of the neighborhood around a point; the more convex the surface, the larger the index. Compute the shape index of every point of the face point cloud and take the connected regions formed by points whose shape index lies in the range (0.85–1.0) as the initial nose-tip candidate regions. Compute the centroid of the face point cloud and, among the candidate regions, select the connected region closest to the centroid as the nose tip region; take the centroid of the nose tip region as the nose tip.
With the nose tip as center, form a sphere of radius 90 mm; discard the points falling outside the sphere and keep the points inside as the face region for subsequent processing.
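The shape index of step 1.1 can be sketched from the principal curvatures as below. Curvature sign conventions vary; this sketch assumes the convention in which a convex cap has κ1 = κ2 < 0, so that cap-like (nose-tip) neighborhoods score close to 1, as the candidate range (0.85–1.0) requires.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index SI = 1/2 - (1/pi) * arctan((k1 + k2) / (k1 - k2)),
    with k1 >= k2 the maximum and minimum principal curvatures.

    arctan2 is used so the k1 == k2 (umbilic) case is handled without a
    division by zero: a convex cap (k1 = k2 < 0 under the assumed
    convention) maps to SI = 1, a saddle (k1 = -k2) to SI = 0.5.
    """
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```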
Step 1.2 Face surface smoothing
Pose-correct the cropped face point cloud with principal component analysis (PCA): PCA yields three mutually perpendicular principal axes; with the nose tip as origin, take the eigenvector corresponding to the largest eigenvalue as the Y axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, establishing a right-handed coordinate system used as the spatial 3D coordinate system, in which every point of the face point cloud is uniquely represented by its x, y, z coordinates.
Project the face point cloud in the spatial 3D coordinate system onto the XOY plane, grid the projected points in 2D, and reconstruct the surface with a 2.5D meshing algorithm for point clouds, obtaining a spatial triangular mesh V that approximately represents the facial surface. The vertices of V — the points of the face point cloud in the 3D coordinate system — are denoted v_i, i = 1, 2, …, μ, where μ is the number of vertices of V. Let W ∈ R^{μ×μ}, where R^{μ×μ} denotes the space of μ × μ real matrices: W(i, j) = 0 when there is no edge between points v_i and v_j, and W(i, j) = w_ij > 0 when there is. w_ij is the cotangent weight associated with the edge between v_i and v_j: w_ij = cot(θ_ij) + cot(ζ_ij), where θ_ij and ζ_ij are the two angles opposite edge (i, j) in the two adjacent triangular facets containing it. Build the local smoothing operator T = D⁻¹·W, where D = diag(d_1, …, d_μ) with d_i = Σ_j W(i, j), and B is the set of all boundary points of the spatial triangular mesh V, which are left fixed. Apply T iteratively to the spatial mesh V to obtain the surface-smoothed 3D face mesh V′:

V′ = T·V.

Step 1.3 Cropping the upper half face
Discard the points of the 3D face mesh V′ lying below the plane y = −10, keeping the upper half face, which is less affected by expression.
Step 1.4 Point cloud thinning
The points of the upper half face are thinned by spatial sampling. This data-thinning method is simple and effective: it reduces the number of points without distortion and yields points that are fairly uniform in space. When applying spatial sampling, the invention takes the spatial interval σ as 1 mm. The specific thinning method is as follows:
Find the σ-neighborhood of every point of the upper half face to be thinned, i.e. the set of points at distance less than σ from it, and give every point a flag, initialized to T. Starting from the first point, first check the point's own flag: if it is F, move on to the next point; if it is T, check the flag of every point in its σ-neighborhood and set the neighbors whose flag is T to F. Finally delete all points whose flag is F, obtaining the thinned upper-half-face model.
Triangulate the thinned upper-half-face model again, producing η triangular facets. Compute and store the edge lengths γ_i1, γ_i2, γ_i3, i = 1, 2, …, η of every spatial triangular facet of the resulting 3D face mesh; let γ̄ be the mean of all facet edge lengths; if a facet has an edge longer than 4γ̄, discard the facet and keep its vertices.
At this point, preprocessing has converted the test model and the gallery models into 3D face meshes of equal smoothness and equal density.
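The flag-based spatial thinning of step 1.4 can be sketched as follows. A brute-force distance computation stands in for the neighborhood search (a k-d tree would be used in practice); the scan order follows the step literally.

```python
import numpy as np

def thin_points(points, sigma=1.0):
    """Flag-based spatial thinning of step 1.4: every point starts with
    flag T; scanning in order, a point whose flag is still T sets the
    flag of every other T point within distance sigma to F; all F
    points are finally deleted."""
    n = len(points)
    keep = np.ones(n, dtype=bool)          # True corresponds to flag T
    for i in range(n):
        if not keep[i]:                    # flag already F: skip
            continue
        d = np.linalg.norm(points - points[i], axis=1)
        close = d < sigma
        close[i] = False                   # the point never unflags itself
        keep[close & keep] = False         # neighbors with flag T -> F
    return points[keep]
```

No two surviving points are closer than σ, which gives the spatially uniform 1 mm sampling the step describes.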
Step 2.1 Map the point cloud coordinate information of the test model face and of each gallery model face onto the plane, forming the geometric images of the test model and the gallery models respectively; the geometric image is obtained as follows:
Step 2.1.1 Mesh parameterization
Map the boundary points of the preprocessed 3D face mesh onto the four sides of a 512 × 512-pixel square in the plane, and map the non-boundary points of the 3D face mesh into the square region by mesh parameterization, obtaining a planar mesh. Take any vertex of the square as the origin and the directions of the two sides meeting at the origin as positive directions, establishing a counterclockwise coordinate system MON in which any point of the plane is uniquely represented by its m, n coordinates. On the four sides of the square, sample b points uniformly counterclockwise from the origin, with coordinates (m_t^0, n_t^0), t = 1, 2, …, b, where b is the number of boundary points of the 3D face mesh. Denote by f_q, q = 1, 2, …, r the vertices of the 3D face mesh, r being the number of vertices, and by (m_q, n_q) the coordinates of the corresponding point in the square region; m_q and n_q are the solution of the linear system

L·m_q = L·n_q = 0,  for all f_q ∉ B,
(m_q, n_q) = (m_t^0, n_t^0),  for all f_q ∈ B,

where L is the Laplacian matrix of the 3D face mesh and B is the set of its boundary points.
Step 2.1.2 Generating the geometric image
Attach the 3D coordinates of each face mesh vertex f_q = (x_q, y_q, z_q) to the corresponding point (m_q, n_q) as the attributes of that point, then determine the attributes of every pixel inside the square region by linear interpolation, obtaining a 2D image whose pixels carry 3D coordinate attributes, called the geometric image G.
Step 2.2 Filter the geometric images G of the test model and of the gallery models to obtain their intermediate frequency information; the geometric image is filtered as follows:
Step 2.2.1 Multi-scale Haar wavelet filtering of the geometric image G
Step 2.2.1.1 Partition the 512 × 512-pixel geometric image G into blocks of 2 × 2 pixels and filter each block with the Haar transform matrix H = (1/√2)·[1 1; 1 −1]. Let

A = [ a_11 a_12 ; a_21 a_22 ]

be a block of G, with elements a_11, a_12, a_21 and a_22. Haar filtering of A consists of a row transform followed by a column transform:

Ā = Hᵀ·A·H = [ l h ; v d ],

where Ā is the filtered block, l is the low-frequency approximation coefficient of block A, h its horizontal high-frequency component, v its vertical high-frequency component and d its diagonal high-frequency component.
After all blocks of G have been Haar-filtered, the low-frequency approximation coefficients of all blocks, arranged in block order, form the low-frequency coefficient set LL1; the horizontal high-frequency components, in block order, form the horizontal high-frequency set HL1; the vertical components form the vertical high-frequency set LH1; the diagonal components form the diagonal high-frequency set HH1.
Step 2.2.1.2 Filter the low-frequency coefficient set again with the Haar wavelet, outputting the second-pass low-frequency set and horizontal, vertical, diagonal high-frequency sets, denoted LL2, HL2, LH2 and HH2. Repeat the filtering 5 times in all, each pass taking the low-frequency set output by the previous pass as input and outputting new low-frequency and horizontal, vertical, diagonal high-frequency sets.
Step 2.2.2 Extracting the intermediate frequency information images
Extract and store the horizontal high-frequency set HL5, the vertical high-frequency set LH5 and the diagonal high-frequency set HH5 output by the last filtering pass; with the elements of HL5, LH5 and HH5 as pixel attributes, they form three 16 × 16-pixel images, called the horizontal, vertical and diagonal intermediate frequency information images.
The 3D face mesh has thus been converted into horizontal, vertical and diagonal IF information images, and the matching of the test model against a gallery model into the matching of the corresponding IF information images.
Step 3 Compute the similarity between the test model and each gallery model with the wavelet-domain structural similarity algorithm, as follows:
Step 3.1 Compute the similarity S_HL between the horizontal IF information image of the test model and that of the gallery model, the similarity S_LH between the vertical IF information images, and the similarity S_HH between the diagonal IF information images; the sum of S_HL, S_LH and S_HH is taken as the similarity between the test model and the gallery model. S_HL, S_LH and S_HH are each obtained by applying the wavelet-domain structural similarity algorithm to the pair of horizontal, vertical or diagonal IF information images to be matched, the algorithm being:
Step 3.1.1 Using the three attributes x, y, z of each pixel of the horizontal, vertical and diagonal IF information images, arrange the x attributes of all pixels of each image in pixel order to form that image's x channel; in the same way form the y and z channels of the horizontal, vertical and diagonal IF information images, written

C_t = [ c_1,1 c_1,2 … c_1,16 ; c_2,1 … c_2,16 ; … ; c_16,1 … c_16,16 ],

where t is x, y or z, denoting the x, y or z channel; c_1,1 is the element in row 1, column 1 of C_t, c_1,2 the element in row 1, column 2, …, c_2,1 the element in row 2, column 1, …, and c_16,16 the element in row 16, column 16. Each of the horizontal, vertical or diagonal IF information images is referred to as an IF information image. Compute the similarity S_x of the x channels of the two IF information images to be matched, the similarity S_y of the y channels and the similarity S_z of the z channels, and take S_x + S_y + S_z as the similarity S_HL, S_LH or S_HH of the corresponding pair of IF information images. S_x, S_y and S_z are obtained as follows:
Let C_p denote the x, y or z channel of an IF information image of the test model and C_g the same channel of the corresponding IF information image of the gallery model, where p indicates the test model and g the gallery model; let α and β index the rows and columns of the elements of C_p and C_g; let C_p(α, β) denote the 3 × 3 pixel neighborhood in C_p whose central element is c^p_α,β, and C_g(α, β) the 3 × 3 pixel neighborhood in C_g whose central element is c^g_α,β. The structural similarity of C_p(α, β) and C_g(α, β) is

s(α, β) = ( 2·| Σ_ε1,ε2 c^p_ε1,ε2 · (c^g_ε1,ε2)* | + 0.1 ) / ( Σ_ε1,ε2 |c^p_ε1,ε2|² + Σ_ε1,ε2 |c^g_ε1,ε2|² + 0.1 ),

where ε1 and ε2 run over the row and column subscripts of the elements of C_p(α, β) and C_g(α, β), and (c^g_ε1,ε2)* is the conjugate of c^g_ε1,ε2.
Letting α = 2, 3, …, 15 and β = 2, 3, …, 15, take the mean of s(α, β) as the structural similarity of C_p and C_g:

S = (1/196) · Σ_α=2..15 Σ_β=2..15 s(α, β).

Step 4 3D face identification
Repeat steps 1–3 to obtain the similarity between the test model and each gallery model; compare these similarities and judge the gallery model with the greatest similarity and the test model to be the same individual.
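The identification of step 4 reduces to an argmax over gallery similarities, which can be sketched generically as below; the similarity function is assumed to be the summed wavelet-domain structural similarity of the three IF information image pairs.

```python
def identify(test_feats, gallery_feats, similarity):
    """Step 4: score the test model against every gallery model and
    return the gallery identity with the greatest total similarity.

    test_feats:    the test model's features (e.g. its HL5, LH5, HH5 images)
    gallery_feats: mapping {identity: gallery model features}
    similarity:    function scoring two feature sets (higher = closer)
    """
    scores = {gid: similarity(test_feats, g) for gid, g in gallery_feats.items()}
    best = max(scores, key=scores.get)     # gallery model with max similarity
    return best, scores
```

Usage with a toy scalar similarity: `identify(4.8, {"A": 1.0, "B": 5.0}, lambda a, b: -abs(a - b))` picks identity `"B"`.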

Claims

Claims
1. A 3D face recognition method based on intermediate frequency information in geometric images, comprising the following steps:
Step 1 Preprocess the test model and the gallery models respectively, the preprocessing being:
Step 1.1 Face cropping
Determine the nose tip position from the Shape Index feature of the face point cloud and geometric constraints; with that point as center, form a sphere of radius 90 mm, discard the points falling outside the sphere, and keep the points inside the sphere as the face region for subsequent processing;
Step 1.2 Face surface smoothing
Pose-correct the cropped face point cloud with principal component analysis (PCA), obtaining three mutually perpendicular principal axes; with the nose tip as origin, take the eigenvector corresponding to the largest eigenvalue as the Y axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, establishing a right-handed coordinate system used as the spatial 3D coordinate system, in which every point of the face point cloud is uniquely represented by its x, y, z coordinates;
Triangulate the face point cloud in this coordinate system to obtain a spatial triangular mesh, then smooth and denoise the face region with a mesh-based smoothing algorithm; after 10 iterations, a surface-smoothed 3D face mesh is obtained;
Step 1.3 Cropping the upper half face
Discard the points of the 3D face mesh lying below the plane y = −10, keeping the upper half face, which is less affected by expression;
Step 1.4 Point cloud thinning
Sample the face point cloud uniformly by spatial distance at an interval of 1 mm to obtain a thinned point cloud; triangulate the thinned cloud; compute and store the edge lengths γ_i1, γ_i2, γ_i3, i = 1, 2, …, η of every spatial triangular facet of the resulting 3D face mesh, where η is the number of facets in the mesh; let γ̄ be the mean of all facet edge lengths; if a facet has an edge longer than 4γ̄, discard the facet and keep its vertices;
Step 2.1 Map the point cloud coordinate information of the test model face and of each gallery model face onto the plane, forming the geometric images of the test model and the gallery models respectively; the geometric image is obtained as follows:
Step 2.1.1 Mesh parameterization
Map the boundary points of the preprocessed 3D face mesh onto the four sides of a 512 × 512-pixel square in the plane, and map the non-boundary points of the 3D face mesh into the square region by mesh parameterization, obtaining a planar mesh. Take any vertex of the square as the origin and the directions of the two sides meeting at the origin as positive directions, establishing a counterclockwise coordinate system MON in which any point of the plane is uniquely represented by its m, n coordinates. On the four sides of the square, sample b points uniformly counterclockwise from the origin, with coordinates (m_t^0, n_t^0), t = 1, 2, …, b, where b is the number of boundary points of the 3D face mesh;
Denote by f_q, q = 1, 2, …, r the vertices of the 3D face mesh, r being the number of vertices, and by (m_q, n_q) the coordinates of the corresponding point in the square region; m_q and n_q are the solution of the linear system

L·m_q = L·n_q = 0,  for all f_q ∉ B,
(m_q, n_q) = (m_t^0, n_t^0),  for all f_q ∈ B,

where L is the Laplacian matrix of the 3D face mesh and B is the set of its boundary points;
Step 2.1.2 Generating the geometric image
Attach the 3D coordinates of each face mesh vertex f_q = (x_q, y_q, z_q) to the corresponding point (m_q, n_q) as the attributes of that point, then determine the attributes of every pixel inside the square region by linear interpolation, obtaining a 2D image whose pixels carry 3D coordinate attributes, called the geometric image G;
Step 2.2 Filter the geometric images G of the test model and of the gallery models to obtain their intermediate frequency information; the geometric image is filtered as follows:
Step 2.2.1 Multi-scale Haar wavelet filtering of the geometric image G
Step 2.2.1.1 Using the Haar transform matrix H = (1/√2)·[1 1; 1 −1], apply a row transform and then a column transform to the geometric image G, obtaining a set of low-frequency coefficients and sets of horizontal, vertical and diagonal high-frequency coefficients; denote the low-frequency set by LL1 and the horizontal, vertical and diagonal high-frequency sets by HL1, LH1 and HH1 respectively;
Step 2.2.1.2 Following step 2.2.1.1, filter the low-frequency coefficient set again with the Haar wavelet, outputting the second-pass low-frequency set and horizontal, vertical, diagonal high-frequency sets, denoted LL2, HL2, LH2 and HH2; repeat the filtering 5 times in all, each pass taking the low-frequency set produced by the previous pass as input and outputting new low-frequency and horizontal, vertical, diagonal high-frequency sets;
Step 2.2.2 Extracting the intermediate frequency information images
Extract and store the horizontal high-frequency set HL5, the vertical high-frequency set LH5 and the diagonal high-frequency set HH5 output by the last filtering pass; with the elements of HL5, LH5 and HH5 as pixel attributes, they form three 16 × 16-pixel images, called the horizontal, vertical and diagonal intermediate frequency information images;
Step 3 Compute the similarity between the test model and each gallery model with the wavelet-domain structural similarity algorithm, as follows:
Step 3.1 Compute the similarity S_HL between the horizontal IF information image of the test model and that of the gallery model, the similarity S_LH between the vertical IF information images, and the similarity S_HH between the diagonal IF information images; the sum of S_HL, S_LH and S_HH is taken as the similarity between the test model and the gallery model. S_HL, S_LH and S_HH are each obtained by applying the wavelet-domain structural similarity algorithm to the pair of horizontal, vertical or diagonal IF information images to be matched, the algorithm being:
Step 3.1.1 Using the three attributes x, y, z of each pixel of the horizontal, vertical and diagonal IF information images, arrange the x attributes of all pixels of each image in pixel order to form that image's x channel; in the same way form the y and z channels of the horizontal, vertical and diagonal IF information images, written

C_t = [ c_1,1 c_1,2 … c_1,16 ; c_2,1 … c_2,16 ; … ; c_16,1 … c_16,16 ],

where t is x, y or z, denoting the x, y or z channel; c_1,1 is the element in row 1, column 1 of C_t, c_1,2 the element in row 1, column 2, …, c_2,1 the element in row 2, column 1, …, and c_16,16 the element in row 16, column 16. Each of the horizontal, vertical or diagonal IF information images is referred to as an IF information image. Compute the similarity S_x of the x channels of the two IF information images to be matched, the similarity S_y of the y channels and the similarity S_z of the z channels, and take S_x + S_y + S_z as the similarity S_HL, S_LH or S_HH of the corresponding pair of IF information images. S_x, S_y and S_z are obtained as follows:
Let C_p denote the x, y or z channel of an IF information image of the test model and C_g the same channel of the corresponding IF information image of the gallery model, where p indicates the test model and g the gallery model; let α and β index the rows and columns of the elements of C_p and C_g; let C_p(α, β) denote the 3 × 3 pixel neighborhood in C_p whose central element is c^p_α,β, and C_g(α, β) the 3 × 3 pixel neighborhood in C_g whose central element is c^g_α,β. The structural similarity of C_p(α, β) and C_g(α, β) is

s(α, β) = ( 2·| Σ_ε1,ε2 c^p_ε1,ε2 · (c^g_ε1,ε2)* | + 0.1 ) / ( Σ_ε1,ε2 |c^p_ε1,ε2|² + Σ_ε1,ε2 |c^g_ε1,ε2|² + 0.1 ),

where ε1 and ε2 run over the row and column subscripts of the elements of C_p(α, β) and C_g(α, β), and (c^g_ε1,ε2)* is the conjugate of c^g_ε1,ε2;
Letting α = 2, 3, …, 15 and β = 2, 3, …, 15, take the mean of s(α, β) as the structural similarity of C_p and C_g:

S = (1/196) · Σ_α=2..15 Σ_β=2..15 s(α, β);

Step 4 3D face identification
Repeat steps 1–3 to obtain the similarity between the test model and each gallery model; compare these similarities and judge the gallery model with the greatest similarity and the test model to be the same individual.
PCT/CN2012/071728 2011-12-21 2012-02-28 基于几何图像中中频信息的三维人脸识别方法 WO2013091304A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020137001007A KR101314131B1 (ko) 2011-12-21 2012-02-28 기하학적 이미지 중의 중파 정보에 기반한 삼차원 얼굴 식별방법
US14/364,280 US9117105B2 (en) 2011-12-21 2012-02-28 3D face recognition method based on intermediate frequency information in geometric image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110431073.2 2011-12-21
CN2011104310732A CN102592136B (zh) 2011-12-21 2011-12-21 基于几何图像中中频信息的三维人脸识别方法

Publications (1)

Publication Number Publication Date
WO2013091304A1 true WO2013091304A1 (zh) 2013-06-27

Family

ID=46480746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/071728 WO2013091304A1 (zh) 2011-12-21 2012-02-28 基于几何图像中中频信息的三维人脸识别方法

Country Status (4)

Country Link
US (1) US9117105B2 (zh)
KR (1) KR101314131B1 (zh)
CN (1) CN102592136B (zh)
WO (1) WO2013091304A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (zh) * 2018-03-28 2018-09-11 电子科技大学 基于多尺度联合特征编码器的单样本人脸识别方法
CN113362465A (zh) * 2021-06-04 2021-09-07 中南大学 非刚性三维形状逐点对应方法及人体心脏运动仿真方法

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160027172A1 (en) 2012-04-04 2016-01-28 James G. Spahn Method of Monitoring the Status of a Wound
KR102094723B1 (ko) * 2012-07-17 2020-04-14 삼성전자주식회사 견고한 얼굴 표정 인식을 위한 특징 기술자
US8437513B1 (en) * 2012-08-10 2013-05-07 EyeVerify LLC Spoof detection for biometric authentication
CN102902967B (zh) * 2012-10-16 2015-03-11 第三眼(天津)生物识别科技有限公司 基于人眼结构分类的虹膜和瞳孔的定位方法
WO2014126213A1 (ja) * 2013-02-15 2014-08-21 Necソリューションイノベータ株式会社 類似判断の候補配列情報の選択装置、選択方法、およびそれらの用途
CN103489011A (zh) * 2013-09-16 2014-01-01 广东工业大学 一种具有拓扑鲁棒性的三维人脸识别方法
EP3084682B1 (en) * 2013-12-19 2019-07-24 Avigilon Fortress Corporation System and method for identifying faces in unconstrained media
RU2014111792A (ru) * 2014-03-27 2015-10-10 ЭлЭсАй Корпорейшн Процессор изображений, содержащий систему распознавания лиц на основании преобразования двухмерной решетки
CN104112115A (zh) * 2014-05-14 2014-10-22 南京国安光电科技有限公司 一种三维人脸检测与识别技术
CN104474709A (zh) * 2014-11-24 2015-04-01 苏州福丰科技有限公司 一种基于三维人脸识别的游戏方法
CN104504410A (zh) * 2015-01-07 2015-04-08 深圳市唯特视科技有限公司 基于三维点云的三维人脸识别装置和方法
CN104573722A (zh) * 2015-01-07 2015-04-29 深圳市唯特视科技有限公司 基于三维点云的三维人脸种族分类装置和方法
CN106485186B (zh) * 2015-08-26 2020-02-18 阿里巴巴集团控股有限公司 图像特征提取方法、装置、终端设备及系统
CN106447624A (zh) * 2016-08-31 2017-02-22 上海交通大学 一种基于l0范数的三维网格去噪方法
CN107958489B (zh) * 2016-10-17 2021-04-02 杭州海康威视数字技术股份有限公司 一种曲面重建方法及装置
KR20180065135A (ko) * 2016-12-07 2018-06-18 삼성전자주식회사 셀프 구조 분석을 이용한 구조 잡음 감소 방법 및 장치
US10860841B2 (en) 2016-12-29 2020-12-08 Samsung Electronics Co., Ltd. Facial expression image processing method and apparatus
CN107358655B (zh) * 2017-07-27 2020-09-22 秦皇岛燕大燕软信息系统有限公司 基于离散平稳小波变换的半球面和圆锥面模型的辨识方法
US10861196B2 (en) 2017-09-14 2020-12-08 Apple Inc. Point cloud compression
US11818401B2 (en) 2017-09-14 2023-11-14 Apple Inc. Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables
US10909725B2 (en) 2017-09-18 2021-02-02 Apple Inc. Point cloud compression
US11113845B2 (en) 2017-09-18 2021-09-07 Apple Inc. Point cloud compression using non-cubic projections and masks
CN107748871B (zh) * 2017-10-27 2021-04-06 东南大学 一种基于多尺度协方差描述子与局部敏感黎曼核稀疏分类的三维人脸识别方法
KR101958194B1 (ko) * 2017-11-30 2019-03-15 인천대학교 산학협력단 변환 도메인의 대조비 개선 장치 및 방법
CN108734772A (zh) * 2018-05-18 2018-11-02 宁波古德软件技术有限公司 基于Kinect fusion的高精度深度图像获取方法
US11017566B1 (en) 2018-07-02 2021-05-25 Apple Inc. Point cloud compression with adaptive filtering
CN109145716B (zh) * 2018-07-03 2019-04-16 南京思想机器信息科技有限公司 基于脸部识别的登机口检验平台
US11202098B2 (en) 2018-07-05 2021-12-14 Apple Inc. Point cloud compression with multi-resolution video encoding
US11367224B2 (en) 2018-10-02 2022-06-21 Apple Inc. Occupancy map block-to-patch information compression
CN109379511B (zh) * 2018-12-10 2020-06-23 盎锐(上海)信息科技有限公司 3d数据安全加密算法及装置
CN109978989B (zh) * 2019-02-26 2023-08-01 腾讯科技(深圳)有限公司 三维人脸模型生成方法、装置、计算机设备及存储介质
CN109919876B (zh) * 2019-03-11 2020-09-01 四川川大智胜软件股份有限公司 一种三维真脸建模方法及三维真脸照相系统
US11030801B2 (en) * 2019-05-17 2021-06-08 Standard Cyborg, Inc. Three-dimensional modeling toolkit
US11074753B2 (en) * 2019-06-02 2021-07-27 Apple Inc. Multi-pass object rendering using a three- dimensional geometric constraint
US11711544B2 (en) 2019-07-02 2023-07-25 Apple Inc. Point cloud compression with supplemental information messages
US10853631B2 (en) 2019-07-24 2020-12-01 Advanced New Technologies Co., Ltd. Face verification method and apparatus, server and readable storage medium
CN110675413B (zh) * 2019-09-27 2020-11-13 腾讯科技(深圳)有限公司 三维人脸模型构建方法、装置、计算机设备及存储介质
US11409998B2 (en) * 2019-10-02 2022-08-09 Apple Inc. Trimming search space for nearest neighbor determinations in point cloud compression
US11895307B2 (en) 2019-10-04 2024-02-06 Apple Inc. Block-based predictive coding for point cloud compression
CN111027515A (zh) * 2019-12-24 2020-04-17 神思电子技术股份有限公司 一种人脸库照片更新方法
CN111079701B (zh) * 2019-12-30 2023-03-24 陕西西图数联科技有限公司 一种基于图像质量的人脸防伪方法
US11593967B2 (en) 2020-01-08 2023-02-28 Samsung Electronics Co., Ltd. Attribute transfer in V-PCC
US11798196B2 (en) 2020-01-08 2023-10-24 Apple Inc. Video-based point cloud compression with predicted patches
CN111402401B (zh) * 2020-03-13 2023-08-18 北京华捷艾米科技有限公司 一种获取3d人脸数据方法、人脸识别方法及装置
CN111640055B (zh) * 2020-05-22 2023-04-11 构范(厦门)信息技术有限公司 一种二维人脸图片变形方法及系统
CN112418030B (zh) * 2020-11-11 2022-05-13 中国标准化研究院 一种基于三维点云坐标的头面部号型分类方法
CN112785615A (zh) * 2020-12-04 2021-05-11 浙江工业大学 一种基于扩展二维经验小波变换的工程表面多尺度滤波方法
CN112686230B (zh) * 2021-03-12 2021-06-22 腾讯科技(深圳)有限公司 对象识别方法、装置、设备以及存储介质
CN113111548B (zh) * 2021-03-27 2023-07-21 西北工业大学 一种基于周角差值的产品三维特征点提取方法
US11948338B1 (en) 2021-03-29 2024-04-02 Apple Inc. 3D volumetric content encoding using 2D videos and simplified 3D meshes
CN113409377B (zh) * 2021-06-23 2022-09-27 四川大学 一种基于跳跃连接式生成对抗网络的相位展开方法
CN113610039B (zh) * 2021-08-17 2024-03-15 北京融合汇控科技有限公司 基于云台相机的风漂异物识别方法
CN113902781A (zh) * 2021-10-18 2022-01-07 深圳追一科技有限公司 三维人脸重建方法、装置、设备及介质
CN114296050B (zh) * 2022-03-07 2022-06-07 南京鼐云信息技术有限责任公司 基于激光雷达云图探测的光伏电站短期发电功率预测方法
CN114821720A (zh) * 2022-04-25 2022-07-29 广州瀚信通信科技股份有限公司 人脸检测方法、装置、系统、设备及存储介质
CN116226426B (zh) * 2023-05-09 2023-07-11 深圳开鸿数字产业发展有限公司 基于形状的三维模型检索方法、计算机设备和存储介质
CN117974817B (zh) * 2024-04-02 2024-06-21 江苏狄诺尼信息技术有限责任公司 基于图像编码的三维模型纹理数据高效压缩方法及系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159627A1 (en) * 2001-02-28 2002-10-31 Henry Schneiderman Object finder for photographic images
US20090310828A1 (en) * 2007-10-12 2009-12-17 The University Of Houston System An automated method for human face modeling and relighting with application to face recognition
CN101986328A (zh) * 2010-12-06 2011-03-16 东南大学 一种基于局部描述符的三维人脸识别方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60216411T2 (de) 2001-08-23 2007-10-04 Sony Corp. Robotervorrichtung, gesichtserkennungsverfahren und gesichtserkennungsvorrichtung
KR100608595B1 (ko) 2004-11-16 2006-08-03 삼성전자주식회사 얼굴 인식 방법 및 장치
KR100723417B1 (ko) 2005-12-23 2007-05-30 삼성전자주식회사 얼굴 인식 방법, 그 장치, 이를 위한 얼굴 영상에서 특징추출 방법 및 그 장치
CN100409249C (zh) * 2006-08-10 2008-08-06 中山大学 一种基于网格的三维人脸识别方法
CN101261677B (zh) * 2007-10-18 2012-10-24 周春光 人脸的特征提取方法
CN101650777B (zh) * 2009-09-07 2012-04-11 东南大学 一种基于密集点对应的快速三维人脸识别方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159627A1 (en) * 2001-02-28 2002-10-31 Henry Schneiderman Object finder for photographic images
US20090310828A1 (en) * 2007-10-12 2009-12-17 The University Of Houston System An automated method for human face modeling and relighting with application to face recognition
CN101986328A (zh) * 2010-12-06 2011-03-16 东南大学 一种基于局部描述符的三维人脸识别方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAI, LIANG ET AL.: "Three dimensions face recognition by using shape filtering and geometry image", JOURNAL OF IMAGE AND GRAPHICS, vol. 16, no. 7, July 2011 (2011-07-01), pages 1303 - 1309 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (zh) * 2018-03-28 2018-09-11 电子科技大学 基于多尺度联合特征编码器的单样本人脸识别方法
CN113362465A (zh) * 2021-06-04 2021-09-07 中南大学 非刚性三维形状逐点对应方法及人体心脏运动仿真方法
CN113362465B (zh) * 2021-06-04 2022-07-15 中南大学 非刚性三维形状逐点对应方法及人体心脏运动仿真方法

Also Published As

Publication number Publication date
CN102592136A (zh) 2012-07-18
KR20130084654A (ko) 2013-07-25
US9117105B2 (en) 2015-08-25
US20140355843A1 (en) 2014-12-04
CN102592136B (zh) 2013-10-16
KR101314131B1 (ko) 2013-10-04

Similar Documents

Publication Publication Date Title
WO2013091304A1 (zh) 基于几何图像中中频信息的三维人脸识别方法
CN110348330B (zh) 基于vae-acgan的人脸姿态虚拟视图生成方法
US10083366B2 (en) Edge-based recognition, systems and methods
US7512255B2 (en) Multi-modal face recognition
JP4445864B2 (ja) 三次元顔認識
JP2021507394A (ja) 多特徴検索と変形に基づく人体髪型の生成方法
WO2022041627A1 (zh) 一种活体人脸检测方法及系统
WO2017059591A1 (zh) 手指静脉识别方法及装置
CN111091075B (zh) 人脸识别方法、装置、电子设备及存储介质
WO2015067084A1 (zh) 人眼定位方法和装置
WO2012126135A1 (en) Method of augmented makeover with 3d face modeling and landmark alignment
CN103971122B (zh) 基于深度图像的三维人脸描述方法
CN109766866B (zh) 一种基于三维重建的人脸特征点实时检测方法和检测系统
CN102779269A (zh) 基于图像传感器成像系统的人脸识别算法
WO2018133119A1 (zh) 基于深度相机进行室内完整场景三维重建的方法及系统
JP5018029B2 (ja) 認証システム及び認証方法
CN106415606B (zh) 一种基于边缘的识别、系统和方法
Bastias et al. A method for 3D iris reconstruction from multiple 2D near-infrared images
CN109074471B (zh) 一种基于主动外观模型的虹膜区域分割方法及装置
CN108090460B (zh) 基于韦伯多方向描述子的人脸表情识别特征提取方法
Stylianou et al. Image based 3d face reconstruction: a survey
Kong et al. Effective 3d face depth estimation from a single 2d face image
Cheng et al. Tree skeleton extraction from a single range image
CN111753652B (zh) 一种基于数据增强的三维人脸识别方法
Ramadan et al. 3D Face compression and recognition using spherical wavelet parametrization

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20137001007

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12858733

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14364280

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12858733

Country of ref document: EP

Kind code of ref document: A1