WO2013091304A1 - Three-dimensional face recognition method based on intermediate frequency information in geometric images - Google Patents
Three-dimensional face recognition method based on intermediate frequency information in geometric images
- Publication number
- WO2013091304A1 (PCT/CN2012/071728, CN2012071728W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- intermediate frequency
- face
- frequency information
- information image
- model
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- the invention relates to a three-dimensional face recognition method based on intermediate frequency information in a geometric image: any preprocessed three-dimensional face model is mesh-parameterized and linearly interpolated to obtain a geometric image, and a multi-scale Haar wavelet filter is applied to the geometric image.
- the intermediate frequency (IF) information image, which carries identity discrimination, is extracted from the geometric image as the expression-invariant feature of the 3D face model.
- the similarity between the intermediate frequency information images of the test model and the library set model is calculated by the wavelet domain structured similarity algorithm to determine the identity of the test model.
- the intermediate frequency information image of the three-dimensional face model proposed by the invention has good identity representation, and effectively reduces the influence of expression changes on three-dimensional face recognition.
- the wavelet domain structured similarity algorithm accurately calculates the structural-information similarity of the intermediate frequency information images of the test model and the library set model, and significantly improves the recognition rate of the 3D face recognition method.
- Biometrics has important applications in the security field. Compared with other biometric technologies such as fingerprint and iris recognition, automatic face recognition is increasingly adopted for its non-contact acquisition, high user acceptability, and good concealment, and it has huge room for development.
- two-dimensional face recognition is limited by factors such as illumination, pose, and makeup, while 3D face recognition technology can overcome or mitigate the adverse effects of these factors.
- the 3D face model has more information than the 2D image, which is a more accurate description of the true form of the face.
- the 3D face model has a large amount of data, many interference areas, and a large amount of computation, and the non-rigid deformation caused by facial expression affects the performance of the 3D face recognition method based on geometric information. Therefore, how to reduce the amount of calculation and reduce the influence of facial expression becomes the bottleneck of 3D face recognition technology, which is also a key issue for research.
- the invention provides a three-dimensional face recognition method based on intermediate frequency information in a geometric image, which can improve the recognition rate.
- the invention adopts the following technical solutions:
- a three-dimensional face recognition method based on intermediate frequency information in geometric images, characterized in that multi-scale Haar wavelet filtering is performed on the geometric images of the test model and the library set model to obtain their horizontal, vertical, and diagonal intermediate frequency information images.
- the wavelet domain structured similarity algorithm calculates the similarity of each corresponding pair of intermediate frequency information images, and the three similarities are added as the total similarity of the test model and the library set model; finally, according to the similarities between the test face and each library set face in the 3D face database, the library set model with the largest similarity is determined as the recognition result.
- the processing includes a pre-processing step, an intermediate frequency information image extraction step, a wavelet domain structured similarity calculation step, and an identification step.
- Step 1 Preprocess the test model and the library set model respectively.
- the preprocessing is:
- the position of the tip of the nose is determined.
- with the nose tip as the center, a sphere of radius 90 mm is constructed; points falling outside the sphere are discarded, and points inside the sphere are retained for subsequent processing.
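The nose-tip-centered 90 mm cropping can be sketched in Python as follows (the `crop_face` helper and tuple point format are illustrative assumptions, not the patent's code):

```python
import math

def crop_face(points, nose_tip, radius=90.0):
    """Keep only points within `radius` mm of the nose tip."""
    kept = []
    for p in points:
        # Euclidean distance from the candidate point to the nose tip
        if math.dist(p, nose_tip) <= radius:
            kept.append(p)
    return kept

# Toy cloud: the point 100 mm from the nose tip is discarded
cloud = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
print(crop_face(cloud, (0.0, 0.0, 0.0)))
```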
- Step 1.2 Smoothing the surface of the face
- the pose of the cropped face point cloud is corrected by the Principal Component Analysis (PCA) method.
- Three principal axis directions are obtained by principal component analysis.
- with the nose tip as the origin, the eigenvector corresponding to the largest eigenvalue is selected as one coordinate axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, and a right-handed coordinate system is established as the spatial three-dimensional coordinate system.
- each point in the face point cloud is uniquely represented by its x, y, and z coordinates in this system.
- the point cloud of the face is uniformly sampled according to spatial distance with a sampling interval of 1 mm to obtain a diluted point cloud; the diluted point cloud is triangulated, and for each spatial triangular patch in the generated three-dimensional face mesh the side lengths l_i1, l_i2, l_i3, i = 1, 2, ..., n, are calculated and saved, where n is the number of triangular patches in the mesh.
- the average length of all triangle edges is denoted l̄; if a triangular patch has an edge longer than 4·l̄, the patch is discarded and its vertices are retained;
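The long-edge test (discarding triangles with an edge longer than four times the mean edge length) can be sketched as follows; `filter_long_triangles` and the toy mesh are illustrative, not the patent's code:

```python
import math

def edge(p, q):
    return math.dist(p, q)

def filter_long_triangles(vertices, triangles):
    """Discard triangles with any edge longer than 4x the mean edge length."""
    lengths = []
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        lengths += [edge(a, b), edge(b, c), edge(c, a)]
    mean = sum(lengths) / len(lengths)
    kept = []
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        if max(edge(a, b), edge(b, c), edge(c, a)) <= 4 * mean:
            kept.append((i, j, k))
    return kept

# Two unit triangles plus one spike triangle reaching a far-away vertex
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 0, 0)]
tris = [(0, 1, 2), (1, 3, 2), (0, 1, 4)]
print(filter_long_triangles(verts, tris))  # the spike triangle is dropped
```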
- Step 2.1 Map the point cloud coordinate information of the test model and the library set model face to the plane respectively, and form the geometric image of the test model and the library set model respectively.
- the method for obtaining the geometric image is as follows:
- Step 2.1.1 Grid parameterization: the boundary points of the preprocessed 3D face mesh are mapped to the four sides of a regular quadrilateral of 512 x 512 pixels on the plane, and the remaining points of the 3D face mesh are mapped into the interior of the quadrilateral by grid parameterization, giving a plane mesh.
- any vertex of the regular quadrilateral on the plane is taken as the origin, the directions of the two sides meeting at the origin are taken as the positive directions, and a counterclockwise coordinate system MON is established in which any point on the plane is uniquely represented by its m and n coordinates.
- on the four sides of the regular quadrilateral, b points are uniformly sampled counterclockwise from the origin as sampling points.
- Step 2.2 separately filter the geometric image G of the test model and the library set model to obtain the intermediate frequency information of the test model and the library set model.
- the filtering of the geometric image adopts the following methods:
- Step 2.2.1 Perform multi-scale Haar wavelet filtering on the geometric image G
- Step 2.2.1.1 Using the Haar transformation matrix H = (1/2) [1 1; 1 -1], the geometric image G is sequentially subjected to row transformation and column transformation to obtain a set of low-frequency coefficients and sets of horizontal, vertical, and diagonal high-frequency coefficients; the low-frequency set is denoted LL1, and the horizontal, vertical, and diagonal high-frequency sets are denoted HL1, LH1, and HH1.
- Step 2.2.1.2 Following the procedure of step 2.2.1.1, the low-frequency coefficient set is Haar-wavelet filtered again, and the low-frequency set and the horizontal, vertical, and diagonal high-frequency sets of this second filtering are denoted LL2, HL2, LH2, and HH2; the loop filtering is performed 5 times, each time taking the low-frequency coefficient set from the previous pass as input and outputting new low-frequency and horizontal, vertical, diagonal high-frequency coefficient sets;
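One level of the block-wise Haar filtering of step 2.2.1 can be sketched as below, assuming the normalized matrix H = (1/2)[[1, 1], [1, -1]]; the patent's exact scaling is not legible in this extraction:

```python
def haar_level(img):
    """One level of 2D Haar filtering on an even-sized grid.
    Each 2x2 block A is transformed as H*A*H with H = 0.5*[[1, 1], [1, -1]]."""
    n = len(img) // 2
    LL = [[0.0] * n for _ in range(n)]
    HL = [[0.0] * n for _ in range(n)]
    LH = [[0.0] * n for _ in range(n)]
    HH = [[0.0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            a, b = img[2*r][2*c], img[2*r][2*c+1]
            d, e = img[2*r+1][2*c], img[2*r+1][2*c+1]
            LL[r][c] = (a + b + d + e) / 4  # low-frequency approximation
            HL[r][c] = (a - b + d - e) / 4  # horizontal high frequency
            LH[r][c] = (a + b - d - e) / 4  # vertical high frequency
            HH[r][c] = (a - b - d + e) / 4  # diagonal high frequency
    return LL, HL, LH, HH

LL, HL, LH, HH = haar_level([[4.0, 2.0], [2.0, 0.0]])
print(LL, HL, LH, HH)  # [[2.0]] [[1.0]] [[1.0]] [[0.0]]
```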
- Step 2.2.2 Extracting the intermediate frequency information image
- Step 3 uses the wavelet domain structured similarity algorithm to calculate the similarity between the test model and the library set model. The calculation method is as follows:
- Step 3.1 Calculate the similarity S_HL between the horizontal intermediate frequency information images of the test model and the library set model, the similarity S_LH between their vertical intermediate frequency information images, and the similarity S_HH between their diagonal intermediate frequency information images.
- S_HL, S_LH, and S_HH are added as the similarity between the test model and the library set model; each of S_HL, S_LH, and S_HH is obtained by applying the wavelet domain structured similarity algorithm to the corresponding pair of intermediate frequency information images to be matched. The wavelet domain structured similarity algorithm is:
- Step 3.1.1 Each pixel of the horizontal, vertical, and diagonal intermediate frequency information images has three attributes x, y, z. For each image, the x attributes of all pixels, arranged in the order of the pixels to which they belong, constitute its x channel; the y and z channels are constructed in the same way.
- a channel is a 16 x 16 array: c_{1,1} is the element of the first row and first column, c_{1,2} of the first row and second column, ..., c_{2,1} of the second row and first column, ..., and c_{16,16} of the 16th row and 16th column; a horizontal, vertical, or diagonal intermediate frequency information image is referred to generically as an intermediate frequency information image.
- C denotes the x, y, or z channel of an intermediate frequency information image of the test model, and C^g denotes the same channel of the corresponding intermediate frequency information image of the library set model, where the superscript g indicates the library set model, and α and β denote the row and column indices of elements in the channels.
- a 3 x 3 pixel neighborhood is taken with the element c_{α,β} (and the corresponding c^g_{α,β}) as its central element, and the local structural similarity of c_{α,β} and c^g_{α,β} is computed over these neighborhoods.
- the invention proposes a three-dimensional face recognition method that extracts the intermediate frequency information image as the expression-invariant feature and calculates the similarity between the test model and the library set model by the wavelet domain structured similarity algorithm.
- the face information described by the 3D face model includes three parts: the overall information representing the appearance of the face contour, the detailed information representing the features of the face itself, and the noise information representing the fine texture of the face surface.
- when the expression changes, the contour appearance, i.e. the overall information, deforms greatly, while the features of the face itself, i.e. the detail information, remain essentially unchanged. The detail information of the 3D face is therefore chosen as the expression-invariant feature, and the matching of the test model and the library set model is transformed into a match between expression-invariant features.
- the present invention maps a three-dimensional face model to a geometric image, and then uses a multi-scale Haar wavelet filter to decompose the geometric image into sub-images containing information of different frequency bands; the information in the middle frequency band corresponds to the detail information.
- the sub-image containing the middle-frequency-band information is extracted as an expression-invariant feature of the three-dimensional face, and is called the intermediate frequency information image.
- the similarity of the intermediate frequency information images of the test model and the library set model is calculated by the wavelet domain structured similarity algorithm; the similarities between the test model and each library set model in the 3D face database are compared, and the library set model with the largest similarity is determined to belong to the same individual as the test model.
- the invention converts the shape information of the three-dimensional face to the frequency domain and extracts the intermediate frequency information image as an expression-invariant feature.
- the invention converts the face information into the frequency domain by the multi-scale Haar wavelet filter, and decomposes the frequency domain information into non-overlapping frequency bands: the low-frequency band corresponds to the overall information of the face, the middle-frequency band corresponds to the detail information of the face, and the high-frequency band corresponds to the noise information. Since the detail information of the face has high identity discrimination and is expression-invariant, the present invention extracts the sub-image containing the intermediate frequency information as the feature for three-dimensional face recognition, called the intermediate frequency information image.
- the multi-scale Haar wavelet filter can generate intermediate frequency information images in the horizontal, vertical, and diagonal directions: the horizontal intermediate frequency information image contains the edge information of the face in the horizontal direction, reflecting horizontal features such as the eyes and the mouth; the vertical intermediate frequency information image contains the edge information in the vertical direction, reflecting vertical features such as the nose; and the diagonal intermediate frequency information image retains the edge information in the diagonal directions.
- the horizontal, vertical and diagonal intermediate frequency information images are used together as the invariant features of the three-dimensional human face, which can comprehensively capture and represent the detailed information of the three-dimensional human face, and has strong expression robustness.
- the wavelet domain structural similarity algorithm is used to calculate the similarity.
- the wavelet domain structure similarity algorithm is a generalization and improvement of the structural similarity algorithm in the wavelet domain. This algorithm inherits the advantages of the structural similarity algorithm and is more suitable for the similarity calculation of the intermediate frequency information image in the wavelet domain.
- the structural similarity algorithm quantitatively calculates the structural information difference of the image to be matched according to the way the human visual system senses the image.
- the invention calculates the structural similarity of the horizontal, vertical and diagonal intermediate frequency information images of the test model and the library set model in the wavelet domain, and judges the identity of the test face according to the sum of the similarities of the intermediate frequency information images.
- the present invention calculates, pixel by pixel, the local structural similarity between each intermediate frequency information image of the test model and the corresponding intermediate frequency information image of the library set model, and finally uses the average of the local similarities as the similarity of the corresponding intermediate frequency information images.
- the wavelet domain structural similarity algorithm used in the present invention can obtain the recognition result conforming to the perception habit of the human visual system, and improves the recognition accuracy of the three-dimensional face recognition system to some extent.
- FIG. 1 is a flow chart of a three-dimensional face recognition method according to the present invention
- Figure 3 shows the smoothed upper half of the face
- Figure 4 is the upper half of the face after the point cloud is diluted.
- Figure 5 is a parametric grid
- Figure 6 is a grayscale image of a geometric image.
- Figure 7 is a schematic diagram of multi-scale wavelet filtering
- Figure 8 is a horizontal, vertical and diagonal intermediate frequency information image
- Figure 9 is a schematic diagram of the identification method
- Figure 10 is a color map of a geometric image
- the programming implementation tool uses Matlab R2009a.
- the experimental data comes from the FRGC v2.0 3D face database, which is collected by the University of Notre Dame.
- the database includes 4,007 3D face models of 466 people, mainly collected in the fall of 2003 and spring 2004.
- each person's first three-dimensional face is used as a library set model, and the rest are used as test models;
- FIG. 1 is a flow chart of a three-dimensional face recognition method according to the present invention.
- Figure 5 is a parametric grid.
- the pre-processed 3D face mesh is parameterized and mapped to a 2D mesh of 512 x 512 pixels on the plane.
- Figure 6 is the grayscale map of the geometric image: the three-dimensional coordinates of the face mesh vertices are attached to the corresponding vertices of the parametric mesh, and linear interpolation is then used to determine the attributes of each pixel in the regular quadrilateral region, yielding a two-dimensional image with three-dimensional coordinate attributes, i.e., the geometric image of the face; the figure shows the geometric image in the form of a grayscale image;
- Figure 7 is a schematic diagram of multi-scale Haar wavelet filtering.
- the geometric image is sequentially subjected to row transformation and column transformation using the Haar transformation matrix to obtain a set of low-frequency coefficients and sets of high-frequency coefficients in the horizontal, vertical, and diagonal directions.
- the set of low-frequency coefficients is again subjected to Haar wavelet filtering, outputting a new set of low-frequency coefficients and new horizontal, vertical, and diagonal high-frequency coefficient sets.
- Figure 8 is a horizontal, vertical, and diagonal intermediate frequency information image.
- FIG. 9 is a schematic diagram of the identification method: the similarity between the test model and each library set model is calculated, and the library set model with the largest similarity to the test model is judged to belong to the same individual;
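The identification rule of FIG. 9 amounts to picking the gallery entry with the largest total similarity S_HL + S_LH + S_HH; a minimal sketch (the `identify` helper and dictionary layout are illustrative assumptions):

```python
def identify(similarities):
    """Return the library set model id with the largest total similarity
    S = S_HL + S_LH + S_HH (per-band similarities supplied as a 3-tuple)."""
    best_id, best_s = None, float("-inf")
    for model_id, (s_hl, s_lh, s_hh) in similarities.items():
        total = s_hl + s_lh + s_hh
        if total > best_s:
            best_id, best_s = model_id, total
    return best_id

gallery = {"subject_A": (0.80, 0.70, 0.60), "subject_B": (0.90, 0.85, 0.80)}
print(identify(gallery))  # subject_B
```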
- Figure 10 is a color map of the geometric image. Each pixel of the geometric image has three-dimensional coordinate attributes x, y, and z; this figure uses the x, y, and z attributes as the RGB attributes of a color image and displays the geometric image as a color map. It shows the same geometric image as Figure 6.
- the processing steps for the test model and the library set model include a pre-processing step, an intermediate frequency information image extraction step, a wavelet domain structure similarity calculation step, and an identification step.
- Step 1 preprocesses the test model and the library set model respectively, and the preprocessing is:
- the position of the tip of the nose is determined according to the shape index and geometric constraints of the face point cloud.
- the shape index S(v) of any point v on the face point cloud is determined by its maximum principal curvature κ1(v) and minimum principal curvature κ2(v):
- the shape index feature indicates the degree of convexity or concavity of the neighborhood in which a point is located; the more convex the local shape, the larger the shape index value.
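As a sketch, the widely used Dorai–Jain form of the shape index is shown below; the patent's own formula is not legible in this extraction, so this specific expression is an assumption:

```python
import math

def shape_index(k1, k2):
    """Shape index from principal curvatures, assuming k1 > k2
    (Dorai-Jain form; the patent's exact formula is not reproduced here)."""
    return 0.5 - (1.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

# A symmetric saddle (k1 = -k2) sits exactly at mid-scale
print(shape_index(1.0, -1.0))  # 0.5
```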
- the centroid of the face point cloud is calculated, and among the nose tip candidate regions the connected region closest to the centroid is selected as the nose tip region; the centroid of the nose tip region is taken as the nose tip point.
- with the nose tip as the center, a sphere of radius 90 mm is constructed; points falling outside the sphere are discarded, and the points inside the sphere are retained as the face region for subsequent processing.
- Step 1.2 Smoothing the surface of the face
- the pose of the cropped face point cloud is corrected by the Principal Component Analysis (PCA) method.
- Three principal axis directions are obtained by principal component analysis.
- with the nose tip as the origin, the eigenvector corresponding to the largest eigenvalue is selected as one coordinate axis and the eigenvector corresponding to the smallest eigenvalue as the Z axis, and a right-handed coordinate system is established as the spatial three-dimensional coordinate system.
- each point in the face point cloud is uniquely represented by its x, y, and z coordinates in this system.
- the face point cloud in the spatial 3D coordinate system is projected onto the XOY plane, and then the 2D meshing operation of the projected point cloud is performed, and the surface reconstruction is performed by the 2.5-dimensional meshing algorithm of the point cloud to obtain an approximate representation of the face surface.
- Step 1.3 Cutting the upper half of the face
- the spatial sampling method is used to dilute the points of the upper half of the face; this dilution is simple and effective, reducing the number of points without distortion and yielding a spatially uniform point distribution.
- the spatial sampling interval used in the present invention is 1 mm.
- the specific dilution method is as follows:
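A plausible implementation of the dilution is voxel-grid subsampling at the 1 mm interval; the `dilute` helper below is an assumed sketch, not the patent's procedure:

```python
def dilute(points, spacing=1.0):
    """Uniform spatial sampling: keep one representative point per
    spacing-sized voxel (an assumed implementation of the dilution)."""
    seen = {}
    for (x, y, z) in points:
        key = (int(x // spacing), int(y // spacing), int(z // spacing))
        if key not in seen:  # first point wins inside each voxel
            seen[key] = (x, y, z)
    return list(seen.values())

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 0.1, 0.1)]
print(len(dilute(pts)))  # 2: the first two points share a voxel
```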
- the diluted upper half of the face model is triangulated again to generate n triangular patches.
- the side lengths l_i1, l_i2, l_i3, i = 1, 2, ..., n, of each spatial triangle in the generated 3D face mesh are calculated and saved; the average length of all triangle edges is l̄. If a triangular patch contains an edge longer than 4·l̄, the patch is discarded and its vertices are retained.
- the preprocessing process transforms the test model and the library set model into three-dimensional face meshes with consistent smoothness and point density.
- Step 2.1 Map the point cloud coordinate information of the test model and the library set model face to the plane respectively, and form the geometric image of the test model and the library set model respectively.
- the method for obtaining the geometric image is as follows:
- the plane mesh is obtained; any vertex of the regular quadrilateral on the plane is taken as the origin, the directions of the two sides meeting at the origin are taken as the positive directions, and the counterclockwise coordinate system MON is established, in which any point on the plane is uniquely represented by its m and n coordinates.
- b points are uniformly sampled counterclockwise from the origin on the four sides of the quadrilateral as sampling points.
- Step 2.1.2 Generate a geometric image
- the three-dimensional coordinates (x_k, y_k, z_k) of each face mesh vertex p_k are attached as attributes to the corresponding point (m_k, n_k) of the parametric mesh, and linear interpolation then determines the attributes of each pixel within the regular quadrilateral region, giving a two-dimensional image with three-dimensional coordinate attributes, called the geometric image G.
- Step 2.2 Filter the geometric image G of the test model and the library set model respectively, and obtain the intermediate frequency information of the test model and the library set model.
- the filtering of the geometric image adopts the following methods:
- Step 2.2.1.1 Divide the geometric image G of size 512 x 512 pixels into blocks of 2 x 2 pixels, and apply Haar wavelet filtering to each block with the Haar transformation matrix H. Denote a block by A:
- A = [a11 a12; a21 a22], where a11, a12, a21, and a22 are the elements of the block.
- the row transformation and column transformation are performed on A in turn; the elements of the filtered block are the low-frequency approximation coefficient of block A, the horizontal high-frequency component of block A, the vertical high-frequency component of block A, and the diagonal high-frequency component of block A.
- the low-frequency approximation coefficients of all blocks, arranged in the order of the blocks to which they belong, form the set LL1 of low-frequency coefficients; the horizontal high-frequency components, arranged in the same order, form the set HL1 of horizontal high-frequency coefficients; the vertical high-frequency components form the set LH1 of vertical high-frequency coefficients; and the diagonal high-frequency components form the set HH1 of diagonal high-frequency coefficients.
- the Haar wavelet filtering is performed again on the low-frequency coefficient set, and the low-frequency coefficient set and the horizontal, vertical, and diagonal high-frequency coefficient sets of the second filtering are output, recorded as LL2, HL2, LH2, and HH2 respectively.
- the loop filtering is performed 5 times in this way, each time taking the low-frequency coefficient set output by the previous filtering as input, and outputting new low-frequency and horizontal, vertical, and diagonal high-frequency coefficient sets.
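The five-pass cascade can be sketched as follows. A 32x32 toy grid stands in for the 512x512 geometric image; after 5 halvings a 512x512 image yields 16x16 coefficient sets, consistent with the 16 x 16 channels used in step 3. The matrix normalization H = 0.5*[[1, 1], [1, -1]] is an assumption:

```python
def haar_level(img):
    """One Haar pass on an even-sized grid: returns (LL, HL, LH, HH)."""
    n = len(img) // 2
    LL = [[0.0] * n for _ in range(n)]
    HL = [[0.0] * n for _ in range(n)]
    LH = [[0.0] * n for _ in range(n)]
    HH = [[0.0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            a, b = img[2*r][2*c], img[2*r][2*c+1]
            d, e = img[2*r+1][2*c], img[2*r+1][2*c+1]
            LL[r][c] = (a + b + d + e) / 4
            HL[r][c] = (a - b + d - e) / 4
            LH[r][c] = (a + b - d - e) / 4
            HH[r][c] = (a - b - d + e) / 4
    return LL, HL, LH, HH

def multiscale(img, levels=5):
    """Refilter the low-frequency set each pass, as in step 2.2.1."""
    bands = []
    low = img
    for _ in range(levels):
        low, hl, lh, hh = haar_level(low)
        bands.append((hl, lh, hh))
    return low, bands

low, bands = multiscale([[1.0] * 32 for _ in range(32)], levels=5)
print(len(low), len(bands))  # 1 5
```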
- Step 2.2.2 Extracting the intermediate frequency information image
- the 3D face mesh is converted into horizontal, vertical, and diagonal intermediate frequency information images, and the matching of the test model and the library set model is converted into matching of the corresponding intermediate frequency information images.
- Step 3 Calculate the similarity between the test model and the library set model by using the wavelet domain structured similarity algorithm.
- the calculation method is as follows:
- Step 3.1 Calculate the similarity S HL of the horizontal IF information image of the test model and the horizontal IF information image of the library set model, the similarity H of the vertical IF information image of the test model and the vertical IF information image of the library set model, test model The similarity between the diagonal intermediate frequency information image and the diagonal intermediate frequency information image of the library set model ⁇ ⁇ , will ⁇ ⁇ ,
- S LH and S HH are added as the similarity between the test model and the library set model, and the ⁇ ⁇ , S LH , and S HH respectively use the horizontal intermediate frequency information image to be matched, the vertical intermediate frequency information image, and the diagonal intermediate frequency information image.
- the wavelet domain structured similarity algorithm is used, and the wavelet domain structured similarity algorithm is:
- Step 3.1.1 According to the three attributes of X, y, Z of each pixel of the horizontal intermediate frequency information image, the vertical intermediate frequency information image and the diagonal intermediate frequency information image, the horizontal intermediate frequency information image, the vertical intermediate frequency information image and the pair respectively
- the X attributes of all the pixels of the angular intermediate frequency information image are arranged in the order of the pixels to which they belong, and constitute the X channel of the horizontal intermediate frequency information image, the vertical intermediate frequency information image, and the diagonal intermediate frequency information image, respectively, and so on, respectively, and are horizontally constructed.
- the y channel and z channel of the intermediate frequency information image, the vertical intermediate frequency information image, and the diagonal intermediate frequency information image are recorded as:
- x, y or z expressed as x channel, y channel or z channel
- ⁇ is (the first line of the ⁇ , c 1 2 is the element of the first column and the second column of ⁇ , ..., c 2 1 is the element of the 2nd row and 1st column of ⁇
- "16,16 is the element of the 16th row and the 16th column
- The similarity S_HL, S_LH or S_HH of the horizontal intermediate frequency information image, the vertical intermediate frequency information image or the diagonal intermediate frequency information image is obtained by the following method: use C^p to denote the x, y or z channel of an intermediate frequency information image of the test model, and C^g to denote the same channel of the corresponding intermediate frequency information image of the library set model, where p denotes the test model, g denotes the model from the library set, and a and b index the rows and columns of the elements of the channel matrices. Taking each element c^p_{a,b} as the central element of a 3×3 pixel neighbourhood in C^p, with c^g_{a,b} the central element of the corresponding 3×3 pixel neighbourhood in C^g, the structural similarity of c^p_{a,b} and c^g_{a,b} is computed over these neighbourhoods.
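The matching procedure described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the one-level Haar transform, the SSIM stabilizing constants, the restriction to a single channel, and the function names (`haar_dwt2`, `local_ssim`, `subband_similarity`, `model_similarity`) are all assumptions chosen to mirror the described pipeline of decomposing a geometry image into mid-frequency sub-bands and averaging 3×3-neighbourhood structural similarities.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform; returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def local_ssim(x, y, c1=1e-4, c2=1e-4):
    """Structural similarity of two equally sized patches (c1, c2 stabilize the ratio)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def subband_similarity(p, g, win=3):
    """Average local SSIM between two sub-bands over win x win neighbourhoods."""
    r = win // 2
    h, w = p.shape
    scores = [local_ssim(p[i - r:i + r + 1, j - r:j + r + 1],
                         g[i - r:i + r + 1, j - r:j + r + 1])
              for i in range(r, h - r) for j in range(r, w - r)]
    return float(np.mean(scores))

def model_similarity(test_img, gallery_img):
    """S_HL + S_LH + S_HH between one channel of a test and a library geometry image."""
    _, lh_t, hl_t, hh_t = haar_dwt2(test_img)
    _, lh_g, hl_g, hh_g = haar_dwt2(gallery_img)
    return (subband_similarity(hl_t, hl_g)
            + subband_similarity(lh_t, lh_g)
            + subband_similarity(hh_t, hh_g))
```

For a full match the patent applies this per x, y and z channel of each model's intermediate frequency information images and sums the results; two identical channels score close to 3, since each sub-band similarity is 1.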
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020137001007A KR101314131B1 (ko) | 2011-12-21 | 2012-02-28 | 기하학적 이미지 중의 중파 정보에 기반한 삼차원 얼굴 식별방법 |
US14/364,280 US9117105B2 (en) | 2011-12-21 | 2012-02-28 | 3D face recognition method based on intermediate frequency information in geometric image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110431073.2 | 2011-12-21 | ||
CN2011104310732A CN102592136B (zh) | 2011-12-21 | 2011-12-21 | 基于几何图像中中频信息的三维人脸识别方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013091304A1 true WO2013091304A1 (zh) | 2013-06-27 |
Family
ID=46480746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2012/071728 WO2013091304A1 (zh) | 2011-12-21 | 2012-02-28 | 基于几何图像中中频信息的三维人脸识别方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9117105B2 (zh) |
KR (1) | KR101314131B1 (zh) |
CN (1) | CN102592136B (zh) |
WO (1) | WO2013091304A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520215A (zh) * | 2018-03-28 | 2018-09-11 | 电子科技大学 | 基于多尺度联合特征编码器的单样本人脸识别方法 |
CN113362465A (zh) * | 2021-06-04 | 2021-09-07 | 中南大学 | 非刚性三维形状逐点对应方法及人体心脏运动仿真方法 |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160027172A1 (en) | 2012-04-04 | 2016-01-28 | James G. Spahn | Method of Monitoring the Status of a Wound |
KR102094723B1 (ko) * | 2012-07-17 | 2020-04-14 | 삼성전자주식회사 | 견고한 얼굴 표정 인식을 위한 특징 기술자 |
US8437513B1 (en) * | 2012-08-10 | 2013-05-07 | EyeVerify LLC | Spoof detection for biometric authentication |
CN102902967B (zh) * | 2012-10-16 | 2015-03-11 | 第三眼(天津)生物识别科技有限公司 | 基于人眼结构分类的虹膜和瞳孔的定位方法 |
WO2014126213A1 (ja) * | 2013-02-15 | 2014-08-21 | Necソリューションイノベータ株式会社 | 類似判断の候補配列情報の選択装置、選択方法、およびそれらの用途 |
CN103489011A (zh) * | 2013-09-16 | 2014-01-01 | 广东工业大学 | 一种具有拓扑鲁棒性的三维人脸识别方法 |
EP3084682B1 (en) * | 2013-12-19 | 2019-07-24 | Avigilon Fortress Corporation | System and method for identifying faces in unconstrained media |
RU2014111792A (ru) * | 2014-03-27 | 2015-10-10 | ЭлЭсАй Корпорейшн | Процессор изображений, содержащий систему распознавания лиц на основании преобразования двухмерной решетки |
CN104112115A (zh) * | 2014-05-14 | 2014-10-22 | 南京国安光电科技有限公司 | 一种三维人脸检测与识别技术 |
CN104474709A (zh) * | 2014-11-24 | 2015-04-01 | 苏州福丰科技有限公司 | 一种基于三维人脸识别的游戏方法 |
CN104504410A (zh) * | 2015-01-07 | 2015-04-08 | 深圳市唯特视科技有限公司 | 基于三维点云的三维人脸识别装置和方法 |
CN104573722A (zh) * | 2015-01-07 | 2015-04-29 | 深圳市唯特视科技有限公司 | 基于三维点云的三维人脸种族分类装置和方法 |
CN106485186B (zh) * | 2015-08-26 | 2020-02-18 | 阿里巴巴集团控股有限公司 | 图像特征提取方法、装置、终端设备及系统 |
CN106447624A (zh) * | 2016-08-31 | 2017-02-22 | 上海交通大学 | 一种基于l0范数的三维网格去噪方法 |
CN107958489B (zh) * | 2016-10-17 | 2021-04-02 | 杭州海康威视数字技术股份有限公司 | 一种曲面重建方法及装置 |
KR20180065135A (ko) * | 2016-12-07 | 2018-06-18 | 삼성전자주식회사 | 셀프 구조 분석을 이용한 구조 잡음 감소 방법 및 장치 |
US10860841B2 (en) | 2016-12-29 | 2020-12-08 | Samsung Electronics Co., Ltd. | Facial expression image processing method and apparatus |
CN107358655B (zh) * | 2017-07-27 | 2020-09-22 | 秦皇岛燕大燕软信息系统有限公司 | 基于离散平稳小波变换的半球面和圆锥面模型的辨识方法 |
US10861196B2 (en) | 2017-09-14 | 2020-12-08 | Apple Inc. | Point cloud compression |
US11818401B2 (en) | 2017-09-14 | 2023-11-14 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables |
US10909725B2 (en) | 2017-09-18 | 2021-02-02 | Apple Inc. | Point cloud compression |
US11113845B2 (en) | 2017-09-18 | 2021-09-07 | Apple Inc. | Point cloud compression using non-cubic projections and masks |
CN107748871B (zh) * | 2017-10-27 | 2021-04-06 | 东南大学 | 一种基于多尺度协方差描述子与局部敏感黎曼核稀疏分类的三维人脸识别方法 |
KR101958194B1 (ko) * | 2017-11-30 | 2019-03-15 | 인천대학교 산학협력단 | 변환 도메인의 대조비 개선 장치 및 방법 |
CN108734772A (zh) * | 2018-05-18 | 2018-11-02 | 宁波古德软件技术有限公司 | 基于Kinect fusion的高精度深度图像获取方法 |
US11017566B1 (en) | 2018-07-02 | 2021-05-25 | Apple Inc. | Point cloud compression with adaptive filtering |
CN109145716B (zh) * | 2018-07-03 | 2019-04-16 | 南京思想机器信息科技有限公司 | 基于脸部识别的登机口检验平台 |
US11202098B2 (en) | 2018-07-05 | 2021-12-14 | Apple Inc. | Point cloud compression with multi-resolution video encoding |
US11367224B2 (en) | 2018-10-02 | 2022-06-21 | Apple Inc. | Occupancy map block-to-patch information compression |
CN109379511B (zh) * | 2018-12-10 | 2020-06-23 | 盎锐(上海)信息科技有限公司 | 3d数据安全加密算法及装置 |
CN109978989B (zh) * | 2019-02-26 | 2023-08-01 | 腾讯科技(深圳)有限公司 | 三维人脸模型生成方法、装置、计算机设备及存储介质 |
CN109919876B (zh) * | 2019-03-11 | 2020-09-01 | 四川川大智胜软件股份有限公司 | 一种三维真脸建模方法及三维真脸照相系统 |
US11030801B2 (en) * | 2019-05-17 | 2021-06-08 | Standard Cyborg, Inc. | Three-dimensional modeling toolkit |
US11074753B2 (en) * | 2019-06-02 | 2021-07-27 | Apple Inc. | Multi-pass object rendering using a three- dimensional geometric constraint |
US11711544B2 (en) | 2019-07-02 | 2023-07-25 | Apple Inc. | Point cloud compression with supplemental information messages |
US10853631B2 (en) | 2019-07-24 | 2020-12-01 | Advanced New Technologies Co., Ltd. | Face verification method and apparatus, server and readable storage medium |
CN110675413B (zh) * | 2019-09-27 | 2020-11-13 | 腾讯科技(深圳)有限公司 | 三维人脸模型构建方法、装置、计算机设备及存储介质 |
US11409998B2 (en) * | 2019-10-02 | 2022-08-09 | Apple Inc. | Trimming search space for nearest neighbor determinations in point cloud compression |
US11895307B2 (en) | 2019-10-04 | 2024-02-06 | Apple Inc. | Block-based predictive coding for point cloud compression |
CN111027515A (zh) * | 2019-12-24 | 2020-04-17 | 神思电子技术股份有限公司 | 一种人脸库照片更新方法 |
CN111079701B (zh) * | 2019-12-30 | 2023-03-24 | 陕西西图数联科技有限公司 | 一种基于图像质量的人脸防伪方法 |
US11593967B2 (en) | 2020-01-08 | 2023-02-28 | Samsung Electronics Co., Ltd. | Attribute transfer in V-PCC |
US11798196B2 (en) | 2020-01-08 | 2023-10-24 | Apple Inc. | Video-based point cloud compression with predicted patches |
CN111402401B (zh) * | 2020-03-13 | 2023-08-18 | 北京华捷艾米科技有限公司 | 一种获取3d人脸数据方法、人脸识别方法及装置 |
CN111640055B (zh) * | 2020-05-22 | 2023-04-11 | 构范(厦门)信息技术有限公司 | 一种二维人脸图片变形方法及系统 |
CN112418030B (zh) * | 2020-11-11 | 2022-05-13 | 中国标准化研究院 | 一种基于三维点云坐标的头面部号型分类方法 |
CN112785615A (zh) * | 2020-12-04 | 2021-05-11 | 浙江工业大学 | 一种基于扩展二维经验小波变换的工程表面多尺度滤波方法 |
CN112686230B (zh) * | 2021-03-12 | 2021-06-22 | 腾讯科技(深圳)有限公司 | 对象识别方法、装置、设备以及存储介质 |
CN113111548B (zh) * | 2021-03-27 | 2023-07-21 | 西北工业大学 | 一种基于周角差值的产品三维特征点提取方法 |
US11948338B1 (en) | 2021-03-29 | 2024-04-02 | Apple Inc. | 3D volumetric content encoding using 2D videos and simplified 3D meshes |
CN113409377B (zh) * | 2021-06-23 | 2022-09-27 | 四川大学 | 一种基于跳跃连接式生成对抗网络的相位展开方法 |
CN113610039B (zh) * | 2021-08-17 | 2024-03-15 | 北京融合汇控科技有限公司 | 基于云台相机的风漂异物识别方法 |
CN113902781A (zh) * | 2021-10-18 | 2022-01-07 | 深圳追一科技有限公司 | 三维人脸重建方法、装置、设备及介质 |
CN114296050B (zh) * | 2022-03-07 | 2022-06-07 | 南京鼐云信息技术有限责任公司 | 基于激光雷达云图探测的光伏电站短期发电功率预测方法 |
CN114821720A (zh) * | 2022-04-25 | 2022-07-29 | 广州瀚信通信科技股份有限公司 | 人脸检测方法、装置、系统、设备及存储介质 |
CN116226426B (zh) * | 2023-05-09 | 2023-07-11 | 深圳开鸿数字产业发展有限公司 | 基于形状的三维模型检索方法、计算机设备和存储介质 |
CN117974817B (zh) * | 2024-04-02 | 2024-06-21 | 江苏狄诺尼信息技术有限责任公司 | 基于图像编码的三维模型纹理数据高效压缩方法及系统 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020159627A1 (en) * | 2001-02-28 | 2002-10-31 | Henry Schneiderman | Object finder for photographic images |
US20090310828A1 (en) * | 2007-10-12 | 2009-12-17 | The University Of Houston System | An automated method for human face modeling and relighting with application to face recognition |
CN101986328A (zh) * | 2010-12-06 | 2011-03-16 | 东南大学 | 一种基于局部描述符的三维人脸识别方法 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60216411T2 (de) | 2001-08-23 | 2007-10-04 | Sony Corp. | Robotervorrichtung, gesichtserkennungsverfahren und gesichtserkennungsvorrichtung |
KR100608595B1 (ko) | 2004-11-16 | 2006-08-03 | 삼성전자주식회사 | 얼굴 인식 방법 및 장치 |
KR100723417B1 (ko) | 2005-12-23 | 2007-05-30 | 삼성전자주식회사 | 얼굴 인식 방법, 그 장치, 이를 위한 얼굴 영상에서 특징추출 방법 및 그 장치 |
CN100409249C (zh) * | 2006-08-10 | 2008-08-06 | 中山大学 | 一种基于网格的三维人脸识别方法 |
CN101261677B (zh) * | 2007-10-18 | 2012-10-24 | 周春光 | 人脸的特征提取方法 |
CN101650777B (zh) * | 2009-09-07 | 2012-04-11 | 东南大学 | 一种基于密集点对应的快速三维人脸识别方法 |
- 2011
  - 2011-12-21 CN CN2011104310732A patent/CN102592136B/zh not_active Expired - Fee Related
- 2012
  - 2012-02-28 WO PCT/CN2012/071728 patent/WO2013091304A1/zh active Application Filing
  - 2012-02-28 KR KR1020137001007A patent/KR101314131B1/ko active IP Right Grant
  - 2012-02-28 US US14/364,280 patent/US9117105B2/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020159627A1 (en) * | 2001-02-28 | 2002-10-31 | Henry Schneiderman | Object finder for photographic images |
US20090310828A1 (en) * | 2007-10-12 | 2009-12-17 | The University Of Houston System | An automated method for human face modeling and relighting with application to face recognition |
CN101986328A (zh) * | 2010-12-06 | 2011-03-16 | 东南大学 | 一种基于局部描述符的三维人脸识别方法 |
Non-Patent Citations (1)
Title |
---|
CAI, LIANG ET AL.: "Three dimensions face recognition by using shape filtering and geometry image", JOURNAL OF IMAGE AND GRAPHICS, vol. 16, no. 7, July 2011 (2011-07-01), pages 1303 - 1309 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520215A (zh) * | 2018-03-28 | 2018-09-11 | 电子科技大学 | 基于多尺度联合特征编码器的单样本人脸识别方法 |
CN113362465A (zh) * | 2021-06-04 | 2021-09-07 | 中南大学 | 非刚性三维形状逐点对应方法及人体心脏运动仿真方法 |
CN113362465B (zh) * | 2021-06-04 | 2022-07-15 | 中南大学 | 非刚性三维形状逐点对应方法及人体心脏运动仿真方法 |
Also Published As
Publication number | Publication date |
---|---|
CN102592136A (zh) | 2012-07-18 |
KR20130084654A (ko) | 2013-07-25 |
US9117105B2 (en) | 2015-08-25 |
US20140355843A1 (en) | 2014-12-04 |
CN102592136B (zh) | 2013-10-16 |
KR101314131B1 (ko) | 2013-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013091304A1 (zh) | 基于几何图像中中频信息的三维人脸识别方法 | |
CN110348330B (zh) | 基于vae-acgan的人脸姿态虚拟视图生成方法 | |
US10083366B2 (en) | Edge-based recognition, systems and methods | |
US7512255B2 (en) | Multi-modal face recognition | |
JP4445864B2 (ja) | 三次元顔認識 | |
JP2021507394A (ja) | 多特徴検索と変形に基づく人体髪型の生成方法 | |
WO2022041627A1 (zh) | 一种活体人脸检测方法及系统 | |
WO2017059591A1 (zh) | 手指静脉识别方法及装置 | |
CN111091075B (zh) | 人脸识别方法、装置、电子设备及存储介质 | |
WO2015067084A1 (zh) | 人眼定位方法和装置 | |
WO2012126135A1 (en) | Method of augmented makeover with 3d face modeling and landmark alignment | |
CN103971122B (zh) | 基于深度图像的三维人脸描述方法 | |
CN109766866B (zh) | 一种基于三维重建的人脸特征点实时检测方法和检测系统 | |
CN102779269A (zh) | 基于图像传感器成像系统的人脸识别算法 | |
WO2018133119A1 (zh) | 基于深度相机进行室内完整场景三维重建的方法及系统 | |
JP5018029B2 (ja) | 認証システム及び認証方法 | |
CN106415606B (zh) | 一种基于边缘的识别、系统和方法 | |
Bastias et al. | A method for 3D iris reconstruction from multiple 2D near-infrared images | |
CN109074471B (zh) | 一种基于主动外观模型的虹膜区域分割方法及装置 | |
CN108090460B (zh) | 基于韦伯多方向描述子的人脸表情识别特征提取方法 | |
Stylianou et al. | Image based 3d face reconstruction: a survey | |
Kong et al. | Effective 3d face depth estimation from a single 2d face image | |
Cheng et al. | Tree skeleton extraction from a single range image | |
CN111753652B (zh) | 一种基于数据增强的三维人脸识别方法 | |
Ramadan et al. | 3D Face compression and recognition using spherical wavelet parametrization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20137001007 Country of ref document: KR Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12858733 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 14364280 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 12858733 Country of ref document: EP Kind code of ref document: A1 |