CN106874850A - A three-dimensional face point cloud feature point localization method - Google Patents

A three-dimensional face point cloud feature point localization method

Info

Publication number
CN106874850A
CN106874850A CN201710018786.3A
Authority
CN
China
Prior art keywords
point
cloud
rotation
image
rotation image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710018786.3A
Other languages
Chinese (zh)
Inventor
张灵
朱思豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201710018786.3A
Publication of CN106874850A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention presents a three-dimensional face point cloud feature point localization method. First, the face point cloud data are denoised by bilateral filtering; the HK (mean curvature and Gaussian curvature) surface-shape description then marks out candidate regions for facial feature points, and the spin images formed at the feature points are compared, locating the nose tip and the left and right inner eye corners under arbitrary pose. Because the original spin image algorithm requires too much computation and too much running time for practical use, this method builds on it by reconstructing the local surface only for the small number of screened key feature points, avoiding the processing of large numbers of insignificant points and improving the real-time performance of the spin image algorithm. Test results on the GavabDB database show that the method achieves a best recognition rate of 95.37%, with a degree of robustness to pose and expression changes. The above steps constitute the whole localization process.

Description

A three-dimensional face point cloud feature point localization method
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a face point cloud feature point localization method that can be used for three-dimensional face recognition.
Background technology
Point cloud data are discrete samples obtained by scanning the surface of a real object, and curvature is an important local geometric attribute of a surface, so curvature can be used to analyze and identify feature points of three-dimensional face point clouds. In recent years, scholars at home and abroad have proposed various methods concerning the curvature of three-dimensional point clouds. Li Qian surveyed the state of research on point cloud data at home and abroad, focusing on point cloud processing methods based on curvature features. Jiang Jianguo located the nose tip, saddle points and the left and right pupils by computing local surface curvature combined with gray-level information. Ganguly demonstrated the superiority of principal curvature, Gaussian curvature, and maximum and minimum curvature as surface-shape descriptors for three-dimensional face recognition. For feature point localization, Wang Migong achieved accurate localization of the nose tip and inner eye corners with an improved statistical model of the Local Shape Map; Wang Jinjiang extracted feature points from point cloud data with the Point Signature local surface shape descriptor; and Miao Yongwei proposed a measure of local surface similarity for 3D shapes based on spin image analysis of vertex neighborhoods on the model surface. Through comparison and analysis of the published algorithms, this paper presents a face point cloud feature localization method based on Spin Images. The method locates the nose tip and the left and right inner eye corners of three-dimensional face point clouds under arbitrary expression and pose.
The content of the invention
The spin image representation transforms the geometric position of points on a three-dimensional object surface relative to a reference oriented point into a two-dimensional image, which makes it easy to characterize object features. By controlling the number of contributing points through a support distance and a support angle, the spin image describes the surface shape within a region while reducing the influence of clutter and self-occlusion on the image; it has proved to be an accurate method for three-dimensional surface recognition, but computing a spin image for every point requires much computation and time. Inspired by this, this paper locates feature points accurately with spin images while reducing computational cost: before spin images are computed, the HK (mean curvature and Gaussian curvature) surface-shape description classifies the local shape of the point cloud to obtain candidate feature regions. Based on prior knowledge, three feature points (the nose tip and the left and right inner eye corners) are marked by hand in the candidate regions, and the spin images formed at these three points serve as the feature point templates of this paper. Finally, the spin images of candidate feature points of other point clouds are compared against the templates for similarity, thereby locating the nose tip and the left and right inner eye corners of the face point cloud.
The purpose of the present invention is achieved through the following technical solutions:
S1: Because face point cloud data inevitably contain noise introduced during acquisition, which would degrade feature point localization, the point cloud data are first preprocessed with the bilateral filtering denoising method. The update formula is:
V′ ← V + d·n
where V is any vertex in the point cloud, d is the bilateral filtering weight factor, and n is the normal direction of the current vertex V.
S2: The HK (mean curvature and Gaussian curvature) surface-shape description is applied to the preprocessed point cloud data to mark out candidate regions for facial feature points. H and K are computed as:

H = ((1 + I_x^2) I_{yy} - 2 I_x I_y I_{xy} + (1 + I_y^2) I_{xx}) / (2 (1 + I_x^2 + I_y^2)^{3/2})
K = (I_{xx} I_{yy} - I_{xy}^2) / (1 + I_x^2 + I_y^2)^2

where I_x, I_y are the first partial derivatives along x and y, and I_{xy}, I_{xx}, I_{yy} the corresponding second partials. Vertices are classified by the signs of the mean and Gaussian curvature.
S3: Spin images are generated for points in the candidate regions. The spin image is a global registration technique for three-dimensional object surfaces: take any point on the three-dimensional face point cloud model, compute its normal vector to obtain the tangent plane, then map the remaining points with respect to that point; the resulting two-dimensional histogram is the point's spin image. By construction, the spin image is a comprehensive geometric representation of the three-dimensional surface. Its mapping is:

S_O: R^3 → R^2, S_O(x) → (α, β) = (sqrt(‖x − p‖^2 − (n·(x − p))^2), n·(x − p))
S4: The linear correlation coefficient R measures the similarity between the spin images formed at candidate points and the spin images of the pre-marked nose tip and left and right inner eye corner templates, thereby locating the nose tip and the left and right inner eye corners under arbitrary pose. Because the linear correlation coefficient is computed over pixel values, the number of overlapping pixels of the two spin images affects it. R ranges over (−1, 1): the closer to 1, the more similar the two spin images; the closer to −1, the more dissimilar. The threshold used here is 0.9. The formula is:

R(P, Q) = (N Σ p_i q_i − Σ p_i Σ q_i) / sqrt((N Σ p_i^2 − (Σ p_i)^2)(N Σ q_i^2 − (Σ q_i)^2))

where P and Q are the two spin images, N is the number of overlapping pixels, and p_i, q_i are the pixel values of the spin images.
Brief description of the drawings
Fig. 1 shows example spin images at different face locations; from top to bottom: forehead point, nose tip, cheek point and chin point.
Fig. 2 shows the feature point candidate regions (black areas) generated by HK classification of a face point cloud, where A and B are the elongated concave regions at the left and right eye corners and C is the elliptical convex region at the nose; (a) and (b) show the HK classification of the face point cloud, and (c) shows the feature point template marked from prior knowledge.
Fig. 3 compares spin images formed on different face point clouds. On each of the left and right face point clouds, two of the points are the left inner eye corner and the nose tip; the bottom point of the left cloud lies on the left cheek, and the bottom point of the right cloud lies on the philtrum.
Fig. 4 shows the similarity comparison of spin images. A and B are spin images generated from the nose tips of different face point clouds, and C from a point on the philtrum; comparing A with B and A with C gives similarities R = 0.914 and R = 0.167, respectively.
Fig. 5 shows partial localization results under different poses and expressions.
Fig. 6 illustrates the spin image parameters.
Specific embodiment
Experimental situation
Intel(R) Core(TM) i5 CPU at 2.67 GHz, 8.0 GB RAM.
Windows 8 operating system.
The experiments use the GavabDB 3D face database built by A.B. Moreno. The database contains 549 three-dimensional facial surface images of 61 people (45 men and 16 women), with 9 different images per person, most aged between 18 and 40. Pose and expression vary systematically: each subject has 2 neutral frontal faces, 4 neutral rotated images, and 3 frontal images with pronounced expression changes. Each image records the three-dimensional coordinates of points and their connectivity; the three-dimensional coordinates of each image are extracted here to form the experimental point cloud library. Simulation software: MATLAB 2015a.
Two groups of point cloud data were selected to compare the running time of the original spin image algorithm against the proposed method, which computes spin images only for candidate regions. At nearly identical accuracy, the original algorithm takes much longer, at least an hour, while the proposed method substantially reduces the running time.
Experiment content
The experiments of the invention use the GavabDB 3D face database described above. Because face point cloud data inevitably contain noise introduced during acquisition, which would degrade feature point localization, a robust preprocessing method is applied to the data before feature points are located.
The concrete steps of the invention are as follows:
S1: The face point cloud data of the database are first denoised with the bilateral filtering method. The principle of bilateral filtering is that the weight of a neighbor point during point cloud denoising is not determined by a single factor, but jointly by its spatial distance to the center point and its depth similarity to the center point; depth similarity here means the projection of the neighbor-to-center distance vector onto the center point's normal. The algorithm needs no detailed topology information, is simple to compute, and runs fast. It is defined as:

V′ ← V + d·n

where V is any vertex in the point cloud, d is the bilateral filtering weight factor, and n is the normal direction of the current vertex V. The key of the bilateral filtering process is obtaining the filtering weight factor, so that the vertices of the point cloud model move along their normal directions and the noise is smoothed away. The bilateral filtering weight factor d of the point cloud model is defined as:

d = Σ_{p_i∈N(v)} W_c(‖v − p_i‖) W_s(⟨n, v − p_i⟩) ⟨n, v − p_i⟩ / Σ_{p_i∈N(v)} W_c(‖v − p_i‖) W_s(⟨n, v − p_i⟩)

where N(v) is the set of all neighborhood points of vertex v, ‖v − p_i‖ is the distance from the current neighbor to the center point v, and ⟨n, v − p_i⟩ is the depth similarity of neighbor and center, i.e. the projection of their distance vector onto the center point's normal n.
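The per-vertex update above can be sketched in NumPy. The function name and the neighborhood representation are illustrative, not from the patent; following common practice for bilateral mesh denoising, the depth term here uses p_i − v so that the vertex moves toward its neighbors (the patent's ⟨n, v − p_i⟩ convention differs only in the sign of d).

```python
import numpy as np

def bilateral_filter_vertex(v, n, neighbors, sigma_c, sigma_s):
    """One bilateral filtering step V' = V + d*n for a single vertex.

    v         -- (3,) vertex position
    n         -- (3,) unit normal at v
    neighbors -- (k, 3) neighborhood points N(v)
    sigma_c   -- spatial kernel width (half the farthest-neighbor distance)
    sigma_s   -- depth kernel width (std of the depth projections)
    """
    diffs = neighbors - v                           # p_i - v
    spatial = np.linalg.norm(diffs, axis=1)         # ||v - p_i||
    depth = diffs @ n                               # projection onto the normal
    w_c = np.exp(-spatial**2 / (2.0 * sigma_c**2))  # smoothing weight W_c
    w_s = np.exp(-depth**2 / (2.0 * sigma_s**2))    # feature-preserving weight W_s
    d = np.sum(w_c * w_s * depth) / np.sum(w_c * w_s)
    return v + d * n                                # move along the normal
```

If all neighbors sit at the same offset along the normal, d equals that offset exactly and the vertex lands on their plane; symmetric offsets cancel and leave the vertex in place.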
S2: The HK (mean curvature and Gaussian curvature) surface-shape description is then applied to the preprocessed point cloud data to mark out the feature point candidate regions. The mean of the two principal curvatures is the mean curvature H of the surface; the product of the two principal curvatures is the Gaussian curvature K. They are computed as:

H = ((1 + I_x^2) I_{yy} − 2 I_x I_y I_{xy} + (1 + I_y^2) I_{xx}) / (2 (1 + I_x^2 + I_y^2)^{3/2})
K = (I_{xx} I_{yy} − I_{xy}^2) / (1 + I_x^2 + I_y^2)^2

where I_x, I_y are the first partial derivatives along x and y, and I_{xy}, I_{xx}, I_{yy} the corresponding second partials. Vertices are classified by the signs of the mean and Gaussian curvature. The table below gives the 9 combinations of H and K.
H | K | Surface type | Geometric description
>0 | >0 | Peak | locally convex in all directions
>0 | =0 | (does not occur) |
>0 | <0 | Pit | locally concave in all directions
=0 | >0 | Ridge | locally convex, flat in one direction
=0 | =0 | Plane | planar
=0 | <0 | Valley | locally concave, flat in one direction
<0 | >0 | Saddle ridge | mostly convex, partly concave
<0 | =0 | Minimal surface | convex and concave in equal halves
<0 | <0 | Saddle valley | mostly concave, partly convex
HK classification of the face point cloud yields the feature point candidate regions. From prior knowledge of the concavity and convexity of the nose tip and inner eye corners on the point cloud surface, the nose tip and the left and right inner eye corners are marked in the candidate regions as the feature point templates, as shown in Fig. 2(c).
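Assuming the face is represented as a depth map z(x, y), H and K from the formulas above can be computed with finite differences; a minimal NumPy sketch (the function name and grid discretization are illustrative):

```python
import numpy as np

def hk_curvatures(z, x, y):
    """Mean (H) and Gaussian (K) curvature of a depth map z over grids x, y."""
    Iy, Ix = np.gradient(z, y, x)      # first partials (axis 0 = y, axis 1 = x)
    Iyy, _ = np.gradient(Iy, y, x)
    Ixy, Ixx = np.gradient(Ix, y, x)   # second partials
    g = 1.0 + Ix**2 + Iy**2
    H = ((1 + Ix**2) * Iyy - 2 * Ix * Iy * Ixy + (1 + Iy**2) * Ixx) / (2 * g**1.5)
    K = (Ixx * Iyy - Ixy**2) / g**2
    return H, K
```

The signs of H and K at each vertex then index into the table above to pick the surface type; on the paraboloid z = x^2 + y^2, the center evaluates to H = 2 and K = 4.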
S3: Spin images are generated for the candidate region points and the feature point templates. The spin image is a global registration technique for three-dimensional object surfaces: take any point on the three-dimensional face point cloud model, compute its normal vector to obtain the tangent plane, then map the remaining points with respect to that point; the resulting two-dimensional histogram is the point's spin image. By construction, the spin image is a comprehensive geometric representation of the three-dimensional surface. Its mapping is:

S_O: R^3 → R^2

Given a three-dimensional surface O, let p be an oriented point on O and n the normal at p. P is the plane through p perpendicular to n, and the line L passes through p parallel to n. Any other point x maps to the coordinates (α, β): α is the distance from x to the line through p along n, and β is the signed distance from x to the plane P, as shown in Fig. 6. The mapping of the three-dimensional data into the two-dimensional coordinate system determined by (p, n) is S_O; after the oriented point p is chosen, the relation of any surface point x to p is expressed by (α, β), and the spin image is the two-dimensional image of the (α, β) coordinates of the oriented point's neighborhood. As Fig. 1 shows, different locations on a face point cloud generate different spin images.
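The (α, β) mapping and the histogram accumulation can be sketched as follows; the bin size, image extent, and function name are illustrative choices, not fixed by the patent:

```python
import numpy as np

def spin_image(p, n, points, bin_size=0.5, image_width=2.0):
    """Spin image of oriented point (p, n) over the other surface points.

    Each point x maps to alpha = distance to the line through p along n,
    beta = signed distance to the tangent plane; the (alpha, beta) pairs
    are accumulated into a 2D histogram.
    """
    d = points - p
    beta = d @ n                                            # n . (x - p)
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    nbins = int(round(image_width / bin_size))
    rows = np.floor((image_width / 2.0 - beta) / bin_size).astype(int)  # beta runs downward
    cols = np.floor(alpha / bin_size).astype(int)
    img = np.zeros((nbins, nbins))
    ok = (rows >= 0) & (rows < nbins) & (cols >= 0) & (cols < nbins)
    np.add.at(img, (rows[ok], cols[ok]), 1)                 # histogram accumulation
    return img
```

Points outside the image extent are simply dropped, which plays the role of the support-distance cutoff mentioned above.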
S4: The linear correlation coefficient R measures the similarity between the spin images formed at candidate points and the spin images of the pre-marked nose tip and left and right inner eye corner templates, thereby locating the nose tip and the left and right inner eye corners under arbitrary pose. Because the linear correlation coefficient is computed over pixel values, the number of overlapping pixels of the two spin images affects it. In the formula below, R is the linear correlation coefficient, P and Q are the two spin images, N is the number of overlapping pixels, and p_i and q_i are the pixel values of the spin images. R ranges over (−1, 1): the closer to 1, the more similar the two spin images; the closer to −1, the more dissimilar. The threshold used here is 0.9. The formula is:

R(P, Q) = (N Σ p_i q_i − Σ p_i Σ q_i) / sqrt((N Σ p_i^2 − (Σ p_i)^2)(N Σ q_i^2 − (Σ q_i)^2))
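The correlation R and the 0.9 matching threshold amount to a Pearson correlation over the pixel arrays; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def spin_image_correlation(P, Q):
    """Linear correlation coefficient R between two spin images P and Q.

    N is the number of overlapping pixels; p_i, q_i the pixel values.
    """
    p, q = P.ravel().astype(float), Q.ravel().astype(float)
    N = p.size
    num = N * np.sum(p * q) - np.sum(p) * np.sum(q)
    den = np.sqrt((N * np.sum(p * p) - np.sum(p)**2)
                  * (N * np.sum(q * q) - np.sum(q)**2))
    return num / den

def is_match(P, Q, threshold=0.9):
    """Accept a candidate point when its spin image correlates with the template."""
    return spin_image_correlation(P, Q) >= threshold
```

R is invariant under positive affine rescaling of the pixel values: Q = aP + b with a > 0 gives R = 1, and negating an image flips the sign of R.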
Table 2 compares the accuracy of this method with the methods of Ganguly and Wang Migong. Ganguly extracts feature points from curvature features of the depth image, comparing recognition results for the combination of mean and maximum curvature and the combination of Gaussian and mean curvature; but since features are extracted from curvature information alone, the algorithm's robustness is fragile. Wang Migong's method is based on local geometric information and is therefore less affected by expression and pose changes. Moreover, because nose shapes are similar across people, for database images with small rotation angles both this method and Wang Migong's locate the nose tip almost perfectly. When the face point cloud is rotated close to 90°, the descriptive regions of the nose and inner eye corners are partly missing; the accuracy of all three methods declines and localization errors appear. Since Wang Migong localizes feature points from the local shape information of a point, while this paper extracts feature points by spin image similarity comparison, the methods of Ganguly and Wang Migong are affected more strongly. The experimental results of Table 2 show that, on this database, the proposed method outperforms the other two.
Table 2. Comparison of the three algorithms
Data type | Accuracy (this paper) | Accuracy (Wang Migong) | Accuracy (Ganguly)
Frontal, expression change | 95.37% | 81.25% | 69.95%
Slight rotation | 90.18% | 74.91% | 47.54%
Close to 90° | 85.51% | 63.36% | 18.03%
The above operations constitute the whole process of three-dimensional face feature point localization.

Claims (5)

1. one kind illustrates a three-dimensional face point cloud characteristic point positioning method, comprises the following steps:
S1:Because face cloud data inevitably includes more noise spot during collection, so as to influence whether Face point cloud positioning feature point effect, therefore the pretreatment of cloud data is carried out using the preprocess method of bilateral filtering first;
S2:Method is described by average curvature and Gaussian curvature (HK) this curve form, pretreated cloud data is drawn Separate human face characteristic point candidate region;
S3:Finally to characteristic point candidate regions point generation rotation image (Spin Image);
S4:The prenasale of prior manual markings template and the rotation image of left and right inner eye corner point are represented with linearly dependent coefficient R The rotation image similarity degree formed with characteristic point, so as to realize determining for the prenasale under any attitude and left and right inner eye corner point Position.
2. The three-dimensional face point cloud feature point localization method of claim 1, characterized in that the bilateral noise filtering of the face point cloud data in step S1 is computed as:
V′ ← V + d·n
where V is any vertex in the point cloud, d is the bilateral filtering weight factor, and n is the normal direction of the current vertex V. The main idea of bilateral filtering on point clouds is that the weight of a neighbor point during point cloud denoising is not determined by a single factor, but jointly by its spatial distance to the center point and its depth similarity to the center point, i.e. the projection of the neighbor-to-center distance vector onto the center point's normal. The key of the bilateral filtering process is obtaining the filtering weight factor, so that the vertices of the point cloud model move along their normal directions and the noise is smoothed away. The bilateral filtering weight factor d of the point cloud model is defined as:
d = \frac{\sum_{p_i \in N(v)} W_c(\|v - p_i\|)\, W_s(\langle n, v - p_i \rangle)\, \langle n, v - p_i \rangle}{\sum_{p_i \in N(v)} W_c(\|v - p_i\|)\, W_s(\langle n, v - p_i \rangle)}
where p_i denotes a neighbor point, N(v) the set of all neighborhood points of vertex v, ‖v − p_i‖ the distance from the current neighbor to the center point v, and ⟨n, v − p_i⟩ the depth similarity of neighbor and center, i.e. the projection of their distance vector onto the center point's normal n. The smoothing filter function W_c takes the form of a Gaussian convolution kernel and expresses the spatial similarity between points:
W_c(x) = \exp\left(-\frac{x^2}{2\sigma_c^2}\right)
where σ_c is set to half the distance between the center point and its farthest neighbor. The feature-preserving weight function W_s expresses the depth similarity between points and is defined as:
W_s(y) = \exp\left(-\frac{y^2}{2\sigma_s^2}\right)
where σ_s is the variance of the projections of the center-to-neighbor distances onto the center point's normal.
3. The three-dimensional face point cloud feature point localization method of claim 1, characterized in that H and K in step S2 are computed as:
H = \frac{(1 + I_x^2) I_{yy} - 2 I_x I_y I_{xy} + (1 + I_y^2) I_{xx}}{2 (1 + I_x^2 + I_y^2)^{3/2}}
K = \frac{I_{xx} I_{yy} - I_{xy}^2}{(1 + I_x^2 + I_y^2)^2}
where I_x, I_y are the first partial derivatives along x and y, and I_{xy}, I_{xx}, I_{yy} the corresponding second partials; vertices are classified by the signs of the mean and Gaussian curvature.
4. The three-dimensional face point cloud feature point localization method of claim 3, characterized in that the spin image is a global registration technique for three-dimensional object surfaces: any point on the three-dimensional face point cloud model is taken, its normal vector gives the tangent plane, and the remaining points are mapped with respect to that point; the resulting two-dimensional histogram is the point's spin image. By construction, the spin image is a comprehensive geometric representation of the three-dimensional surface, and it is insensitive to noise. Given a three-dimensional surface O, let p be an oriented point on O and n the normal at p; P is the plane through p perpendicular to n, and the line L passes through p parallel to n. Any other point x maps to the coordinates (α, β), where α is the distance from x to the line through p along n and β is the signed distance from x to the plane P. The mapping of the three-dimensional data into the two-dimensional coordinate system determined by (p, n) is S_O; after the oriented point p is chosen, the relation of any surface point x to p is expressed by (α, β), and the spin image is the two-dimensional image of the (α, β) coordinates of the oriented point's neighborhood. The spin image mapping is:
S_O: \mathbb{R}^3 \to \mathbb{R}^2
S_O(x) \to (\alpha, \beta) = \left(\sqrt{\|x - p\|^2 - (n \cdot (x - p))^2},\; n \cdot (x - p)\right)
5. The three-dimensional face point cloud feature point localization method of claim 1, characterized in that the similarity of two spin images is expressed by the linear correlation coefficient R. Because the linear correlation coefficient is computed over pixel values, the number of overlapping pixels of the two spin images affects it. R denotes the linear correlation coefficient, P and Q the two spin images, N the number of overlapping pixels, and p_i and q_i the values of the i-th pixels of the spin images. R ranges over (−1, 1): the closer to 1, the more similar the two spin images; the closer to −1, the more dissimilar. The threshold used here is 0.9, and R is computed as:
R(P, Q) = \frac{N \sum p_i q_i - \sum p_i \sum q_i}{\sqrt{\left(N \sum p_i^2 - (\sum p_i)^2\right)\left(N \sum q_i^2 - (\sum q_i)^2\right)}}
CN201710018786.3A 2017-01-10 2017-01-10 A three-dimensional face point cloud feature point localization method Pending CN106874850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710018786.3A CN106874850A (en) 2017-01-10 2017-01-10 A three-dimensional face point cloud feature point localization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710018786.3A CN106874850A (en) 2017-01-10 2017-01-10 A three-dimensional face point cloud feature point localization method

Publications (1)

Publication Number Publication Date
CN106874850A true CN106874850A (en) 2017-06-20

Family

ID=59157372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710018786.3A Pending CN106874850A (en) A three-dimensional face point cloud feature point localization method

Country Status (1)

Country Link
CN (1) CN106874850A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021901A (en) * 2017-12-18 2018-05-11 北京小米移动软件有限公司 Image processing method, apparatus and computer-readable storage medium
CN108614894A (en) * 2018-05-10 2018-10-02 西南交通大学 A face recognition database construction method based on maximum spanning tree
CN109934129A (en) * 2019-02-27 2019-06-25 嘉兴学院 A face feature point localization method and device, computer equipment and storage medium
CN110211129A (en) * 2019-05-17 2019-09-06 西安财经学院 Low-coverage point cloud registration algorithm based on region segmentation
CN110321910A (en) * 2018-03-29 2019-10-11 中国科学院深圳先进技术研究院 Point cloud-oriented feature extraction method, device and equipment
CN111177290A (en) * 2019-12-18 2020-05-19 浙江欣奕华智能科技有限公司 Method and device for evaluating accuracy of three-dimensional map
CN111428565A (en) * 2020-02-25 2020-07-17 北京理工大学 Point cloud identification point positioning method and device based on deep learning
CN111460937A (en) * 2020-03-19 2020-07-28 深圳市新镜介网络有限公司 Face feature point positioning method and device, terminal equipment and storage medium
CN112101229A (en) * 2020-09-16 2020-12-18 云南师范大学 Point cloud data feature point extraction method and device, computer equipment and storage medium
CN112435166A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Unmanned aerial vehicle flight strip splicing method and equipment, storage medium and processing device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976359A (en) * 2010-09-26 2011-02-16 浙江大学 Method for automatically positioning characteristic points of three-dimensional face
CN102930534A (en) * 2012-10-15 2013-02-13 北京工业大学 Method for automatically positioning acupuncture points on back of human body
CN104091162A (en) * 2014-07-17 2014-10-08 东南大学 Three-dimensional face recognition method based on feature points

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976359A (en) * 2010-09-26 2011-02-16 浙江大学 Method for automatically positioning characteristic points of three-dimensional face
CN102930534A (en) * 2012-10-15 2013-02-13 北京工业大学 Method for automatically positioning acupuncture points on back of human body
CN104091162A (en) * 2014-07-17 2014-10-08 东南大学 Three-dimensional face recognition method based on feature points

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PANAGIOTIS PERAKIS et al.: "3D Facial Landmark Detection under Large Yaw and Expression Variations", IEEE Transactions on Pattern Analysis and Machine Intelligence *
YUAN Hua et al.: "Bilateral filtering point cloud denoising algorithm based on noise classification", Journal of Computer Applications *
HAN Xuan: "Research on license plate location algorithms based on digital image processing", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021901A (en) * 2017-12-18 2018-05-11 北京小米移动软件有限公司 Image processing method, apparatus and computer-readable storage medium
CN110321910A (en) * 2018-03-29 2019-10-11 中国科学院深圳先进技术研究院 Point cloud-oriented feature extraction method, device and equipment
CN110321910B (en) * 2018-03-29 2021-05-28 中国科学院深圳先进技术研究院 Point cloud-oriented feature extraction method, device and equipment
CN108614894B (en) * 2018-05-10 2021-07-02 西南交通大学 Face recognition database construction method based on maximum spanning tree
CN108614894A (en) * 2018-05-10 2018-10-02 西南交通大学 A face recognition database construction method based on maximum spanning tree
CN109934129A (en) * 2019-02-27 2019-06-25 嘉兴学院 A face feature point localization method and device, computer equipment and storage medium
CN110211129A (en) * 2019-05-17 2019-09-06 西安财经学院 Low-coverage point cloud registration algorithm based on region segmentation
CN110211129B (en) * 2019-05-17 2021-05-11 西安财经学院 Low-coverage point cloud registration algorithm based on region segmentation
CN111177290A (en) * 2019-12-18 2020-05-19 浙江欣奕华智能科技有限公司 Method and device for evaluating accuracy of three-dimensional map
CN111177290B (en) * 2019-12-18 2023-06-02 浙江欣奕华智能科技有限公司 Evaluation method and device for accuracy of three-dimensional map
CN111428565A (en) * 2020-02-25 2020-07-17 北京理工大学 Point cloud identification point positioning method and device based on deep learning
CN111428565B (en) * 2020-02-25 2023-11-14 北京理工大学 Point cloud identification point positioning method and device based on deep learning
CN111460937A (en) * 2020-03-19 2020-07-28 深圳市新镜介网络有限公司 Face feature point positioning method and device, terminal equipment and storage medium
CN111460937B (en) * 2020-03-19 2023-12-19 深圳市新镜介网络有限公司 Facial feature point positioning method and device, terminal equipment and storage medium
CN112101229B (en) * 2020-09-16 2023-02-24 云南师范大学 Point cloud data feature point extraction method and device, computer equipment and storage medium
CN112101229A (en) * 2020-09-16 2020-12-18 云南师范大学 Point cloud data feature point extraction method and device, computer equipment and storage medium
CN112435166A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Unmanned aerial vehicle flight strip splicing method and equipment, storage medium and processing device

Similar Documents

Publication Publication Date Title
CN106874850A (en) A feature point localization method based on a three-dimensional face point cloud
WO2017219391A1 (en) Face recognition system based on three-dimensional data
CN104091162B (en) Three-dimensional face recognition method based on feature points
CN109408653B (en) Human body hairstyle generation method based on multi-feature retrieval and deformation
Hill et al. Model-based interpretation of 3d medical images.
CN105447441B (en) Face authentication method and device
CN102592136B (en) Three-dimensional human face recognition method based on intermediate frequency information in geometry image
Tang et al. Curvature-augmented tensor voting for shape inference from noisy 3d data
CN103632129A (en) Facial feature point positioning method and device
CN101650777B (en) Corresponding three-dimensional face recognition method based on dense point
Vezzetti et al. Geometrical descriptors for human face morphological analysis and recognition
Yoshizawa et al. Fast, robust, and faithful methods for detecting crest lines on meshes
CN106682575A (en) Human eye point cloud feature location with ELM (Eye Landmark Model) algorithm
Cai et al. Accurate eye center localization via hierarchical adaptive convolution
CN110544310A (en) feature analysis method of three-dimensional point cloud under hyperbolic conformal mapping
Fan et al. 3D facial landmark localization using texture regression via conformal mapping
CN109886091B (en) Three-dimensional facial expression recognition method based on weighted local rotation mode
Lengagne et al. From 2D images to 3D face geometry
CN106778491A (en) Method and device for acquiring face 3D feature information
CN116884045B (en) Identity recognition method, identity recognition device, computer equipment and storage medium
CN109074471B (en) Iris region segmentation method and device based on active appearance model
Vezzetti et al. Application of geometry to rgb images for facial landmark localisation-a preliminary approach
CN108694348B (en) Tracking registration method and device based on natural features
Zheng et al. Skull similarity comparison based on SPCA
Guerrero et al. Landmark localisation in brain MR images using feature point descriptors based on 3D local self-similarities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170620