CN106407985A - Three-dimensional human head point cloud feature extraction method and device thereof - Google Patents


Info

Publication number
CN106407985A
CN106407985A (application CN201610741174.2A)
Authority
CN
China
Prior art keywords
point
value
normal
fpfh
human head
Prior art date
Legal status
Granted
Application number
CN201610741174.2A
Other languages
Chinese (zh)
Other versions
CN106407985B (en)
Inventor
聂建华
刘小楠
刘昌进
李林
甘彤
郑根贤
强辉
王年松
Current Assignee
CETC 38 Research Institute
Original Assignee
CETC 38 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 38 Research Institute filed Critical CETC 38 Research Institute
Priority to CN201610741174.2A priority Critical patent/CN106407985B/en
Publication of CN106407985A publication Critical patent/CN106407985A/en
Application granted granted Critical
Publication of CN106407985B publication Critical patent/CN106407985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a three-dimensional human head point cloud feature extraction method and a device thereof. The method comprises the following steps: 1, point cloud data preprocessing, including down-sampling, outlier removal, and noise removal; 2, a training stage, in which the normal and the FPFH (fast point feature histogram) value of each point are calculated and a K-dimensional tree is built; 3, the FPFH values of three typical feature parts are extracted from a given model; 4, the K-dimensional tree built in step 2 is searched to obtain a candidate set; and 5, the principal curvatures and a shape response factor are calculated for each point of the candidate set, and the feature points are finally determined according to the profiles of the three typical features and the position of the nose tip. The method overcomes the sensitivity to illumination and viewing angle of prior-art two-dimensional feature extraction, requires no training on large-scale data, and achieves fast and accurate extraction of three typical human head features: from three-dimensional human head point cloud data it extracts the nose tip and the left and right ear canals.

Description

Three-dimensional human head point cloud feature extraction method and device
Technical field
The present invention relates to the fields of image processing, computer graphics, and computer vision, and in particular to a three-dimensional human head point cloud feature extraction method with high measurement accuracy and high measurement speed that analyzes and extracts features of points based on fast point feature histogram values.
Background technology
In recent years, computer vision has developed rapidly and is widely applied in industrial robots, driverless cars, medical image analysis and topological model construction, virtual reality and augmented reality, and artificial intelligence. In many of these applications, facial feature extraction is a critical step, for example in face recognition, pose estimation, modeling, and tracking.
Face recognition algorithms based on two-dimensional images all work on frontal portraits. When the pitch angle, illumination, pose, makeup, or age in the two-dimensional image changes, the performance of two-dimensional face recognition algorithms drops significantly, and recognition often fails. In recent years, with advances in technology, acquiring three-dimensional data has become increasingly feasible, and because three-dimensional faces are insensitive to illumination and pose, three-dimensional face information has drawn the attention of more and more researchers.
As three-dimensional face processing techniques become popular, feature extraction, a key step in these techniques, grows ever more important. Among the features of the human head, the nose and ears are least affected by changes such as expression, makeup, and age, while their three-dimensional geometric information is prominent and rich; yet there is still no mature three-dimensional measurement technique that can quickly and accurately extract the human nose and ears.
Summary of the invention
The object of the present invention is to provide a three-dimensional human head point cloud feature extraction method and device with high measurement accuracy and high measurement speed that analyze and extract features of points based on fast point feature histogram values. The invention overcomes the sensitivity of conventional two-dimensional feature extraction to illumination and viewing angle, requires no training on massive data, and achieves fast and accurate extraction of three typical features of the human head. Using the invention, the typical features of the nose tip and the left and right ear canals can be extracted from three-dimensional human head point cloud data.
To achieve the above object, the present invention adopts the following technical solution: a three-dimensional human head point cloud feature extraction method, comprising:
Step 1, data preprocessing;
Step 2, training stage: calculate the normal of each point in the preprocessed point cloud data and from it the FPFH value, and build a K-dimensional tree from the FPFH values using the fast library for approximate nearest neighbors, where K is the number of points in the preprocessed point cloud data;
Step 3, manually select typical feature points from a given model; the given model may be a common face shape with known data; using the same normal and FPFH calculation parameters as set in step 2, calculate the FPFH values of the typical feature points;
Step 4, query stage: the chi-square value in statistics measures the degree of deviation between observed and theoretical values; take the typical feature points extracted from the given model as the theoretical values and the points in the trained K-dimensional tree of the model to be processed as the observed values; after calculating the chi-square values one by one, apply a threshold to obtain a candidate set composed of several similar points;
Step 5, determine the feature points based on a shape response factor: calculate the principal curvatures of each point in the candidate set and from them the shape response factor.
The present invention also provides a three-dimensional human head point cloud feature extraction device that applies the above three-dimensional human head point cloud feature extraction method. The device comprises:
a preprocessing module for preprocessing the point cloud data;
a training module for calculating the normal of each point in the preprocessed point cloud data and from it the FPFH value, and for building a K-dimensional tree from the FPFH values using the fast library for approximate nearest neighbors, where K is the number of points in the preprocessed point cloud data;
an FPFH value calculation module for typical feature points, used to manually select typical feature points from a given model (which may be a common face shape with known data) and, using the same normal and FPFH calculation parameters as set in step 2, calculate the FPFH values of the typical feature points;
a candidate set generation module, which uses the chi-square value in statistics to measure the deviation between observed and theoretical values: it takes the typical feature points extracted from the given model as the theoretical values and the points in the trained K-dimensional tree of the model to be processed as the observed values, and after calculating the chi-square values one by one it applies a threshold to obtain a candidate set composed of several similar points;
a feature point determination module for determining the feature points based on a shape response factor: it calculates the principal curvatures of each point in the candidate set and from them the shape response factor.
Compared with the prior art, the beneficial effects of the present invention are:
1. A brand-new three-dimensional human head point cloud feature extraction method is proposed, laying a foundation for nose and ear localization, recognition, and feature extraction;
2. By using FPFH values, feature points of an unknown human head model can be extracted in real time, improving the speed of three-dimensional measurement;
3. By using the shape response factor of the three-dimensional surface combined with restrictive conditions such as the direction of the cross product of normals, the nose tip and ear canal points are located accurately and the left and right ear canal points are distinguished, improving the positioning precision and thereby achieving high-speed, high-accuracy feature extraction from three-dimensional human head point clouds.
Brief description of the drawings
Fig. 1 is a flow chart of the algorithm of the three-dimensional human head point cloud feature extraction method of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
The three-dimensional human head point cloud feature extraction method of the present invention may be implemented as a three-dimensional human head point cloud feature extraction device, for example designed as software and deployed as an app on an electronic device. The three-dimensional human head point cloud feature extraction device mainly comprises a preprocessing module, a training module, an FPFH value calculation module for typical feature points, a candidate set generation module, and a feature point determination module. The three-dimensional human head point cloud feature extraction method of the present invention mainly comprises five steps.
Step 1, data preprocessing.
This step is executed by the preprocessing module. The raw data may be large, so down-sampling is needed to speed up processing; noise data must be removed; and outliers caused, for example, by occlusion at certain shooting angles must be removed.
Data preprocessing is performed in three steps. First, down-sampling: the centroid of all points in each voxel (i.e., each three-dimensional cube) of the point cloud data approximately replaces the other points in the voxel, reducing the point density of the point cloud. Second, noise removal: a value range is set, and the noise data below the neck in the point cloud is deleted. Third, outlier removal: if the number of neighbors of a point within a given search radius is below a given threshold, the point is judged an outlier and deleted.
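As a minimal sketch of the down-sampling and outlier-removal steps above (the function names, voxel size, radius, and threshold values are illustrative assumptions, not from the patent):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in one voxel (3-D cube) with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((counts.size, 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

def remove_outliers(points, radius, min_neighbors):
    """Delete points with fewer than min_neighbors other points within radius."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]
```

The brute-force distance matrix in `remove_outliers` is O(n²) and only suitable for small clouds; in practice a k-d tree (as the patent itself uses via FLANN) would back the neighbor search.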
Step 2, training stage: calculate the normal of each point in the preprocessed point cloud data and from it the FPFH value, and build a K-dimensional tree from the FPFH values using the fast library for approximate nearest neighbors, where K is the number of points in the preprocessed point cloud data.
This step is executed by the training module. The normal of each point is calculated and from it the FPFH value, and a K-dimensional tree is built from the FPFH values using the fast library for approximate nearest neighbors (FLANN).
Calculating the normal of each point in the preprocessed point cloud data is equivalent to calculating the tangent-plane normal at every point, which reduces to decomposing the covariance matrix C in formula (1); the eigenvector corresponding to the smallest eigenvalue of C serves as the normal of each point:
C = (1/k) Σ_{i=1}^{k} (P_i − P̄)(P_i − P̄)^T formula (1)
All normals n_i are oriented consistently toward the viewpoint by formula (2):
n_i · (v_p − P_i) > 0 formula (2)
Here P_i is the i-th point of the preprocessed point cloud data, k is the number of neighbors of P_i, P̄ is the three-dimensional centroid of all neighbors of P_i, and v_p is the viewpoint.
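This normal estimation can be sketched compactly (eigendecomposition of the neighborhood covariance, then viewpoint-consistent orientation per formulas (1) and (2)); the function name and the synthetic planar neighborhood in the check are illustrative assumptions:

```python
import numpy as np

def estimate_normal(point, neighbors, viewpoint):
    """Normal of `point` = eigenvector of the smallest eigenvalue of the
    neighborhood covariance matrix C, flipped so that n . (v_p - P_i) > 0."""
    centroid = neighbors.mean(axis=0)
    diffs = neighbors - centroid
    C = diffs.T @ diffs / len(neighbors)          # covariance matrix of formula (1)
    _, eigvecs = np.linalg.eigh(C)                # eigenvalues in ascending order
    normal = eigvecs[:, 0]                        # eigenvector of smallest eigenvalue
    if np.dot(normal, viewpoint - point) < 0:     # orientation rule of formula (2)
        normal = -normal
    return normal
```

For a neighborhood lying in the z = 0 plane and a viewpoint above it, this returns the +z axis, which matches the tangent-plane interpretation.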
After the normals are obtained, the FPFH value is calculated by the following procedure.
(1) For each query point P_t in the preprocessed point cloud data, calculate by formulas (3), (4), (5) the uvw coordinate frame between P_t and each of its neighbors P_s:
u = n_s formula (3)
v = u × (P_t − P_s)/d formula (4)
w = u × v formula (5)
where n_s is the normal of the neighbor P_s of the query point P_t and d is the straight-line distance between P_t and P_s.
(2) Calculate by formulas (6), (7), (8) the set of deviation angles α, φ, θ between the normal n_t of the query point P_t and the normal n_s of the neighbor P_s; the result is called the simplified point feature histogram (SPFH):
α = v · n_t formula (6)
φ = u · (P_t − P_s)/d formula (7)
θ = arctan(w · n_t, u · n_t) formula (8)
(3) Re-determine the k-neighborhood of each point and, using the SPFH values of the neighbors P_k, determine the FPFH value FPFH(P_t) of the query point P_t by formula (9):
FPFH(P_t) = SPFH(P_t) + (1/k) Σ_{i=1}^{k} (1/ω_i)·SPFH(P_i) formula (9)
where ω_i is the distance between the query point P_t and its neighbor P_i.
After the FPFH value of each point in the preprocessed point cloud data is obtained, the K-dimensional tree is built with FLANN.
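The Darboux frame and angle set of formulas (3) through (8) can be sketched for one (query point, neighbor) pair as follows; the function name is an assumption, and the histogram binning and the weighted sum of formula (9) are omitted:

```python
import numpy as np

def spfh_angles(p_t, n_t, p_s, n_s):
    """Deviation angles (alpha, phi, theta) between the normals of a query
    point P_t and a neighbor P_s, expressed in the uvw Darboux frame."""
    diff = p_t - p_s
    d = np.linalg.norm(diff)                      # straight-line distance
    u = n_s                                       # formula (3)
    v = np.cross(u, diff / d)                     # formula (4)
    w = np.cross(u, v)                            # formula (5)
    alpha = np.dot(v, n_t)                        # formula (6)
    phi = np.dot(u, diff / d)                     # formula (7)
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))  # formula (8)
    return alpha, phi, theta
```

For two points on a plane with parallel normals, all three angles vanish, which is a convenient sanity check.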
Step 3, manually select typical feature points from a given model; the given model may be a common face shape with known data; using the same normal and FPFH calculation parameters as set in step 2, calculate the FPFH values of the typical feature points.
This step is executed by the FPFH value calculation module for typical feature points. In this embodiment, the typical feature points are the nose-tip typical feature point, the left ear canal typical feature point, and the right ear canal typical feature point. The three typical feature points (nose tip, left ear canal, right ear canal) are manually selected from the given model; after the same calculation parameters as in the two preceding steps are set, the FPFH values at these points are calculated. A common face shape may be chosen as the given model according to ethnic group.
Step 4, query stage: the chi-square value in statistics measures the degree of deviation between observed and theoretical values; take the typical feature points extracted from the given model as the theoretical values and the points in the trained K-dimensional tree of the model to be processed as the observed values; after calculating the chi-square values one by one, apply a threshold to obtain a candidate set composed of several similar points.
This step is executed by the candidate set generation module. From the trained K-dimensional tree of the model to be processed, the candidate set of similar points is obtained by calculating chi-square values. The chi-square value of step 4 measures the deviation between observed and theoretical values and is calculated by formulas (10), (11):
x² = Σ_i (P_i − E_i)² / E_i formula (10)
The typical feature point extracted from the given model is taken as the theoretical value Q_i, and the point P_i in the trained K-dimensional tree of the model to be processed as the observed value; E_i is the expected value corresponding to P_i, given by formula (11). After the chi-square values are calculated one by one, a threshold yields the candidate set composed of several similar points.
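A sketch of this candidate-set search (formula (10)); here each FPFH value is treated as a histogram vector, the template histogram of the typical feature point supplies the expected values E_i, and the threshold value is an illustrative assumption:

```python
import numpy as np

def chi_square(observed, expected, eps=1e-12):
    """Chi-square deviation of an observed FPFH histogram from the expected
    (template) histogram: sum over bins of (O_i - E_i)^2 / E_i."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum((observed - expected) ** 2 / (expected + eps)))

def candidate_set(histograms, template, threshold):
    """Indices of points whose chi-square deviation from the template FPFH
    stays below the threshold."""
    scores = np.array([chi_square(h, template) for h in histograms])
    return np.nonzero(scores < threshold)[0]
```

In the patent's pipeline the histograms would come out of the FLANN-backed k-d tree rather than a plain list, but the thresholding logic is the same.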
Step 5, determine the feature points based on a shape response factor: calculate the principal curvatures of each point in the candidate set and from them the shape response factor.
This step is executed by the feature point determination module. The nose-tip typical feature point finally yields a nose-tip candidate set, and the left and right ear canal typical feature points finally yield left and right ear canal candidate sets. Each candidate set corresponds to a surface with the following property: the more convex the surface, as at the nose tip, the larger the shape response factor; the more concave the surface, as at bowl-like regions or points inside the ear canal, the smaller the shape response factor. Accordingly, the point with the largest shape response factor in the nose-tip candidate set is selected as the nose tip, and the points with the smallest shape response factor in the left and right ear canal candidate sets are selected as the two ear canal points; the left and right ear canal points are then distinguished by the direction of the cross product of the normal at the ear canal point and the normal at the nose tip.
The normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal, respectively. By calculating the direction of the cross product of the normal at an ear canal point and the normal at the nose tip and applying the right-hand rule, a thumb pointing down indicates the right ear canal point and a thumb pointing up the left ear canal point.
Each candidate set corresponds to a surface; the shape response factor is calculated by the following steps:
1. calculate the Gaussian curvature k_G and the mean curvature k_H at a discrete point on the surface;
2. calculate by formulas (12), (13) the principal curvatures k_1 and k_2 at this discrete point, where k_1 ≥ k_2:
k_1 = k_H + √(k_H² − k_G) formula (12)
k_2 = k_H − √(k_H² − k_G) formula (13)
3. calculate the shape response factor S of this discrete point by formula (14).
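Formulas (12) and (13) recover the principal curvatures from the mean curvature k_H and Gaussian curvature k_G. Formula (14) itself is not reproduced in this text, so the sketch below substitutes the standard shape index as an assumed form of the response factor; it has the stated behavior, approaching +1 on convex bumps such as the nose tip and −1 in concave pits such as the ear canal:

```python
import numpy as np

def principal_curvatures(k_h, k_g):
    """k1 = H + sqrt(H^2 - K), k2 = H - sqrt(H^2 - K) (formulas (12), (13)),
    so that k1 >= k2; k_h is the mean curvature, k_g the Gaussian curvature."""
    root = np.sqrt(max(k_h ** 2 - k_g, 0.0))
    return k_h + root, k_h - root

def shape_response(k1, k2):
    """Shape-index-style response factor (assumed stand-in for formula (14)):
    (2/pi) * arctan((k1 + k2) / (k1 - k2)), robust at k1 == k2 via arctan2."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```

With k1 >= k2 the denominator is non-negative, so the sign of the response comes entirely from k1 + k2: positive for convex regions, negative for concave ones.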
The three-dimensional human head point cloud feature extraction method of the present invention is described further below with reference to Fig. 1.
The first step of the three-dimensional human head point cloud feature extraction method of this embodiment is data preprocessing, performed in three steps. First, because the raw data volume can be relatively large, the centroid of all points in each voxel (i.e., each three-dimensional cube) approximately replaces the other points in the voxel for down-sampling, reducing the point density. Second, the raw data may contain the region below the neck and other point cloud data, so a value range is set and the noise data below the neck and other disturbances in the scanned point cloud are deleted. Finally, outliers caused by the precision of the point cloud acquisition device itself and by noise are deleted when the number of neighbors within a given search radius falls below a given threshold.
The second step of the three-dimensional human head point cloud feature extraction method of this embodiment is training, performed in two steps. First, the normals are calculated, which is equivalent to calculating the tangent-plane normal at every point in the point cloud; this problem is further reduced to decomposing the covariance matrix C in formula (1), and the eigenvector corresponding to the smallest eigenvalue of C serves as the normal of every point P_i in the point cloud, where k is the number of neighbors of P_i and P̄ is the three-dimensional centroid of all neighbors. All normals are then oriented consistently toward the viewpoint by formula (2), where v_p is the viewpoint.
After the normals are obtained, the FPFH value is calculated by the following procedure:
(1) for each query point P_t, calculate by formulas (3), (4), (5) the uvw coordinate frame between this point and its neighbor P_s;
(2) calculate by formulas (6), (7), (8) the set of deviation angles α, φ, θ between the normal n_t of P_t and the normal n_s of P_s; the result is called the SPFH (simplified point feature histogram);
(3) re-determine the k-neighborhood of each point and, using the SPFH values of the neighbors P_k, determine the FPFH of P_t by formula (9).
After the FPFH value of each point in the point cloud data is obtained, the K-dimensional tree is built with FLANN.
The third step of the three-dimensional human head point cloud feature extraction method of this embodiment is to manually select the three typical feature points (the nose tip, the left ear canal, and the right ear canal) from the given model. The given model may be a common face shape with known data; using the same normal and FPFH calculation parameters as set in step 2, the FPFH values of the three typical feature points are calculated.
The fourth step of the three-dimensional human head point cloud feature extraction method of this embodiment is the query. The chi-square value in statistics measures the degree of deviation between observed and theoretical values and is calculated by formulas (10), (11); this step is completed using the chi-square value. The typical feature point extracted from the given model is taken as the theoretical value Q_i and the point P_i in the trained K-dimensional tree of the model to be processed as the observed value; after the chi-square values are calculated one by one, a threshold yields the candidate set composed of several similar points.
The fifth step of the three-dimensional human head point cloud feature extraction method of this embodiment determines the final nose tip by the shape response factor and the final left and right ear canal points by the direction of the cross product of the nose-tip normal and the ear canal normal. The shape response factor is calculated by the following procedure:
(1) calculate the Gaussian curvature k_G and the mean curvature k_H at a discrete point on the surface;
(2) calculate by formulas (12), (13) the principal curvatures k_1 and k_2 at this point, where k_1 ≥ k_2;
(3) calculate the shape response factor S of this point by formula (14).
The more convex the surface, as at the nose tip, the larger the shape response factor; the more concave the surface, as at bowl-like regions or points inside the ear canal, the smaller the shape response factor. Accordingly, the point with the largest shape response factor in the nose-tip candidate set is selected as the nose tip, and the point with the smallest shape response factor in each of the left and right ear canal candidate sets as an ear canal point. Because the normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal, respectively, the direction of the cross product of the normal at an ear canal point and the normal at the nose tip is calculated; by the right-hand rule, a thumb pointing down indicates the right ear canal point and a thumb pointing up the left ear canal point.
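The left/right disambiguation described above can be sketched as a sign test on the cross product of the two normals; the coordinate convention here (head up along +y, face toward +z) is an illustrative assumption, not from the patent:

```python
import numpy as np

UP = np.array([0.0, 1.0, 0.0])   # assumed head-up axis

def classify_ear(nose_normal, ear_normal, up=UP):
    """Right-hand rule on ear_normal x nose_normal: if the cross product points
    along the up axis (thumb up), label the point as the left ear canal;
    if it points against it (thumb down), as the right ear canal."""
    cross = np.cross(ear_normal, nose_normal)
    return "left" if np.dot(cross, up) > 0 else "right"
```

The key property is only that the two ear canal normals, lying on opposite sides of the nose-tip normal, produce cross products of opposite sign along the up axis, so they always receive opposite labels.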
The above is a detailed description of the present invention with reference to specific preferred embodiments, but the embodiments of the present invention are not limited to these descriptions. Those skilled in the art may make simple deductions or substitutions without departing from the concept of the present invention, and all such variations shall be regarded as falling within the protection scope of the invention as determined by the submitted claims.

Claims (10)

1. A three-dimensional human head point cloud feature extraction method, characterized in that it comprises:
step 1, data preprocessing;
step 2, a training stage: calculating the normal of each point in the preprocessed point cloud data and from it the FPFH value, and building a K-dimensional tree from the FPFH values using the fast library for approximate nearest neighbors, where K is the number of points in the preprocessed point cloud data;
step 3, manually selecting typical feature points from a given model, the given model being selectable as a common face shape with known data, and calculating the FPFH values of the typical feature points using the same normal and FPFH calculation parameters as set in step 2;
step 4, a query stage: the chi-square value in statistics measuring the degree of deviation between observed and theoretical values, taking the typical feature points extracted from the given model as the theoretical values and the points in the trained K-dimensional tree of the model to be processed as the observed values, and after calculating the chi-square values one by one, applying a threshold to obtain a candidate set composed of several similar points;
step 5, determining the feature points based on a shape response factor: calculating the principal curvatures of each point in the candidate set and from them the shape response factor.
2. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that the point cloud data preprocessing is performed in three steps: first, the centroid of all points in each voxel of the point cloud data approximately replaces the other points in the voxel for down-sampling, reducing the point density of the point cloud data; second, a value range is set and the noise data below the neck in the point cloud data is deleted; finally, points whose number of neighbors within a given search radius is below a given threshold are deleted as outliers.
3. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that calculating the normal of each point in the preprocessed point cloud data is equivalent to calculating the tangent-plane normal at every point in the preprocessed point cloud data, which reduces to decomposing the covariance matrix C in formula (1), the eigenvector corresponding to the smallest eigenvalue of C serving as the normal of every point in the preprocessed point cloud data; all normals are oriented consistently toward the viewpoint by formula (2); wherein P_i is the i-th point of the preprocessed point cloud data, k is the number of neighbors of P_i, P̄ is the three-dimensional centroid of all neighbors of P_i, and v_p is the viewpoint.
4. The three-dimensional human head point cloud feature extraction method according to claim 3, characterized in that after the normals are obtained, the FPFH value is calculated by the following procedure:
first, for each query point P_t in the preprocessed point cloud data, calculating by formulas (3), (4), (5) the uvw coordinate frame between the query point P_t and its neighbor P_s:
u = n_s formula (3)
v = u × (P_t − P_s)/d formula (4)
w = u × v formula (5)
wherein n_s is the normal of the neighbor P_s of the query point P_t;
second, calculating by formulas (6), (7), (8) the set of deviation angles α, φ, θ between the normal n_t of the query point P_t and the normal n_s of the neighbor P_s, the result being called the simplified point feature histogram SPFH:
α = v · n_t formula (6)
φ = u · (P_t − P_s)/d formula (7)
θ = arctan(w · n_t, u · n_t) formula (8)
wherein d is the straight-line distance between the query point P_t and the neighbor P_s;
third, re-determining the k-neighborhood of each point and, using the SPFH values of the neighbors P_k, determining the FPFH value FPFH(P_t) of the query point P_t by formula (9):
FPFH(P_t) = SPFH(P_t) + (1/k) Σ_{i=1}^{k} (1/ω_i)·SPFH(P_i) formula (9)
wherein, after the FPFH value of each point in the preprocessed point cloud data is obtained, the K-dimensional tree is built with FLANN.
5. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that the chi-square value x² is calculated by formulas (10), (11):
x² = Σ_i (P_i − E_i)² / E_i formula (10)
wherein Q_i is the theoretical value, P_i is a point in the trained K-dimensional tree of the model to be processed, and E_i is the expected value corresponding to P_i, given by formula (11).
6. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that the typical feature points comprise a nose-tip typical feature point, a left ear canal typical feature point, and a right ear canal typical feature point.
7. The three-dimensional human head point cloud feature extraction method according to claim 6, characterized in that the nose-tip typical feature point finally yields a nose-tip candidate set, and the left and right ear canal typical feature points finally yield left and right ear canal candidate sets; each candidate set corresponds to a surface with the property that the more convex the surface, as at the nose tip, the larger the shape response factor, and the more concave the surface, as at bowl-like regions or points inside the ear canal, the smaller the shape response factor; accordingly, the point with the largest shape response factor in the nose-tip candidate set is selected as the nose tip, the points with the smallest shape response factor in the left and right ear canal candidate sets are selected as the two ear canal points, and the left and right ear canal points are distinguished by the direction of the cross product of the normal at the ear canal point and the normal at the nose tip.
8. The three-dimensional human head point cloud feature extraction method according to claim 7, characterized in that the normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal, respectively; by calculating the direction of the cross product of the normal at an ear canal point and the normal at the nose tip and applying the right-hand rule, a thumb pointing down indicates the right ear canal point and a thumb pointing up the left ear canal point.
9. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that each candidate set corresponds to a surface and the shape response factor is calculated by the following steps:
1. calculating the Gaussian curvature k_G and the mean curvature k_H at a discrete point on the surface;
2. calculating by formulas (12), (13) the principal curvatures k_1 and k_2 at the discrete point, where k_1 ≥ k_2:
k_1 = k_H + √(k_H² − k_G) formula (12)
k_2 = k_H − √(k_H² − k_G) formula (13)
3. calculating the shape response factor S of the discrete point by formula (14).
10. A three-dimensional human head point cloud feature extraction device, applying the three-dimensional human head point cloud feature extraction method according to any one of claims 1 to 9, characterized in that the device comprises:
a preprocessing module, used for preprocessing the point cloud data;
a training module, used for calculating the normal of each point in the preprocessed point cloud data and then calculating the FPFH values, and for building data in a k-d tree structure from the FPFH values via the fast library for approximate nearest neighbors (FLANN), where K is the number of points in the preprocessed point cloud data;
an FPFH calculation module for characteristic feature points, used for manually selecting characteristic feature points from a given model, where the given model may be selected as a typical face shape from known data; the FPFH values of the characteristic feature points are calculated with the same normal and FPFH calculation parameters as set in step 2;
a candidate set generation module: the chi-square value in statistics represents the degree of deviation between an observed value and a theoretical value; taking the characteristic feature points extracted from the given model as theoretical values and the points to be extracted from the trained k-d tree structure as observed values, the chi-square values are calculated one by one, and a candidate set consisting of several similar points is obtained by setting a certain threshold;
a feature point determination module, used for determining the feature points based on the shape response factor, by calculating the principal curvatures of each point in the candidate set and then the shape response factor.
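The chi-square screening in the candidate set generation module can be sketched as follows. This is a minimal illustration that treats FPFH values as plain histograms; the function names, the `eps` guard, and the threshold are assumptions, not the patent's implementation (which queries FPFH values stored in a FLANN k-d tree):

```python
import numpy as np

def chi_square(observed, expected, eps=1e-12):
    """Chi-square statistic sum((O - E)^2 / E) between an observed FPFH
    histogram and the template (theoretical) FPFH histogram; eps guards
    against division by zero in empty bins."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum((observed - expected) ** 2 / (expected + eps)))

def candidate_set(fpfh_all, fpfh_template, threshold):
    """Indices of points whose FPFH deviates from the template by less
    than `threshold` -- the candidate set of similar points."""
    return [i for i, h in enumerate(fpfh_all)
            if chi_square(h, fpfh_template) < threshold]

# Toy example: the first and third histograms deviate little from the
# template, the second deviates strongly and is excluded.
template = [1, 2, 3]
points = [[1, 2, 3], [10, 2, 3], [1, 2, 4]]
print(candidate_set(points, template, threshold=1.0))  # → [0, 2]
```

In practice the FPFH descriptor (e.g. PCL's 33-bin signature) would replace these toy histograms, and the threshold would be tuned on the training data.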
CN201610741174.2A 2016-08-26 2016-08-26 A kind of three-dimensional human head point cloud feature extracting method and its device Active CN106407985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610741174.2A CN106407985B (en) 2016-08-26 2016-08-26 A kind of three-dimensional human head point cloud feature extracting method and its device

Publications (2)

Publication Number Publication Date
CN106407985A true CN106407985A (en) 2017-02-15
CN106407985B CN106407985B (en) 2019-09-10

Family

ID=58003046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610741174.2A Active CN106407985B (en) 2016-08-26 2016-08-26 A kind of three-dimensional human head point cloud feature extracting method and its device

Country Status (1)

Country Link
CN (1) CN106407985B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011085435A1 (en) * 2010-01-14 2011-07-21 The University Of Sydney Classification process for an extracted object or terrain feature
CN102622776A (en) * 2011-01-31 2012-08-01 微软公司 Three-dimensional environment reconstruction
CN102779358A (en) * 2011-05-11 2012-11-14 达索系统公司 Method for designing a geometrical three-dimensional modeled object
CN103430218A (en) * 2011-03-21 2013-12-04 英特尔公司 Method of augmented makeover with 3d face modeling and landmark alignment

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980878B (en) * 2017-03-29 2020-05-19 深圳大学 Method and device for determining geometric style of three-dimensional model
CN106980878A (en) * 2017-03-29 2017-07-25 深圳大学 The determination method and device of three-dimensional model geometric style
CN108154525A (en) * 2017-11-21 2018-06-12 四川大学 A kind of matched bone fragments joining method of feature based
CN108428219A (en) * 2018-02-28 2018-08-21 华南农业大学 A kind of log diameter measuring method based on three-dimension curved surface
CN108428219B (en) * 2018-02-28 2021-08-31 华南农业大学 Log diameter measuring and calculating method based on three-dimensional curved surface
CN109818924A (en) * 2018-12-21 2019-05-28 深圳科安达电子科技股份有限公司 A kind of device of the login railway dedicated system based on recognition of face
CN110458174A (en) * 2019-06-28 2019-11-15 南京航空航天大学 A kind of unordered accurate extracting method of cloud key feature points
CN110633749A (en) * 2019-09-16 2019-12-31 无锡信捷电气股份有限公司 Three-dimensional point cloud identification method based on improved viewpoint feature histogram
CN110633749B (en) * 2019-09-16 2023-05-02 无锡信捷电气股份有限公司 Three-dimensional point cloud identification method based on improved viewpoint feature histogram
CN111061360A (en) * 2019-11-12 2020-04-24 北京字节跳动网络技术有限公司 Control method, device, medium and electronic equipment based on head action of user
CN111061360B (en) * 2019-11-12 2023-08-22 北京字节跳动网络技术有限公司 Control method and device based on user head motion, medium and electronic equipment
CN111145166A (en) * 2019-12-31 2020-05-12 北京深测科技有限公司 Safety monitoring method and system
CN111145166B (en) * 2019-12-31 2023-09-01 北京深测科技有限公司 Security monitoring method and system
CN112418030A (en) * 2020-11-11 2021-02-26 中国标准化研究院 Head and face model classification method based on three-dimensional point cloud coordinates
CN112418030B (en) * 2020-11-11 2022-05-13 中国标准化研究院 Head and face model classification method based on three-dimensional point cloud coordinates
CN116563561A (en) * 2023-07-06 2023-08-08 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium
CN116563561B (en) * 2023-07-06 2023-11-14 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium

Also Published As

Publication number Publication date
CN106407985B (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN106407985A (en) Three-dimensional human head point cloud feature extraction method and device thereof
Zhang et al. Chinese sign language recognition with adaptive HMM
WO2018107979A1 (en) Multi-pose human face feature point detection method based on cascade regression
CN103246891B (en) A kind of Chinese Sign Language recognition methods based on Kinect
CN110705478A (en) Face tracking method, device, equipment and storage medium
CN107748890A (en) A kind of visual grasping method, apparatus and its readable storage medium storing program for executing based on depth image
CN103455794B (en) A kind of dynamic gesture identification method based on frame integration technology
CN102262724A (en) Object image characteristic points positioning method and object image characteristic points positioning system
Li et al. 3D object recognition and pose estimation from point cloud using stably observed point pair feature
CN109101864A (en) The upper half of human body action identification method returned based on key frame and random forest
CN112148128B (en) Real-time gesture recognition method and device and man-machine interaction system
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN106599785A (en) Method and device for building human body 3D feature identity information database
Wu et al. An intelligent interactive system based on hand gesture recognition algorithm and kinect
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN104732247B (en) A kind of human face characteristic positioning method
CN112749646A (en) Interactive point-reading system based on gesture recognition
Xu et al. Robust hand gesture recognition based on RGB-D Data for natural human–computer interaction
CN112017188A (en) Space non-cooperative target semantic identification and reconstruction method
CN106650628A (en) Fingertip detection method based on three-dimensional K curvature
CN107346207A (en) A kind of dynamic gesture cutting recognition methods based on HMM
CN108256440A (en) A kind of eyebrow image segmentation method and system
CN109784241B (en) Stable palm print image feature enrichment area extraction method
CN109886091A (en) Three-dimensional face expression recognition methods based on Weight part curl mode
CN114120389A (en) Network training and video frame processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant