CN106407985B - A kind of three-dimensional human head point cloud feature extracting method and its device - Google Patents
- Publication number
- CN106407985B (application CN201610741174.2A)
- Authority
- CN
- China
- Prior art keywords
- point
- value
- point cloud
- normal
- fpfh
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses a three-dimensional human head point cloud feature extraction method and device. The method has five steps: 1) data preprocessing, including down-sampling, outlier removal, and noise removal; 2) a training stage, in which the normal and the FPFH (Fast Point Feature Histogram) value of each point are computed and a K-d tree is built; 3) extraction of the FPFH values of three typical feature points manually selected from a given model; 4) a search of the K-d tree built above to obtain candidate sets; 5) computation of the principal curvatures and the shape response factor of each point in the candidate sets, so that the feature points are finally determined from the shapes of the three typical features and their positions relative to the nose. The invention overcomes the sensitivity of conventional two-dimensional feature extraction to illumination and viewing angle, requires no training on massive data, and achieves fast and accurate extraction of three typical feature points of the human head. With the invention, typical feature points such as the nose tip and the left and right ear canal points can be extracted from three-dimensional human head point cloud data.
Description
Technical field
The present invention relates to the fields of image processing, computer graphics, and computer vision, and in particular to a three-dimensional human head point cloud feature extraction method that performs feature analysis and extraction based on Fast Point Feature Histogram (FPFH) values with high measurement accuracy and high measurement speed.
Background technique
In recent years, the field of computer vision has developed rapidly and has found wide application in industrial robotics, driverless vehicles, medical image analysis, topological model construction, virtual and augmented reality, artificial intelligence, and other fields. In many of these applications, facial feature extraction is a vital step, for example in face recognition, pose estimation, modeling, and tracking. Face recognition algorithms based on two-dimensional images generally target frontal portraits; when the pitch angle, illumination, pose, makeup, age, or other conditions of a two-dimensional image change, the performance of two-dimensional face recognition algorithms degrades significantly and recognition often fails. With the advance of technology, acquiring three-dimensional data has become increasingly feasible, and because three-dimensional faces are insensitive to illumination and pose, three-dimensional face information has begun to attract more and more researchers.
With the spread of three-dimensional face processing techniques, feature extraction, as a key step in these techniques, has become increasingly important. Among the tissues of the human head, the nose and ears are least affected by variations in expression, makeup, and age, while their three-dimensional geometric information is the most prominent and abundant; however, there is as yet no mature three-dimensional measurement technique that can quickly and accurately extract the human nose and ears.
Summary of the invention
It is an object of the present invention to provide a three-dimensional human head point cloud feature extraction method, and a corresponding device, that performs feature analysis and extraction based on Fast Point Feature Histogram values with high measurement accuracy and high measurement speed. The invention overcomes the sensitivity of conventional two-dimensional feature extraction to illumination and viewing angle, requires no training on massive data, and achieves fast and accurate extraction of three typical feature points of the human head. With the invention, typical feature points such as the nose tip and the left and right ear canal points can be extracted from three-dimensional human head point cloud data.
To achieve the above object, the present invention adopts the following technical solution: a three-dimensional human head point cloud feature extraction method, comprising:
Step 1, data preprocessing: first, the centroid of all points within each voxel of the point cloud data is used to approximately replace the other points in the voxel, thereby down-sampling the data and reducing the density of the point cloud; second, a value range is set, and the noise data below the neck in the point cloud data are deleted; finally, points whose number of neighboring points within a given search radius is less than a given threshold are deleted as outliers from the point cloud data;
Step 2, training stage: the normal of each point in the preprocessed point cloud data is computed. This is equivalent to computing the tangent-plane normal at each point of the preprocessed point cloud, which is in turn converted into decomposing the covariance matrix C in formula (1); the eigenvector corresponding to the smallest eigenvalue of the covariance matrix C serves as the normal of each point in the preprocessed point cloud data. By formula (2), all normals are made to point consistently toward the viewpoint. Here Pi is the i-th point of the preprocessed point cloud data, k is the number of neighbors of Pi, p̄ is the three-dimensional centroid of all neighbors of Pi, and vp is the viewpoint.
After the normals are obtained, the FPFH value is computed by the following procedure:
First, for each query point Pt in the preprocessed point cloud data, the uvw coordinate frame between the query point Pt and each of its neighborhood points Ps is computed by formulas (3), (4), (5);
u = ns    formula (3)
v = u × (Pt − Ps)/d    formula (4)
w = u × v    formula (5)
where ns is the normal of the neighborhood point Ps of the query point Pt. Second, one group of deviation angles α, φ, θ between the normal nt of the query point Pt and the normal ns of the neighborhood point Ps is computed by formulas (6), (7), (8); this result is known as the Simplified Point Feature Histogram (SPFH);
α = v · nt    formula (6)
φ = u · (Pt − Ps)/d    formula (7)
θ = arctan(w · nt, u · nt)    formula (8)
where d is the straight-line distance between the query point Pt and the neighborhood point Ps;
Third, the k-neighborhood of each point is re-determined, and the SPFH values of the neighboring points Pk are used to determine, by formula (9), the FPFH value FPFH(Pt) of the query point Pt;
after the FPFH value of each point in the preprocessed point cloud data is obtained, the data of a K-dimensional tree (K-d tree) structure is built using FLANN, where K is the number of points in the preprocessed point cloud data;
Step 3: typical feature points are manually selected from a given model; the given model may use a common face shape known from existing data. With the same normal and FPFH calculation parameters as set in Step 2, the FPFH values of the typical feature points are computed;
Step 4, query stage: the chi-square value in statistics indicates the degree of deviation between observed and theoretical values. The typical feature points extracted from the given model are taken as theoretical values, and the points in the K-d tree structure trained from the model to be extracted are taken as observed values; after the chi-square values are computed one by one, a candidate set consisting of several similar points is obtained by setting a suitable threshold. The chi-square value χ² is computed by formulas (10) and (11), where Qi is the theoretical value, Pi is a point in the K-d tree structure of the trained model to be extracted, and Ei is the expected value corresponding to Pi;
Step 5: feature points are determined based on the shape response factor; the principal curvatures of each point in the candidate set are computed, and the shape response factor is then computed from them.
The present invention also provides a three-dimensional human head point cloud feature extraction device applying the above three-dimensional human head point cloud feature extraction method, the device comprising:
a preprocessing module, used for preprocessing the point cloud data;
a training module, used to compute the normal of each point in the preprocessed point cloud data, then compute the FPFH values, and build the data of a K-d tree structure from the FPFH values via the Fast Library for Approximate Nearest Neighbors, where K is the number of points in the preprocessed point cloud data;
a typical-feature-point FPFH computing module, used to manually select typical feature points from a given model, the given model optionally being a common face shape known from existing data, and to compute the FPFH values of the typical feature points with the same normal and FPFH calculation parameters as set in Step 2;
a candidate-set generation module, which uses the statistical chi-square value to indicate the degree of deviation between observed and theoretical values: the typical feature points extracted from the given model are taken as theoretical values, the points in the K-d tree structure trained from the model to be extracted are taken as observed values, the chi-square values are computed one by one, and a candidate set consisting of several similar points is obtained by setting a suitable threshold;
a feature-point determination module, used to determine the feature points based on the shape response factor, computing the principal curvatures of each point in the candidate set and then the shape response factor.
Compared with the prior art, the beneficial effects of the present invention are:
1. a completely new three-dimensional human head point cloud feature extraction method is proposed, laying a foundation for nose and ear localization, recognition, and feature extraction;
2. by using FPFH values, real-time extraction of feature points from an unknown human head model is achieved, improving the speed of three-dimensional measurement;
3. by using constraints such as the shape response factor of the three-dimensional surface combined with the direction of the cross product of normals, the nose tip and ear canal points are accurately located and the left and right ear canal points are distinguished, improving the localization precision and achieving high-speed, high-precision feature extraction of the three-dimensional human head point cloud.
Detailed description of the invention
Fig. 1 is a flow chart of the algorithm of the three-dimensional human head point cloud feature extraction method of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The three-dimensional human head point cloud feature extraction method of the invention may be embodied as a three-dimensional human head point cloud feature extraction device, for example implemented in software and deployed as an application in an electronic device. The three-dimensional human head point cloud feature extraction device mainly comprises a preprocessing module, a training module, a typical-feature-point FPFH computing module, a candidate-set generation module, and a feature-point determination module. The three-dimensional human head point cloud feature extraction method of the invention mainly comprises five steps.
Step 1: data preprocessing.
This step is executed by the preprocessing module. The raw data volume may be large, so down-sampling is needed to speed up processing; noise data need to be removed; and outliers caused by occlusion, shooting angle, and the like need to be removed.
Data preprocessing is executed in three steps: first, the centroid of all points within each voxel of the point cloud data is used to approximately replace the other points in the voxel, down-sampling the data and reducing the density of the point cloud; second, a value range is set, and the noise data below the neck in the point cloud data are deleted; finally, points whose number of neighboring points within a given search radius is less than a given threshold are deleted as outliers from the point cloud data.
The down-sampling in Step 1 refers to using the centroid of all points within each voxel (i.e. three-dimensional cube) to approximately replace the other points in the voxel. Removing noise data refers to deleting the data below the neck in the scanned point cloud by setting a suitable value range. Removing outliers refers to deciding that a point is an outlier, and deleting it, if its number of neighboring points within a given search radius is less than a given threshold.
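The two preprocessing operations above (centroid-based voxel down-sampling and radius-based outlier removal) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the voxel size, search radius, and neighbor threshold are illustrative values, and the neighbor search is brute force for clarity.

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points in each voxel (a cube of side voxel_size)
    by their centroid, reducing the point cloud density."""
    voxels = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / voxel_size) for c in p)
        voxels[key].append(p)
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in voxels.values()]

def remove_outliers(points, radius, min_neighbors):
    """Delete points that have fewer than min_neighbors other points
    within the given search radius (brute force for clarity)."""
    r2 = radius * radius
    def n_neighbors(p):
        return sum(1 for q in points
                   if q is not p and sum((a - b) ** 2 for a, b in zip(p, q)) <= r2)
    return [p for p in points if n_neighbors(p) >= min_neighbors]

# a tight cluster near the origin plus one isolated stray point
cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.0, 0.01, 0.0),
         (0.0, 0.0, 0.01), (5.0, 5.0, 5.0)]
clean = remove_outliers(cloud, radius=1.0, min_neighbors=2)  # stray point dropped
down = voxel_downsample(clean, voxel_size=0.1)               # cluster collapses to one centroid
print(len(clean), len(down))
```

In the patent the order is down-sample first, then denoise; the two operations commute on this toy data and are shown in the order that makes the effect easiest to verify.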
Step 2, training stage: the normal of each point in the preprocessed point cloud data is computed, then the FPFH values are computed, and the data of a K-d tree structure is built from the FPFH values via the Fast Library for Approximate Nearest Neighbors, K being the number of points in the preprocessed point cloud data.
This step is executed by the training module. The normal of each point is computed, the FPFH values are then computed, and a K-d tree is built based on the FPFH values via the Fast Library for Approximate Nearest Neighbors (FLANN).
Computing the normal of each point in the preprocessed point cloud data is equivalent to computing the tangent-plane normal at each point of the preprocessed point cloud, which is converted into decomposing the covariance matrix C in formula (1); the eigenvector corresponding to the smallest eigenvalue of the covariance matrix C serves as the normal of each point in the preprocessed point cloud data. By formula (2), all normals are made to point consistently toward the viewpoint, where Pi is the i-th point of the preprocessed point cloud data, k is the number of neighbors of Pi, p̄ is the three-dimensional centroid of all neighbors of Pi, and vp is the viewpoint.
After the normals are obtained, the FPFH value is computed by the following procedure.
(1) For each query point Pt in the preprocessed point cloud data, the uvw coordinate frame between the query point Pt and each of its neighborhood points Ps is computed by formulas (3), (4), (5);
u = ns    formula (3)
v = u × (Pt − Ps)/d    formula (4)
w = u × v    formula (5)
where ns is the normal of the neighborhood point Ps of the query point Pt.
(2) One group of deviation angles α, φ, θ between the normal nt of the query point Pt and the normal ns of the neighborhood point Ps is computed by formulas (6), (7), (8); this result is known as the Simplified Point Feature Histogram (SPFH);
α = v · nt    formula (6)
φ = u · (Pt − Ps)/d    formula (7)
θ = arctan(w · nt, u · nt)    formula (8)
where d is the straight-line distance between the query point Pt and the neighborhood point Ps.
(3) The k-neighborhood of each point is re-determined, and the SPFH values of the neighboring points Pk are used to determine, by formula (9), the FPFH value FPFH(Pt) of the query point Pt.
After the FPFH value of each point in the preprocessed point cloud data is obtained, the data of a K-d tree structure is built using FLANN.
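Steps (1) and (2) above, the Darboux frame and the deviation-angle triplet for one point pair, can be sketched directly from formulas (3)-(8). This is a toy illustration of the geometry only; the patent's formula (9) then combines SPFH values of the neighborhood into the final FPFH histogram (in the standard FPFH formulation, as a distance-weighted average of neighbor SPFHs added to the point's own SPFH), which is omitted here.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def spfh_angles(ps, ns, pt, nt):
    """Deviation angles between a point pair, following formulas (3)-(8):
    u = ns, v = u x (pt - ps)/d, w = u x v, then
    alpha = v . nt, phi = u . (pt - ps)/d, theta = atan2(w . nt, u . nt)."""
    diff = tuple(b - a for a, b in zip(ps, pt))
    d = math.sqrt(dot(diff, diff))
    u = ns
    v = cross(u, tuple(c / d for c in diff))
    w = cross(u, v)
    alpha = dot(v, nt)
    phi = dot(u, diff) / d
    theta = math.atan2(dot(w, nt), dot(u, nt))
    return alpha, phi, theta

# two points one unit apart along x, both normals along z: the pair is
# coplanar with parallel normals, so all three deviation values vanish
a, p, t = spfh_angles(ps=(0.0, 0.0, 0.0), ns=(0.0, 0.0, 1.0),
                      pt=(1.0, 0.0, 0.0), nt=(0.0, 0.0, 1.0))
print(a, p, t)  # -> 0.0 0.0 0.0
```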
Step 3: typical feature points are manually selected from the given model; the given model may use a common face shape known from existing data. With the same normal and FPFH calculation parameters as set in Step 2, the FPFH values of the typical feature points are computed.
This step is executed by the typical-feature-point FPFH computing module. In the present embodiment, the typical feature points include a nose-tip typical feature point, a left ear canal typical feature point, and a right ear canal typical feature point. The three typical feature points (nose tip, left ear canal, right ear canal) are manually selected from the given model, and after the same calculation parameters as in the two steps above are set, the FPFH values at these points are obtained; a common face shape may be selected as the given model according to the ethnic group. That is, the given model in Step 3 refers to a common face shape known from existing data; after the three typical feature points of the nose tip, left ear canal, and right ear canal are manually selected, the FPFH values of the three typical feature points are computed with the same normal and FPFH calculation parameters as set in Step 2.
Step 4, query stage: the chi-square value in statistics indicates the degree of deviation between observed and theoretical values. The typical feature points extracted from the given model are taken as theoretical values, and the points in the K-d tree structure trained from the model to be extracted are taken as observed values; after the chi-square values are computed one by one, a candidate set consisting of several similar points is obtained by setting a suitable threshold.
This step is executed by the candidate-set generation module. From the K-d tree trained on the model to be extracted, the candidate set consisting of several similar points is obtained by computing chi-square values. The chi-square value in Step 4 indicates the degree of deviation between observed and theoretical values and is computed by formulas (10) and (11). The typical feature points extracted from the given model are taken as theoretical values Qi, the points Pi in the K-d tree structure trained from the model to be extracted are taken as observed values, the chi-square values are computed one by one, and the candidate set consisting of several similar points is obtained by setting a suitable threshold. Ei is the expected value corresponding to Pi.
Step 5: feature points are determined based on the shape response factor; the principal curvatures of each point in the candidate set are computed, and the shape response factor is then computed.
This step is executed by the feature-point determination module. The nose-tip typical feature point finally yields a nose-tip candidate set, and the left and right ear canal typical feature points finally yield left and right ear canal candidate sets. Each candidate set corresponds to a surface with the following property: the more the surface protrudes (as at the nose tip), the larger the shape response factor; the more the surface is concave (as at a bowl shape or a point inside the ear canal), the smaller the shape response factor. According to this property, the point with the largest shape response factor in the nose-tip candidate set is selected as the nose-tip point, the points with the smallest shape response factor in the left and right ear canal candidate sets are selected as the two ear canal points, and the left and right ear canal points are distinguished by computing the direction of the cross product of the normal at the ear canal point and the normal at the nose-tip point.
The normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal respectively; by computing the direction of the cross product of the normal at the ear canal point and the normal at the nose-tip point and applying the right-hand rule, a downward thumb indicates the right ear canal point and an upward thumb indicates the left ear canal point.
Each candidate set corresponds to a surface, and the shape response factor is computed by the following steps:
1. compute the Gaussian curvature kG and the mean curvature kH on the surface at a discrete point;
2. compute the principal curvatures k1 and k2 at the discrete point by formulas (12) and (13), where k1 ≥ k2;
3. compute the shape response factor S of the discrete point by formula (14).
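Formulas (12)-(14) are not reproduced in this text. Steps 1-3 can nevertheless be sketched from the standard relations kH = (k1 + k2)/2 and kG = k1·k2, which give k1,2 = kH ± √(kH² − kG); for formula (14) the sketch substitutes a hypothetical stand-in, the shape index, which matches the stated behavior (large on convex caps such as the nose tip, small in concave cups such as the ear canal) but is not confirmed to be the patent's exact expression.

```python
import math

def principal_curvatures(kH, kG):
    """Recover the principal curvatures k1 >= k2 from the mean curvature kH
    and the Gaussian curvature kG via kH = (k1+k2)/2 and kG = k1*k2
    (the standard relations behind formulas (12) and (13))."""
    disc = max(kH * kH - kG, 0.0)   # clamp tiny negatives from rounding
    root = math.sqrt(disc)
    return kH + root, kH - root

def shape_factor(k1, k2):
    """Hypothetical stand-in for formula (14): the shape index
    S = (2/pi) * atan2(k1 + k2, k1 - k2), close to +1 on convex caps and
    close to -1 in concave cups, assuming the convention that convex
    regions have positive curvature."""
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

# a convex sphere-like point (k1 = k2 = 1) vs. a concave cup (k1 = k2 = -1)
nose_like = shape_factor(*principal_curvatures(kH=1.0, kG=1.0))
ear_like = shape_factor(*principal_curvatures(kH=-1.0, kG=1.0))
print(nose_like, ear_like)  # close to 1.0 and -1.0
```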
The three-dimensional human head point cloud feature extraction method of the present invention is described further below with reference to Fig. 1.
The first step of the three-dimensional human head point cloud feature extraction method of the present embodiment is data preprocessing, executed in three steps. First, because the raw data volume can be large, the centroid of all points within each voxel (i.e. three-dimensional cube) is used to approximately replace the other points in the voxel, down-sampling the data and reducing the point cloud density. Second, the raw data may include point cloud data below the neck and other regions, so a value range is set to delete the data below the neck and other interference noise from the scanned point cloud. Finally, since the point cloud contains outliers caused by the precision of the acquisition equipment itself and by noise, points whose number of neighboring points within a given search radius is less than a given threshold are deleted as outliers.
The second step of the three-dimensional human head point cloud feature extraction method of the present embodiment is training the data, executed in two steps. First, normals are computed; this is equivalent to computing the tangent-plane normal at each point in the point cloud, which is further converted into decomposing the covariance matrix C in formula (1); the eigenvector corresponding to the smallest eigenvalue of C serves as the normal of each point Pi in the point cloud, where k is the number of neighbors of Pi and p̄ is the three-dimensional centroid of all neighbors. By formula (2), all normals are made to point consistently toward the viewpoint, where vp is the viewpoint.
After the normals are obtained, the FPFH value is computed by the following procedure:
(1) For each query point Pt, the uvw coordinate frame between this point and each of its neighborhood points Ps is computed by formulas (3), (4), (5);
(2) one group of deviation angles α, φ, θ between the normal nt of Pt and the normal ns of Ps is computed by formulas (6), (7), (8); this result is known as the SPFH (Simplified Point Feature Histogram);
(3) the k-neighborhood of each point is re-determined, and the SPFH values of the neighboring points Pk are used to determine the FPFH of Pt by formula (9).
u = ns    formula (3)
v = u × (Pt − Ps)/d    formula (4)
w = u × v    formula (5)
α = v · nt    formula (6)
φ = u · (Pt − Ps)/d    formula (7)
θ = arctan(w · nt, u · nt)    formula (8)
After the FPFH value of each point in the point cloud data is obtained, the data of a K-d tree structure is built using FLANN.
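The K-d tree that FLANN builds over the FPFH vectors supports the nearest-neighbor queries of the next step. A toy stand-in for that structure, a plain exact k-d tree with median splits and branch pruning, can be sketched as follows; FLANN itself uses approximate, optimized variants of this idea, and the 2-D points here are illustrative.

```python
import math

def build_kdtree(points, depth=0):
    """Build a simple k-d tree over fixed-length vectors by splitting on
    the median along a cycling axis (a toy stand-in for FLANN)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, depth=0, best=None):
    """Exact nearest-neighbor search with standard branch pruning."""
    if node is None:
        return best
    dist = math.dist(node["point"], query)
    if best is None or dist < best[1]:
        best = (node["point"], dist)
    axis = depth % len(query)
    diff = query[axis] - node["point"][axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], query, depth + 1, best)
    if abs(diff) < best[1]:  # the far side could still hold a closer point
        best = nearest(node[far], query, depth + 1, best)
    return best

pts = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build_kdtree(pts)
print(nearest(tree, (9.0, 2.0)))  # the nearest point is (8.0, 1.0), at distance sqrt(2)
```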
The third step of the three-dimensional human head point cloud feature extraction method of the present embodiment is to manually select the three typical feature points of the nose tip, left ear canal, and right ear canal from the given model. The given model may choose a common face shape known from existing data; with the same normal and FPFH calculation parameters as set in the second step, the FPFH values of the three typical feature points are computed.
The fourth step of the three-dimensional human head point cloud feature extraction method of the present embodiment is to query the data. The chi-square value in statistics indicates the degree of deviation between observed and theoretical values and is computed by formulas (10) and (11); this step is therefore completed using chi-square values. The typical feature points extracted from the given model are taken as theoretical values Qi, the points Pi in the K-d tree structure trained from the model to be extracted are taken as observed values, the chi-square values are computed one by one, and the candidate set consisting of several similar points is obtained by setting a suitable threshold.
The fifth step of the three-dimensional human head point cloud feature extraction method of the present embodiment is to determine the final nose-tip point by the shape response factor, and to determine the final left and right ear canal points by the direction of the cross product of the nose-tip normal and the ear canal normal. The shape response factor is computed by the following procedure:
(1) compute the Gaussian curvature kG and the mean curvature kH on the surface at a discrete point;
(2) compute the principal curvatures k1 and k2 at the point by formulas (12) and (13), where k1 ≥ k2;
(3) compute the shape response factor S of the point by formula (14).
The more the surface protrudes (as at the nose tip), the larger the shape response factor; the more the surface is concave (as at a bowl shape or a point inside the ear canal), the smaller the shape response factor. According to this property, the point with the largest shape response factor in the nose-tip candidate set is taken as the nose-tip point, and the points with the smallest shape response factor in the left and right ear canal candidate sets are taken as the ear canal points. Because the normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal, the direction of the cross product of the normal at the ear canal point and the normal at the nose-tip point is computed and the right-hand rule is applied: a downward thumb indicates the right ear canal point and an upward thumb indicates the left ear canal point.
The above content is a detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may also be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the invention determined by the submitted claims.
Claims (6)
1. A three-dimensional human head point cloud feature extraction method, characterized by comprising:
Step 1, data preprocessing: first, the centroid of all points within each voxel of the point cloud data is used to approximately replace the other points in the voxel, thereby down-sampling the data and reducing the density of the point cloud; second, a value range is set, and the noise data below the neck in the point cloud data are deleted; finally, points whose number of neighboring points within a given search radius is less than a given threshold are deleted as outliers from the point cloud data;
Step 2, training stage: the normal of each point in the preprocessed point cloud data is computed; this is equivalent to computing the tangent-plane normal at each point of the preprocessed point cloud, which is converted into decomposing the covariance matrix C in formula (1), the eigenvector corresponding to the smallest eigenvalue of the covariance matrix C serving as the normal of each point in the preprocessed point cloud data; by formula (2), all normals are made to point consistently toward the viewpoint, where Pi is the i-th point of the preprocessed point cloud data, k is the number of neighbors of Pi, p̄ is the three-dimensional centroid of all neighbors of Pi, and vp is the viewpoint,
after the normals are obtained, the FPFH value is computed by the following procedure:
first, for each query point Pt in the preprocessed point cloud data, the uvw coordinate frame between the query point Pt and each of its neighborhood points Ps is computed by formulas (3), (4), (5);
u = ns    formula (3)
v = u × (Pt − Ps)/d    formula (4)
w = u × v    formula (5)
where ns is the normal of the neighborhood point Ps of the query point Pt; second, one group of deviation angles α, φ, θ between the normal nt of the query point Pt and the normal ns of the neighborhood point Ps is computed by formulas (6), (7), (8), this result being known as the Simplified Point Feature Histogram (SPFH);
α = v · nt    formula (6)
φ = u · (Pt − Ps)/d    formula (7)
θ = arctan(w · nt, u · nt)    formula (8)
where d is the straight-line distance between the query point Pt and the neighborhood point Ps;
third, the k-neighborhood of each point is re-determined, and the SPFH values of the neighboring points Pk are used to determine, by formula (9), the FPFH value FPFH(Pt) of the query point Pt;
after the FPFH value of each point in the preprocessed point cloud data is obtained, the data of a K-dimensional tree (K-d tree) structure is built using FLANN, K being the number of points in the preprocessed point cloud data;
Step 3: typical feature points are manually selected from a given model, the given model optionally using a common face shape known from existing data; with the same normal and FPFH calculation parameters as set in Step 2, the FPFH values of the typical feature points are computed;
Step 4, query stage: the chi-square value in statistics indicates the degree of deviation between observed and theoretical values; the typical feature points extracted from the given model are taken as theoretical values, the points in the K-d tree structure trained from the model to be extracted are taken as observed values, the chi-square values are computed one by one, and a candidate set consisting of several similar points is obtained by setting a suitable threshold; the chi-square value χ² is computed by formulas (10) and (11),
where Qi is the theoretical value, Pi is a point in the K-d tree structure of the trained model to be extracted, and Ei is the expected value corresponding to Pi;
Step 5: feature points are determined based on the shape response factor; the principal curvatures of each point in the candidate set are computed, and the shape response factor is then computed.
2. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that the typical feature points include a nose-tip typical feature point, a left ear canal typical feature point, and a right ear canal typical feature point.
3. The three-dimensional human head point cloud feature extraction method according to claim 2, characterized in that: the nose-tip typical feature point finally yields a nose-tip candidate set, and the left and right ear canal typical feature points finally yield left and right ear canal candidate sets; each candidate set corresponds to a surface with the following property: the more the surface protrudes (as at the nose tip), the larger the shape response factor; the more the surface is concave (as at a bowl shape or a point inside the ear canal), the smaller the shape response factor; according to this property, the point with the largest shape response factor in the nose-tip candidate set is taken as the nose-tip point, the points with the smallest shape response factor in the left and right ear canal candidate sets are taken as the two ear canal points, and the left and right ear canal points are distinguished by computing the direction of the cross product of the normal at the ear canal point and the normal at the nose-tip point.
4. The three-dimensional human head point cloud feature extraction method according to claim 3, characterized in that: the normals of the left and right ear canal points lie on the left and right sides of the nose-tip normal respectively; the direction of the cross product of the normal at the ear canal point and the normal at the nose-tip point is computed and the right-hand rule is applied, a downward thumb indicating the right ear canal point and an upward thumb indicating the left ear canal point.
5. The three-dimensional human head point cloud feature extraction method according to claim 1, characterized in that each candidate set corresponds to a surface, and the shape response factor is computed by the following steps:
(1) compute the Gaussian curvature kG and the mean curvature kH at a discrete point on the surface;
(2) compute the principal curvatures k1 and k2 at the discrete point by formulas (12) and (13), where k1 ≥ k2;
(3) compute the shape response factor S of the discrete point by formula (14).
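The steps of claim 5 can be sketched as follows. Recovering k1 and k2 from the Gaussian curvature K = k1·k2 and mean curvature H = (k1+k2)/2 is a standard identity; formulas (12)–(14) themselves are not reproduced in this excerpt, so the shape response factor below uses Koenderink's shape index purely as an illustrative stand-in with the behavior the claims describe (large on convex bumps, small in concave bowls):

```python
import numpy as np

def principal_curvatures(k_gauss, k_mean):
    """Principal curvatures k1 >= k2 from Gaussian curvature K = k1*k2 and
    mean curvature H = (k1+k2)/2 (standard identity; stands in for the
    patent's formulas (12)-(13), which are not given in this excerpt)."""
    disc = max(k_mean ** 2 - k_gauss, 0.0)  # guard against round-off
    root = np.sqrt(disc)
    return k_mean + root, k_mean - root

def shape_response(k1, k2, eps=1e-12):
    """Illustrative shape response factor (Koenderink's shape index, an
    assumption -- the patent's formula (14) is not reproduced here).
    Approaches +1 on convex caps such as the nose tip and -1 in concave
    bowls such as the ear canal, matching the property in claim 3."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

k1, k2 = principal_curvatures(0.75, 1.0)   # K = 0.75, H = 1.0
print(k1, k2)                              # 1.5 0.5
print(shape_response(1.0, 1.0))            # ~ +1 (convex cap)
print(shape_response(-1.0, -1.0))          # ~ -1 (concave bowl)
```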
6. A three-dimensional human head point cloud feature extraction apparatus using the three-dimensional human head point cloud feature extraction method according to any one of claims 1 to 5, characterized in that the apparatus comprises:
a preprocessing module, used for preprocessing the point cloud data;
a training module, used to compute the normal of each point in the preprocessed point cloud data, then compute the FPFH (fast point feature histogram) value, and build a K-dimensional tree structure over the FPFH values with the fast library for approximate nearest neighbors (FLANN), where K is the number of points in the preprocessed point cloud data;
a characteristic-feature-point FPFH computation module, used to manually select characteristic feature points from a given model, where the given model may be a common face shape chosen from known data, and to compute the FPFH values of the characteristic feature points with the same normal and FPFH calculation parameters as in step 2;
a candidate generation module: the chi-square value in statistics measures the degree of deviation between an observed value and a theoretical value; taking the characteristic feature points extracted from the given model as theoretical values and the points in the trained K-dimensional tree structure of the model to be extracted as observed values, the chi-square values are computed one by one, and by setting a suitable threshold a candidate set consisting of several similar points is obtained;
a feature point determination module, used to determine the feature points based on the shape response factor by computing the principal curvatures of each point in the candidate set and then the shape response factor.
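The candidate generation described in claim 6 can be sketched as follows. The chi-square distance formula shown is the common symmetric form and is an assumption; and where the patent scans a FLANN K-d tree, this sketch performs the equivalent brute-force scan over all histograms:

```python
import numpy as np

def chi_square(obs, exp, eps=1e-12):
    """Chi-square distance between an observed and a theoretical FPFH
    histogram (symmetric form, an assumption -- the patent only states
    that the chi-square value measures observed/theoretical deviation)."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    return float(np.sum((obs - exp) ** 2 / (obs + exp + eps)))

def candidate_set(model_fpfh, template_fpfh, threshold):
    """Indices of points whose FPFH histogram deviates from the template
    feature point's histogram by less than `threshold`. The patent scans
    a K-d tree built with FLANN; this sketch does the equivalent
    brute-force scan over every histogram in the model."""
    return [i for i, hist in enumerate(model_fpfh)
            if chi_square(hist, template_fpfh) < threshold]

# Toy 3-bin histograms: points 0 and 2 are close to the template, point 1 is not.
model = [[1.0, 2.0, 3.0], [10.0, 0.0, 0.0], [1.1, 2.0, 3.2]]
print(candidate_set(model, [1.0, 2.0, 3.0], 0.5))  # [0, 2]
```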
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610741174.2A CN106407985B (en) | 2016-08-26 | 2016-08-26 | A kind of three-dimensional human head point cloud feature extracting method and its device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106407985A CN106407985A (en) | 2017-02-15 |
CN106407985B true CN106407985B (en) | 2019-09-10 |
Family
ID=58003046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610741174.2A Active CN106407985B (en) | 2016-08-26 | 2016-08-26 | A kind of three-dimensional human head point cloud feature extracting method and its device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106407985B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106980878B (en) * | 2017-03-29 | 2020-05-19 | 深圳大学 | Method and device for determining geometric style of three-dimensional model |
CN108154525B (en) * | 2017-11-21 | 2022-06-07 | 四川大学 | Bone fragment splicing method based on feature matching |
CN108428219B (en) * | 2018-02-28 | 2021-08-31 | 华南农业大学 | Log diameter measuring and calculating method based on three-dimensional curved surface |
CN109818924A (en) * | 2018-12-21 | 2019-05-28 | 深圳科安达电子科技股份有限公司 | A kind of device of the login railway dedicated system based on recognition of face |
CN110458174B (en) * | 2019-06-28 | 2023-05-09 | 南京航空航天大学 | Precise extraction method for key feature points of unordered point cloud |
CN110633749B (en) * | 2019-09-16 | 2023-05-02 | 无锡信捷电气股份有限公司 | Three-dimensional point cloud identification method based on improved viewpoint feature histogram |
CN111061360B (en) * | 2019-11-12 | 2023-08-22 | 北京字节跳动网络技术有限公司 | Control method and device based on user head motion, medium and electronic equipment |
CN111145166B (en) * | 2019-12-31 | 2023-09-01 | 北京深测科技有限公司 | Security monitoring method and system |
CN112036814A (en) * | 2020-08-18 | 2020-12-04 | 苏州加非猫精密制造技术有限公司 | Intelligent production management system and management method based on 3D retrieval technology |
CN112418030B (en) * | 2020-11-11 | 2022-05-13 | 中国标准化研究院 | Head and face model classification method based on three-dimensional point cloud coordinates |
CN117576408A (en) * | 2023-07-06 | 2024-02-20 | 北京优脑银河科技有限公司 | Optimization method of point cloud feature extraction method and point cloud registration method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011085435A1 (en) * | 2010-01-14 | 2011-07-21 | The University Of Sydney | Classification process for an extracted object or terrain feature |
CN102622776A (en) * | 2011-01-31 | 2012-08-01 | 微软公司 | Three-dimensional environment reconstruction |
CN102779358A (en) * | 2011-05-11 | 2012-11-14 | 达索系统公司 | Method for designing a geometrical three-dimensional modeled object |
CN103430218A (en) * | 2011-03-21 | 2013-12-04 | 英特尔公司 | Method of augmented makeover with 3d face modeling and landmark alignment |
- 2016-08-26: application CN201610741174.2A filed in China; granted as CN106407985B, legal status Active
Also Published As
Publication number | Publication date |
---|---|
CN106407985A (en) | 2017-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106407985B (en) | A kind of three-dimensional human head point cloud feature extracting method and its device | |
CN106682598B (en) | Multi-pose face feature point detection method based on cascade regression | |
CN108549873B (en) | Three-dimensional face recognition method and three-dimensional face recognition system | |
CN107168527B (en) | The first visual angle gesture identification and exchange method based on region convolutional neural networks | |
CN108427871A (en) | 3D faces rapid identity authentication method and device | |
CN110852182B (en) | Depth video human body behavior recognition method based on three-dimensional space time sequence modeling | |
CN106022228B (en) | A kind of three-dimensional face identification method based on grid local binary patterns in length and breadth | |
Li et al. | 3D object recognition and pose estimation from point cloud using stably observed point pair feature | |
CN110008913A (en) | Pedestrian re-identification method based on fusion of attitude estimation and viewpoint mechanism | |
CN103177269A (en) | Equipment and method used for estimating object posture | |
CN102779269A (en) | Human face identification algorithm based on image sensor imaging system | |
CN110796101A (en) | Face recognition method and system of embedded platform | |
CN111985332B (en) | Gait recognition method of improved loss function based on deep learning | |
CN103489011A (en) | Three-dimensional face identification method with topology robustness | |
CN110472495A (en) | A kind of deep learning face identification method based on graphical inference global characteristics | |
CN116091570B (en) | Processing method and device of three-dimensional model, electronic equipment and storage medium | |
CN104794441A (en) | Human face feature extracting method based on active shape model and POEM (patterns of oriented edge magnituedes) texture model in complicated background | |
CN108256440A (en) | A kind of eyebrow image segmentation method and system | |
CN109598261B (en) | Three-dimensional face recognition method based on region segmentation | |
CN116883472B (en) | Face nursing system based on face three-dimensional image registration | |
CN116884045B (en) | Identity recognition method, identity recognition device, computer equipment and storage medium | |
CN109886091A (en) | Three-dimensional face expression recognition methods based on Weight part curl mode | |
CN115994944A (en) | Three-dimensional key point prediction method, training method and related equipment | |
CN111814760B (en) | Face recognition method and system | |
CN109784241B (en) | Stable palm print image feature enrichment area extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||