AU2021105639A4 - Head and face type classification method based on three-dimensional point cloud coordinates - Google Patents

Head and face type classification method based on three-dimensional point cloud coordinates Download PDF

Info

Publication number
AU2021105639A4
AU2021105639A4
Authority
AU
Australia
Prior art keywords
head
data
points
face
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2021105639A
Inventor
Huimin Hu
Jing Liu
Jianwei Niu
Linghua Ran
Xin Zhang
Chaoyi ZHAO
Yulin ZHOU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
China National Institute of Standardization
Original Assignee
University of Science and Technology of China USTC
China National Institute of Standardization
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC and China National Institute of Standardization
Application granted
Publication of AU2021105639A4
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a head and face type classification method based on three-dimensional point cloud coordinates, including step 1: collecting three-dimensional point cloud data of a head and face; step 2: defining key parameters to get the radius of key points; step 3: processing data to obtain a final head and face data model; step 4: completing head type classification by adopting principal component analysis according to the final head and face data model; the step 3 includes: step 31: removing noise in the collected three-dimensional point cloud data of the head and face; step 32: adjusting coordinates of each mesh vertex of the data model for the denoised point cloud data; step 33: filling hole; step 34: smoothing. The present invention takes the shape information and surface information of the head and face into full consideration according to the point cloud data obtained by three-dimensional scanning; carries out the data processing by selecting a data processing template, which has the effect of data repair and noise reduction and improves analysis efficiency; and at the same time improves the accuracy of the head type classification by creatively carrying out principal component analysis of the point cloud coordinates.

Description

[FIG. 1: flowchart of the method (steps 1 to 4). FIG. 2: key points defined on a cross section.]
HEAD AND FACE TYPE CLASSIFICATION METHOD BASED ON THREE-DIMENSIONAL POINT CLOUD COORDINATES

FIELD
[0001] The invention relates to the technical field of head type classification, and particularly to a head and face type classification method based on three-dimensional point cloud coordinates.
BACKGROUND
[0002] Measurement and observation of human head and face can reflect
morphological characteristics of head and face, which are important indicators for
studies of human population genetics. At the same time, accuracy of data of the head
and face and accurate type classification based on the data of the head and face are also
very important to the design of head and face products. Traditional head and face features are mostly described by one-dimensional or two-dimensional data, which can only represent the length, width and circumference of the head and face; however, the shape and curves of the head and face are very complex, and neither one-dimensional nor two-dimensional data can fully reflect the shape and surface information between measuring points of the human head and face surface. It is therefore very inaccurate to classify the type of the human head according to one-dimensional or two-dimensional data.
[0003] At present, head type classification in China is mainly based on GB/T 2428-1998 "Head and Face Dimensions of Chinese Adults" and GB/T 23461-2009 "Three-dimensional Dimensions of Adult Male Head Type". GB/T 2428-1998 is based on small-sample measurement data, carries out regression of data measured from 1987 to 1988, provides the relationship between one-dimensional dimension data and a small amount of two-dimensional dimension data, and mainly focuses on two-dimensional graphic design applications. GB/T 23461-2009 is based on the two-dimensional distribution of the head width-length and head height-length indexes of adult male heads, only gives three-dimensional dimensions of Chinese adult male heads, and has certain limitations in use.
[0004] Chinese patent application No. CN102125323A discloses a method for formulating head models of juveniles between 4 and 12 years old based on the coverage rate of characteristic parameters of three-dimensional images, which includes: measuring: scanning the images of a plurality of juveniles between 4 and 12 years old by using a three-dimensional human body scanner and extracting the characteristic parameters of the images by using three-dimensional scanning software; preprocessing: preprocessing the extracted characteristic parameters, namely, establishing and editing a file including the head size data of the juveniles between 4 and 12 years old, detecting and processing abnormal human-body size data and rejecting unqualified samples; and formulating sizes: formulating a statistic table according to the characteristic parameters, computing the coverage rate among the characteristic parameters according to the statistic table, and setting models for the characteristic parameters with a coverage rate greater than or equal to 0.5% while setting no model for the characteristic parameters with a coverage rate smaller than 0.5%. Although the characteristic parameters of the heads of juveniles are extracted by three-dimensional scanning, the extracted characteristic parameters are still limited to head length and head circumference, so the shape information and surface information of the head and face are not taken into full consideration when classifying head type.
SUMMARY
[0005] In order to solve the above technical problems, the present invention
provides a head and face type classification method based on three-dimensional point cloud coordinates, which comprehensively considers the head and face shape information and surface information to realize the type classification of human head.
[0006] A head and face type classification method based on three-dimensional point cloud coordinates includes:
[0007] step 1: collecting three-dimensional point cloud data of a head and face;
[0008] step 2: defining key parameters to get radius of key points;
[0009] step 3: processing data to obtain a final head and face data model;
[0010] step 4: completing head type classification by adopting principal component analysis according to the final head and face data model.
[0011] Preferably, in the step 1, the head and face are scanned by three-dimensional scanning equipment to obtain the three-dimensional point cloud data of the head and face.
[0012] In any of the above technical solutions, preferably, the step 2 includes:
[0013] step 21: extracting equally spaced cross sections from human head and face;
[0014] step 22: reading coordinates of each key point of each cross section to get
the radius of each key point.
[0015] In any of the above technical solutions, preferably, in step 22, for each cross section of the head and face, the number of the key points is N+1, the N+1 key points surround a center point of the cross section and are located on an edge curve of the cross section, the first key point overlaps with the (N+1)th key point, and the angle between a line from the ith key point to the center point of the cross section and a line from the (i+1)th key point to the center point of the cross section is 360°/N, where i ≥ 1 and i ≤ N.
[0016] In any of the above technical solutions, preferably, according to the key parameters defined in the step 2, a way to layer the point cloud data is defined, and then
the step 3 is performed by matching an appropriate data processing template.
[0017] In any of the above technical solutions, preferably, the step 3 includes:
[0018] step 31: removing noise in the collected three-dimensional point cloud data
of the head and face;
[0019] step 32: adjusting coordinates of each mesh vertex of the data model for the
denoised point cloud data;
[0020] step 33: filling hole;
[0021] step 34: smoothing.
[0022] In any of the above technical solutions, preferably, the step 31 includes determining first class noise points and second class noise points based on a distance calculation method.
[0023] In any of the above technical solutions, preferably, the first class noise points are points whose distribution is inconsistent with the real data points of the human head and which are far away from the real data points of the human head. A method of determining the first class noise points is: in the point cloud data of the human head, selecting any point and finding the points in a neighborhood of that point whose distance from it exceeds a threshold, setting the points exceeding the threshold as first class noise points, and deleting the first class noise points in data pre-processing.
[0024] In any of the above technical solutions, preferably, the second class noise points are points whose data partially overlap. The registered point cloud data of the human head are divided into two parts by a Y-Z plane, and the two points A0 and B0 with the smallest distance on an edge of the point cloud are found; ZA and ZB are used to represent the Z coordinates of the two points. If ZA = ZB, the two points are set as new edge points; if ZA < ZB, A1 and B1 are used to represent the next edge data points of A0 and B0 respectively, a distance d1 between A1 and B0 and a distance d2 between B1 and A0 are calculated, and if d1 > d2, A0 and B1 are set as new edge points, otherwise A1 and B0 are set as new edge points. After the new edge points are determined, the data points between the new edge points and the original edge points are set as second class noise points, and the second class noise points are deleted in data pre-processing.
[0025] In any of the above technical solutions, preferably, the step 32 includes:
[0026] step 321: calculating the coordinates of the geometric center point of the data model according to the formula Vc = (ΣVi)/n, wherein n is the total number of the mesh vertexes and Vi is the coordinates of a mesh vertex in three-dimensional space;
[0027] step 322: calculating a translational transformation matrix Mm from the geometric center point to the coordinate origin, and obtaining the maximum widths in the x, y and z directions respectively, then calculating a rotational transformation matrix Mr, wherein the rotational transformation matrix Mr makes the maximum width lie in the y direction;
[0028] step 323: obtaining an affine transformation matrix M according to the translational transformation matrix Mm and the rotational transformation matrix Mr, M = Mm × Mr;
[0029] step 324: adjusting the coordinates of each mesh vertex in the data model according to the formula ViN = Vi × M (i = 0, 1, …, n) to obtain a data model with adjusted mesh vertex coordinates, wherein ViN represents the adjusted coordinates of a mesh vertex in three-dimensional space.
[0030] In any of the above technical solutions, preferably, a mesh model is a triangular mesh model.
[0031] In any of the above technical solutions, preferably, in the step 33, filling hole means resampling the data model after adjusting the coordinates of the mesh vertexes and reconstructing the data model, for the situation of missing data on the top of the head during the process of collecting the three-dimensional point cloud data of the head and face.
[0032] In any of the above technical solutions, preferably, the step 33 includes:
[0033] step 331: marking a long axis and a short axis of a data missing area on the top of the head;
[0034] step 332: defining a set of planes with normal direction parallel to the long axis, wherein a set of parallel plane slice curves is formed by the set of planes intersecting with a surface on the top of the head;
[0035] step 333: each plane slice curve intersecting with a scanning curve formed by layered three-dimensional point cloud data of the head and face to obtain two
intersections, using the intersections as a part of data points of re-fitting curve to
perform data interpolation to obtain a complete data model of the head and face.
[0036] In any of the above technical solutions, preferably, in the step 34, a position of each vertex on the layered scanning curve formed by data interpolation is adjusted
to achieve smoothing, thereby obtaining a final data model of the head and face.
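Step 34 only states that the position of each vertex on the layered scanning curves is adjusted; one common way to realize such an adjustment (an assumption here, not specified by the text) is simple Laplacian smoothing, sketched in Python:

```python
import numpy as np

def smooth_curve(points, iterations=5, alpha=0.5):
    """Adjust each interior vertex of a layered scan curve toward the
    midpoint of its two neighbours (simple Laplacian smoothing).

    The text does not name a specific smoothing scheme; this is one
    common choice, shown here as an assumption.
    """
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        mid = 0.5 * (pts[:-2] + pts[2:])        # neighbour midpoints
        pts[1:-1] += alpha * (mid - pts[1:-1])  # move interior vertices only
    return pts
```

Endpoints are left fixed so the curve keeps its original extent while the interior jitter is damped.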
[0037] In any of the above technical solutions, preferably, the step 4 includes:
[0038] step 41: assuming that the set of sample points of the point cloud data in the final data model of the head and face is P = (p1, p2, …, pn)^T, wherein pi = (xi, yi, zi)^T and n is the number of the sample points;
[0039] step 42: selecting the three-dimensional coordinates xi, yi, zi of a sample point pi as three indexes denoted X1, X2, X3;
[0040] step 43: searching, for a target point pt = (xt, yt, zt)^T, a set of neighborhood points Pt = (p1, p2, …, pk)^T thereof, where k is the number of points in the neighborhood, and calculating the distance di from each neighborhood point to the target point and the mean distance d̄;
[0041] step 44: seeking a linear combination of the three indexes X1, X2, X3:

Y1 = a11·X1 + a12·X2 + a13·X3
Y2 = a21·X1 + a22·X2 + a23·X3
Y3 = a31·X1 + a32·X2 + a33·X3

[0042] satisfying the condition ai1² + ai2² + ai3² = 1 (i = 1, 2, 3);

[0043] Y1, Y2 and Y3 are not correlated with each other and Var(Y1) ≥ Var(Y2) ≥ Var(Y3), and then getting a matrix

C3×3 = (a11 a12 a13
        a21 a22 a23
        a31 a32 a33).
[0044] step 45: calculating the three eigenvalues of the matrix C3×3, which are expressed from small to large as λ1, λ2 and λ3, and then obtaining a local feature descriptor LFD of the target point from these eigenvalues;
[0045] step 46: establishing a principal component analysis panel to complete the
type classification according to results of principal component analysis.
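Steps 41 to 45 amount to a principal component analysis of the neighborhood of each target point. A minimal Python sketch follows, under two stated assumptions: the coefficient rows (ai1, ai2, ai3) are taken as unit eigenvectors of the neighborhood covariance (which automatically satisfies ai1² + ai2² + ai3² = 1 and makes Y1, Y2, Y3 uncorrelated), and the normalized smallest eigenvalue is used as one plausible local feature descriptor, since the LFD formula is not fully specified in the text:

```python
import numpy as np

def local_pca_descriptor(points, target, k=10):
    """PCA over the k-nearest neighbourhood of a target point.

    Returns the 3x3 coefficient matrix C (rows are unit eigenvectors,
    so each row satisfies a_i1^2 + a_i2^2 + a_i3^2 = 1), the eigenvalues
    sorted ascending (lambda_1 <= lambda_2 <= lambda_3), and a local
    feature descriptor.
    """
    d = np.linalg.norm(points - target, axis=1)   # distance d_i to the target
    nbrs = points[np.argsort(d)[:k]]              # k nearest neighbours P_t
    cov = np.cov(nbrs.T)                          # 3x3 covariance of (X1, X2, X3)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    C = eigvecs.T                                 # rows: coefficients a_ij of Y_i
    # Normalised smallest eigenvalue ("surface variation") as one
    # possible LFD; the exact formula is an assumption here.
    lfd = eigvals[0] / eigvals.sum()
    return C, eigvals, lfd
```

For a locally planar patch the smallest eigenvalue, and hence this descriptor, is near zero; sharper surface features push it up.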
[0046] The head and face type classification method based on three-dimensional point cloud coordinates of the present invention takes the shape information and surface information of the head and face into full consideration according to the point cloud data obtained by three-dimensional scanning; it carries out the data processing by selecting a data processing template, which has the effect of data repair and noise reduction and improves analysis efficiency; and at the same time it improves the accuracy of the head type classification by creatively carrying out principal component analysis of the point cloud coordinates.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] FIG. 1 is a schematic flow diagram of a preferred embodiment of a head and
face type classification method based on three-dimensional point cloud coordinates
according to the present invention.
[0048] FIG. 2 is a schematic diagram of key points defined for a cross section in
the embodiment shown in FIG. 1 of the head and face type classification method based
on three-dimensional point cloud coordinates according to the present invention.
[0049] FIG. 3 is a schematic diagram of filling hole for missing data on the top of
the head in the embodiment shown in FIG. 1 of the head and face type classification
method based on three-dimensional point cloud coordinates according to the present
invention.
[0050] FIG. 4 is a schematic diagram of a complete data model of head and face
obtained by filling hole in the embodiment shown in FIG. 1 of the head and face type
classification method based on three-dimensional point cloud coordinates according to
the present invention.
[0051] FIG. 5 is a schematic diagram of a method of smoothing in the embodiment shown in FIG. 1 of the head and face type classification method based on three-dimensional point cloud coordinates according to the present invention.
[0052] FIG. 6 is a schematic diagram of head type classification in the embodiment shown in FIG. 1 of the head and face type classification method based on three-dimensional point cloud coordinates according to the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0053] In order to make the present invention be better understood, it is described in detail in combination with embodiments.
[0054] Embodiment 1
[0055] As shown in FIG. 1, a head and face type classification method based on three-dimensional point cloud coordinates includes:
[0056] step 1: collecting three-dimensional point cloud data of a head and face;
[0057] step 2: defining key parameters to get radius of key points;
[0058] step 3: processing data to obtain a final head and face data model;
[0059] step 4: completing head type classification by adopting principal component analysis according to the final head and face data model.
[0060] In the step 1, the head and face are scanned by three-dimensional scanning equipment to obtain the three-dimensional point cloud data of the head and face.
[0061] In this embodiment, the number of point cloud data points obtained for a whole human head is between 38,420 and 56,200, wherein the number of point cloud data points for the upper part of the human head (above the two ears) is between 15,223 and 24,221, and the number of point cloud data points for a human face (defined as a range of 7/2 of the forward part of the human head) is between 2,809 and 3,977.
[0062] The step 2 includes:
[0063] step 21: extracting equally spaced cross sections from human head and face;
[0064] step 22: reading the coordinates of each key point of each cross section to get the radius of each key point; in step 22, for each cross section of the head and face, the number of the key points is N+1, the N+1 key points surround a center point of the cross section and are located on an edge curve of the cross section, the first key point overlaps with the (N+1)th key point, and the angle between a line from the ith key point to the center point of the cross section and a line from the (i+1)th key point to the center point of the cross section is 360°/N, where i ≥ 1 and i ≤ N.
[0065] The human head is an unevenly curved surface object; if the cross sections of the head are extracted at the same spacing, the cross sections at different parts differ in perimeter, center point, curvature of arc, etc., but adjacent cross sections have strong similarity and coherence, especially in the shape of the arc of the cross section. By defining the key points, the features of the cross sections of the human head can be characterized through the key points, and the head can be divided into multiple perimeter layers according to the key points.
[0066] In this embodiment, the cross sections of the head are extracted at the same spacing of 2 mm. As shown in FIG. 2, the center point of one cross section is defined as the origin of a plane coordinate system, and 61 key points are defined to characterize the features of the cross section of the human head. The 61 key points are successively connected through the four quadrants, and the beginning and the end overlap, that is, key points 1 and 61 are the same point, increasing clockwise. Key points 1 (61), 16, 31 and 46 intersect the X axis and the Y axis of the plane coordinate system respectively: key point 1 intersects the X axis in the positive direction of the X axis; in the clockwise direction, key point 7 is at a 36° angle to the X axis; by analogy, key point 16 intersects the Y axis in the negative direction of the Y axis, key point 31 intersects the X axis in the negative direction of the X axis, and key point 31 and key point 1 are symmetrical about the Y axis; key point 46 intersects the Y axis in the positive direction, and key point 61 is at a 360° angle to the positive direction of the X axis and overlaps with key point 1. Key points 1 and 31 are the lateral center points of the cross section of the human head, and key points 16 and 46 are the front and rear center points of the cross section of the human head. The 61 key points are connected to the origin, and the circumference of the cross section is divided into 60 arc segments; the central angle corresponding to each arc segment is about 6°, and the straight edges of each arc segment are the distances from the two adjacent key points to the origin.
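The key point scheme above can be sketched in Python; `key_point_radii` is a hypothetical helper (not from the patent) that reads the radius of each of the N+1 key points from the edge curve of one cross section by clockwise polar interpolation:

```python
import numpy as np

def key_point_radii(edge_xy, n_segments=60):
    """Radii of the N+1 key points of one cross section.

    edge_xy: (m, 2) points on the edge curve, centred on the cross
    section's centre point. Key point 1 lies on the positive X axis and
    the points advance clockwise in steps of 360/N degrees, so key
    points 1 and N+1 coincide.
    """
    # clockwise angle from +X of each edge point, in [0, 2*pi)
    theta = np.mod(-np.arctan2(edge_xy[:, 1], edge_xy[:, 0]), 2 * np.pi)
    r = np.hypot(edge_xy[:, 0], edge_xy[:, 1])
    order = np.argsort(theta)
    theta, r = theta[order], r[order]
    # clockwise angles of the key points: 0, 6, 12, ..., 360 degrees
    key_angles = np.arange(n_segments + 1) * 2 * np.pi / n_segments
    # wrap the curve around once so interpolation is periodic
    theta_w = np.concatenate([theta, theta[:1] + 2 * np.pi])
    r_w = np.concatenate([r, r[:1]])
    return np.interp(key_angles, theta_w, r_w)
```

With the default of 60 segments this returns 61 radii, and the first and last values coincide, matching key points 1 and 61.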
[0067] According to the key parameters defined in the step 2, a way to layer the point cloud data is defined, and then the step 3 is performed by matching an appropriate data processing template.
[0068] The initial point cloud data of the human head obtained by non-contact measurement are scattered, contain holes and noise points, and are not smooth, so it is necessary to process the point cloud data.
[0069] The step 3 includes:
[0070] step 31: removing noise in the collected three-dimensional point cloud data of the head and face;
[0071] step 32: adjusting coordinates of each mesh vertex of the data model for the
denoised point cloud data;
[0072] step 33: filling hole;
[0073] step 34: smoothing.
[0074] Due to the influence of the scanning environment and the hardware equipment of the system, noise is inevitably generated during measurement. Points which are far away from the real data points of the human head are called first class noise points, and points which partially overlap with other points are called second class noise points. For the first class noise points and the second class noise points, a method of point selection and deletion is used, that is, it is determined whether a point of the human point cloud data is noise, and if it is, it is deleted directly.
[0075] In the step 31, the first class noise points and the second class noise points are determined based on a distance calculation method. A method of determining the first class noise points is: in a neighborhood of any point, the point closest to it is sought and the distance between the two points is calculated; if the distance exceeds a set threshold, this point is a first class noise point, and its point cloud data are deleted. A method of determining and deleting the second class noise points is: the second class noise points are those which partially overlap with other points; for the point cloud data of the human head after registration, a Y-Z plane is used to divide the data into two parts, the two points A0 and B0 with the smallest distance on an edge of the point cloud are found, and ZA and ZB are used to represent the Z coordinates of the two points. If ZA = ZB, the two points are set as new edge points; if ZA < ZB, A1 and B1 are used to represent the next edge data points of A0 and B0 respectively, a distance d1 between A1 and B0 and a distance d2 between B1 and A0 are calculated, and if d1 > d2, A0 and B1 are set as new edge points, otherwise A1 and B0 are set as new edge points. After the new edge points are determined, the data points between the new edge points and the original edge points are set as second class noise points, and the second class noise points are deleted in data pre-processing. In this embodiment, the threshold may be set to the theoretical distance value during laser scanning, that is, the layer distance during laser scanning of the human head during data collection. Because the coordinate systems of the point cloud data of the head obtained by scanning are not completely consistent, all data must be converted to the same coordinate system, which requires data registration of the point cloud of the human head. During the data registration, for each sample, the z direction is from an origin to the top of the head, the y direction is from the origin to the nose tip, and the x direction is consistent with the direction of the cross product of the z direction and the y direction; a vector connecting the centroid of the sample and the nose tip is defined, and taking this vector as a benchmark, all the samples of the human head are rotated around the z direction so that the orientations of their nose tips are consistent. The next edge data points of the human head are the points with the second smallest distance on the front edge and the back edge of the corresponding scan lines of the head.
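The first class noise removal described above (delete any point whose nearest neighbour is farther than the layer-distance threshold) can be sketched as follows; the brute-force distance matrix stands in for the k-d tree a production system would use:

```python
import numpy as np

def remove_first_class_noise(points, threshold):
    """Delete points whose nearest neighbour lies farther away than the
    threshold (e.g. the layer distance of the laser scan).

    Brute-force O(n^2) pairwise distances, for illustration only.
    """
    pts = np.asarray(points, dtype=float)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)               # ignore each point's self-distance
    nearest = np.sqrt(d2.min(axis=1))          # distance to the closest other point
    return pts[nearest <= threshold]
```

A point embedded in a scan line keeps a neighbour within one layer distance; an isolated outlier does not and is dropped.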
[0076] The three-dimensional point cloud data obtained by scanning are arranged in an arbitrary position and direction when initially read in; at this time, the data model cannot be displayed correctly in the system, and this brings much inconvenience to subsequent processing. Therefore, after denoising the data model obtained by scanning, the coordinates of each mesh vertex of the data model are adjusted to match the template.
[0077] The step 32 includes:
[0078] step 321: calculating the coordinates of the geometric center point of the data model according to the formula Vc = (ΣVi)/n, wherein n is the total number of the mesh vertexes and Vi is the coordinates of a mesh vertex in three-dimensional space;
[0079] step 322: calculating a translational transformation matrix Mm from the geometric center point to the coordinate origin, obtaining the maximum widths in the x, y and z directions respectively, and then calculating a rotational transformation matrix Mr, wherein the rotational transformation matrix Mr makes the maximum width lie in the y direction;
[0080] step 323: obtaining an affine transformation matrix M according to the translational transformation matrix Mm and the rotational transformation matrix Mr, M = Mm × Mr;
[0081] step 324: adjusting the coordinates of each mesh vertex in the data model according to the formula ViN = Vi × M (i = 0, 1, …, n) to obtain a data model with adjusted mesh vertex coordinates, wherein ViN represents the adjusted coordinates of a mesh vertex in three-dimensional space.
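Steps 321 to 324 can be sketched in Python with 4×4 homogeneous matrices and row vectors, so that ViN = Vi × M with M = Mm × Mr. The concrete rotation used when the widest extent lies along x (a 90° turn about z) is a simplified stand-in, since the text leaves the exact rotation unspecified:

```python
import numpy as np

def center_and_align(vertices):
    """Translate the mesh so its geometric centre sits at the origin,
    then rotate so the maximum width lies along the y axis.

    Row-vector convention: V_new = V @ M, with M = Mm @ Mr.
    """
    V = np.asarray(vertices, dtype=float)
    center = V.mean(axis=0)                    # Vc = (sum of Vi) / n
    Mm = np.eye(4)
    Mm[3, :3] = -center                        # translation of centre to origin
    widths = V.max(axis=0) - V.min(axis=0)     # maximum widths in x, y, z
    Mr = np.eye(4)
    if widths[0] > widths[1]:                  # widest along x: rotate 90 deg about z
        Mr[:2, :2] = [[0, 1], [-1, 0]]
    M = Mm @ Mr
    Vh = np.hstack([V, np.ones((len(V), 1))])  # homogeneous coordinates
    return (Vh @ M)[:, :3], M
```

After the transform the vertex mean is at the origin and the largest extent lies along y, matching the stated goal of step 322.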
[0082] In this embodiment, the positive direction of the X axis is defined as the left side of the data model and the negative direction as the right side of the data model. The mesh model is a triangular mesh model.
[0083] The triangular mesh model is defined as a two-tuple M = (K, V), wherein V = {v0, v1, …, vm}, vi ∈ R³, represents the coordinates of the vertexes of the mesh model in three-dimensional space; K is a simplicial complex representing the topological structure of the mesh, and each element of K is a simplex {v0}, {v0, v1} or {v0, v1, v2}, which are respectively the vertexes, connecting edges and triangular faces of the mesh in R³.
[0084] The triangular mesh model is composed of geometric information and topological information; the basic topological entities expressing its form and structure include:
[0085] 1) Vertex. The position of a vertex is represented by a three-dimensional (geometric) point. Vertexes are the most basic element of the triangular mesh model; other elements are directly or indirectly formed by vertexes.
[0086] 2) Edge. An edge is the intersection of two adjacent faces; the direction of an edge is from its starting vertex to its ending vertex.
[0087] 3) Loop. A loop is a closed boundary composed of ordered and directed edges. In a loop, the starting vertex of each edge coincides with the ending vertex of the previous edge, and the ending vertex of each edge coincides with the starting vertex of the next edge, forming a closed loop in which all edges have the same direction. Loops are classified into inner loops and outer loops according to direction: the edges of an inner loop are connected clockwise, and the edges of an outer loop are connected counterclockwise.
[0088] 4) Face. A face is a triangular area enclosed by a closed loop. The face is also directional; the direction of a face is determined by its boundary loop. A face surrounded by an outer loop has an outward normal vector and is called a forward face; a face surrounded by an inner loop has an inward normal vector and is called a reverse face. In the triangular mesh model, each face on the surface of the model is a forward face, and each face inside the model is a reverse face.
[0089] The following relationships are satisfied by the topological entities of the triangular mesh model:
[0090] 1) a triangular face only intersects other triangular faces on its edges;
[0091] 2) each inside edge has exactly two triangular faces adjacent to it;
[0092] 3) an edge on the boundary has only one triangular face adjacent to it.
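The edge adjacency relationships 2) and 3) can be checked directly from the face list (the K component of M = (K, V)); a small Python sketch:

```python
from collections import Counter

def edge_face_counts(faces):
    """Count adjacent triangular faces per undirected edge of a mesh
    given as vertex-index triples. In a closed mesh every edge borders
    exactly two faces; a boundary edge borders only one.
    """
    counts = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1      # undirected edge key
    return counts

# A tetrahedron is the smallest closed triangular mesh: 4 faces, 6 edges.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
```

Edges counted once mark the mesh boundary, which is exactly where hole filling (step 33) has to operate.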
[0093] The three-dimensional scanning results in layered data points which cannot completely cover the whole human head; therefore, it is necessary to resample the data of the data model after adjusting the coordinates of the mesh vertexes and to reconstruct the data model of the human head, that is, to perform the step 33 filling hole.
[0094] As shown in FIG. 3, FIG. 3 (a) is a schematic diagram of the top of the head of the data model after adjusting the mesh vertex coordinates; viewed from the top of the head, it can be seen that the missing data encloses a blank area that is approximately elliptical.
[0095] In the step 33, the following is performed first:
[0096] step 331: marking a long axis and a short axis of the data missing area on the top of the head, as shown in FIG. 3 (a); then:
[0097] step 332: defining a set of planes whose normal direction is parallel to the long axis, wherein a set of parallel plane slice curves is formed by the intersection of these planes with the surface of the top of the head, as shown in FIG. 3 (b); and finally:
[0098] step 333: intersecting each plane slice curve with the scanning curve formed by the layered three-dimensional point cloud data of the head and face to obtain two intersections, and using the intersections as part of the data points of the re-fitting curve to perform data interpolation, thereby obtaining a complete data model of the head and face.
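Step 331 amounts to a principal-axis fit of the hole's boundary viewed from above. A minimal Python sketch, assuming the boundary points of the missing area have already been extracted and projected onto the x-y plane (the function name and the 2-sigma axis-length estimate are illustrative, not from the patent):

```python
import numpy as np

def hole_axes(boundary_xy):
    """Estimate the long and short axes of the roughly elliptical
    data-missing area on the top of the head, from its boundary
    points projected onto the x-y plane (top view)."""
    P = np.asarray(boundary_xy, dtype=float)
    center = P.mean(axis=0)
    cov = np.cov(P - center, rowvar=False)      # 2x2 covariance of the boundary
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    short_dir, long_dir = eigvecs[:, 0], eigvecs[:, 1]
    lengths = 2.0 * np.sqrt(eigvals)            # rough semi-axis scale estimate
    return center, long_dir, short_dir, lengths

# An ellipse-like boundary with its long axis along x.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
boundary = np.stack([3.0 * np.cos(theta), np.sin(theta)], axis=1)
center, long_dir, short_dir, lengths = hole_axes(boundary)
```

The slicing planes of step 332 are then the planes with normal `long_dir` passing through sample positions along the long axis.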
[0099] In this embodiment, the step 333 is performed by means of a cubic B-spline curve and the interpolation curve masking method.
[00100] Each slice curve intersects the scanning curve formed by the original layered data points at two intersection points; the intersections are used as part of the data points of the re-fitting curve, and data interpolation is carried out. m+1 data points, expressed as Q_0, Q_1, ..., Q_m, are interpolated between the two intersection points of one slice curve. The cubic B-spline curve interpolating the m+1 data points Q_i (i = 0, 1, ..., m) can then be written as:

P(u) = Σ_{j=0}^{n+1} D_j B_{j,3}(u) = Σ_{j=i-3}^{i} D_j B_{j,3}(u),  u ∈ [u_i, u_{i+1}] ⊂ [u_3, u_{n+1}],

wherein D_j is a control vertex of the curve, and B_{j,3}(u) is the cubic B-spline basis function defined on the node vector U = [u_0, u_1, ..., u_{n+4}]. According to the requirements of endpoint interpolation and the curve definition domain, an endpoint condition with quadruple nodes at both ends of the definition domain is adopted, that is, u_0 = u_1 = u_2 = u_3 = 0 and u_{n+1} = u_{n+2} = u_{n+3} = u_{n+4} = 1. By parameterizing the data points Q_i (i = 0, 1, ..., m) with the normalized accumulated chord length, a sequence of parameter values ū_i (i = 0, 1, ..., m) is obtained:

ū_0 = 0,  ū_i = ū_{i-1} + |Q_i − Q_{i-1}| / Σ_{j=1}^{m} |Q_j − Q_{j-1}|,  ū_m = 1.

Correspondingly, the node values in the definition domain satisfy u_{3+i} = ū_i (i = 0, 1, ..., m). The beginning point gives P(0) = Q_0 = D_0, and the end point gives P(1) = Q_m = D_{n+1}. Substituting the node values in the curve definition domain u ∈ [u_3, u_{n+1}] successively into the equation, the interpolation conditions must be met:

P(u_i) = Σ_{j=i-3}^{i} D_j B_{j,3}(u_i) = Q_{i-3},  i = 3, 4, ..., n+1,  with n = m+2.
[00101] The above formula contains m+1 = n−1 equations, which are not enough to determine the n+1 unknown control vertexes it contains, so two additional equations given by boundary conditions must be added. In order to ensure that the interpolated curve is C1-continuous with the original curve, the beginning and end tangents are taken to be the same as those of the original curve; in this way, the endpoint conditions of the interpolated curve are obtained:

P'(0) = 3(D_1 − D_0) = 3D_1 − 3Q_0
P'(1) = 3(D_{n+1} − D_n) = 3Q_m − 3D_n
[00102] For quasi-uniform B-spline curves, the coefficient matrix of the interpolation system is constant, because quasi-uniform B-spline curves have a fixed basis-function coefficient matrix. The interpolation conditions of the quasi-uniform cubic B-spline curve can therefore be rewritten as the tridiagonal matrix system (the first and last rows carry the endpoint conditions):

| 3     0                     | | D_1     |   | P'(0) + 3Q_0 |
| 7/12  1/6                   | | D_2     |   | Q_1          |
| 1/6   2/3   1/6             | | ...     | = | ...          |
|       ...   ...   ...       | | ...     |   | Q_{m-2}      |
|             1/6   7/12      | | D_n     |   | Q_{m-1}      |
|                   0     3   | | D_{n+1} |   | 3Q_m − P'(1) |
[00103] All unknown control vertexes can be obtained by solving the above linear
equations. Thus, the quasi-uniform cubic B-spline curves expressed by the equations
are completely determined.
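The solve can be sketched in a few lines of numpy. For brevity this sketch uses uniform knots, so the interior interpolation rows reduce to the fixed 1/6, 2/3, 1/6 pattern (scaled by 6), combined with the patent's endpoint tangent conditions; the function name, sample data and end tangents are illustrative:

```python
import numpy as np

def bspline_interp_controls(Q, t0, t1):
    """Control vertices D_0..D_{m+2} of a uniform cubic B-spline that
    interpolates data points Q_0..Q_m with end tangents t0, t1
    (endpoint conditions P'(0) = 3(D_1 - D_0), P'(1) = 3(D_{n+1} - D_n)).
    """
    Q = np.asarray(Q, dtype=float)
    m = len(Q) - 1
    n = m + 3                          # number of unknown control vertices
    A = np.zeros((n, n))
    b = np.zeros((n, Q.shape[1]))
    A[0, 0], A[0, 1] = -3.0, 3.0       # P'(0) = 3(D_1 - D_0) = t0
    b[0] = t0
    for i in range(1, m + 2):          # interpolation rows: the 1/6, 2/3, 1/6
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, 4.0, 1.0   # pattern, times 6
        b[i] = 6.0 * Q[i - 1]
    A[-1, -2], A[-1, -1] = -3.0, 3.0   # P'(1) = 3(D_{n+1} - D_n) = t1
    b[-1] = t1
    return np.linalg.solve(A, b)       # tridiagonal system, unique solution

# Re-fit five data points between two slice intersections.
Q = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.5], [3.0, 1.0], [4.0, 0.0]])
D = bspline_interp_controls(Q, t0=[1.0, 1.5], t1=[1.0, -1.5])
```

Evaluating the spline at the i-th knot reproduces Q_i, since (D_i + 4·D_{i+1} + D_{i+2})/6 = Q_i by construction.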
[00104] The surface data generated by an interpolation curve masking method is
used to fill the missing data on the top of the human head, and a complete data model
of the head and face is obtained, as shown in FIG. 4.
[00105] In the step 34, the position of each vertex on the layered scanning curve formed after data interpolation is adjusted to achieve smoothing, and the final data model of the head and face is obtained.
[00106] In this embodiment, a Laplacian smoothing method is used to smooth the curves; the principle is shown in FIG. 5. According to the geometric information around each vertex, the Laplacian smoothing method adjusts the vertex's position to achieve smoothing, and the final data model of the head and face is then obtained. The update rule of the Laplacian smoothing method is:

v_i^new = v_i^old + λ · Σ_{j=−n/2}^{n/2} (v_{i+j}^old − v_i^old),  0 < λ < 1
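A direct implementation of this update for one closed slice curve can look as follows; here the neighbor sum is replaced by the neighbor mean so that λ stays in (0, 1) regardless of the window size (a common normalization; the window size and parameter values are illustrative):

```python
import numpy as np

def laplacian_smooth(V, lam=0.5, half_window=1, iterations=10):
    """Laplacian smoothing of one closed layered scan curve.

    V: (N, 3) ordered vertices of a closed slice curve.
    Each vertex moves toward the mean of its neighbors:
    v_new = v_old + lam * (mean(neighbors) - v_old), 0 < lam < 1.
    """
    V = np.asarray(V, dtype=float)
    for _ in range(iterations):
        mean = np.zeros_like(V)
        for j in range(1, half_window + 1):
            # np.roll wraps around, matching a closed curve's topology.
            mean += np.roll(V, j, axis=0) + np.roll(V, -j, axis=0)
        mean /= 2.0 * half_window
        V = V + lam * (mean - V)
    return V

# A noisy circular slice: radial jitter shrinks after smoothing.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
noisy = circle + np.random.default_rng(0).normal(scale=0.02, size=circle.shape)
smoothed = laplacian_smooth(noisy)
```

Larger λ, window or iteration counts smooth more aggressively but also shrink fine head-surface detail, so the values are a trade-off.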
[00107] The amount of point cloud data in the final data model of the head and face is very large. In order to find the most important components and structures in the point cloud data and to remove the influence of redundant data on type classification, principal component analysis is conducted on the point cloud data of the final data model of the head and face to reduce the dimension of the data and reveal its simple structure.
[00108] The step 4 includes:
[00109] step 41: assuming that the set of sample points of the point cloud data in the final data model of the head and face is P = (p_1, p_2, ..., p_n)^T, wherein p_i = (x_i, y_i, z_i)^T and n is the number of the sample points;
[00110] step 42: selecting the three-dimensional coordinates x_i, y_i, z_i of a sample point p_i as three indexes denoted as X_1, X_2, X_3;
[00111] step 43: searching, for a target point p_t = (x_t, y_t, z_t)^T, a set of neighborhood points P_tn = (p_1, p_2, ..., p_k) thereof, wherein k is the number of points in the neighborhood, and calculating the distance d_i from each neighborhood point to the target point and the mean distance d̄;
[00112] step 44: seeking a linear combination of the three indexes X_1, X_2, X_3:

Y_1 = a_11 X_1 + a_12 X_2 + a_13 X_3
Y_2 = a_21 X_1 + a_22 X_2 + a_23 X_3
Y_3 = a_31 X_1 + a_32 X_2 + a_33 X_3

[00113] satisfying the conditions that a_i1² + a_i2² + a_i3² = 1 (i = 1, 2, 3), that Y_1, Y_2, Y_3 are not correlated with each other, and that Var(Y_1) ≥ Var(Y_2) ≥ Var(Y_3), and then obtaining the matrix

C_3×3 = [ a_11 a_12 a_13 ; a_21 a_22 a_23 ; a_31 a_32 a_33 ];

[00114] step 45: calculating the three eigenvalues of the matrix C_3×3, expressed from small to large as λ_0, λ_1, λ_2, and then obtaining the local feature descriptor of the target point: LFD = λ_0 / (λ_0 + λ_1 + λ_2);
[00115] step 46: establishing a principal component analysis panel to complete the type classification according to the results of the principal component analysis.
[00116] In the step 45, the LFD is taken as a description dimension of the local feature of a point. The smaller the LFD value is, the smaller the change of the local area where the point is located, observed along a direction in three-dimensional space; the larger the LFD value is, the more obvious the morphological characteristics of the local area where the point is located are. Principal component analysis (PCA) is used to analyze the main variation patterns of the human head and face, and the results show that the spatial shape of the human head and face is composed of a small number of basis vectors.
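Steps 43-45 read as the usual neighborhood PCA: the λ values are the eigenvalues of the covariance matrix of the k neighborhood points. The covariance-eigenvalue reading below is an interpretation of the patent's linear-combination formulation, and k and all names are illustrative:

```python
import numpy as np

def local_feature_descriptor(points, target, k=10):
    """LFD = lambda0 / (lambda0 + lambda1 + lambda2), where
    lambda0 <= lambda1 <= lambda2 are the eigenvalues of the
    covariance matrix of the k nearest neighbors of the target."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - target, axis=1)     # distances d_i of step 43
    neighbors = points[np.argsort(d)[:k]]           # k nearest neighborhood
    cov = np.cov(neighbors, rowvar=False)           # 3x3 matrix of step 44
    eigvals = np.sort(np.linalg.eigvalsh(cov))      # ascending lambda0..lambda2
    return eigvals[0] / eigvals.sum()

# Points sampled on a flat patch: no variation along the surface
# normal, so the smallest eigenvalue, and hence the LFD, is near zero.
xs, ys = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
plane = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
lfd_flat = local_feature_descriptor(plane, target=np.zeros(3))
```

On a sharp feature such as the nose tip, all three eigenvalues are of comparable size and the LFD approaches its maximum of 1/3.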
[00117] In this embodiment, the results of the principal component analysis in the step 46 are used to establish a principal component analysis panel as shown in FIG. 6, which contains a probability ellipse divided into four units by two lines. Based on the type classification, four types of head model are established: small; short and wide; long and narrow; and big, which represent the population samples in units 1, 2, 3 and 4 respectively. According to the results of the sample analysis, the slope of the line dividing the principal component analysis panel into units 1-2 and units 3-4 is about 0.40767.
[00118] Embodiment 2:
[00119] According to the results of principal component analysis, a more detailed principal component analysis panel can be established to conduct a finer analysis of head type, for example classifying the head type into 8 or even more types; types of face can also be classified in the same way. The classification results can provide a basis for the design of head and face products, such as helmets, masks, goggles, protective masks and other products.
[00120] It should be noted that the above embodiments are used only to describe the technical solutions of the present invention, not to limit them. Although the foregoing embodiments describe the present invention in detail, persons skilled in this field should understand that the technical solutions recorded in the foregoing embodiments may be modified, and some or all of their technical features may be equivalently replaced, without making the nature of the corresponding technical solutions depart from the scope of the technical solutions of the present invention.

Claims (10)

CLAIMS

What is claimed is:
1. A head and face type classification method based on three-dimensional point cloud coordinates, comprising:
step 1: collecting three-dimensional point cloud data of a head and face;
step 2: defining key parameters to get radii of key points;
step 3: processing data to obtain a final head and face data model;
step 4: completing head type classification by adopting principal component analysis according to the final head and face data model;
is characterized in that: the step 3 comprises:
step 31: removing noise from the collected three-dimensional point cloud data of the head and face;
step 32: adjusting coordinates of each mesh vertex of the data model for the denoised point cloud data;
step 33: filling holes;
step 34: smoothing.
2. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 1, is characterized in that: the step 2 comprises:
step 21: extracting equally spaced cross sections from human head and face;
step 22: reading coordinates of each key point of each cross section to get the radius of each key point; in the step 22, for each cross section of the head and face, the number of the key points is N+1, the N+1 key points surround a center point of the cross section and are located on an edge curve of the cross section, the first key point overlaps with the (N+1)th key point, and an angle between a line from the ith key point to the central point of the cross section and a line from the (i+1)th key point to the central point of the cross section is 360°/N, wherein i ≥ 1 and i ≤ N.
3. The head and face type classification method based on three-dimensional point
cloud coordinates according to claim 2, is characterized in that: according to the key parameters defined in the step 2, a way to layer the point cloud data is defined, and then the step 3 is performed by matching an appropriate data processing template.
4. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 1, is characterized in that: the step 31 comprises determining first class noise points and second class noise points based on a distance calculation method.
5. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 4, is characterized in that: the first class noise points are points whose distribution is inconsistent with the real data points of the human head and which are far away from the real data points of the human head; a method of determining the first class noise points is: in the point cloud data of the human head, selecting any point and finding the points in a neighborhood of the point whose distance exceeds a threshold, setting the points exceeding the threshold as the first class noise points, and deleting the first class noise points in data pre-processing.
6. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 4, is characterized in that: the second class noise points are points whose data are partially overlapped; for the registered point cloud data of the human head, which are divided into two parts by a Y-Z plane, two points A0 and B0 with the smallest distance on an edge of the point cloud are found, and ZA and ZB are used to represent the Z coordinates of the two points; if ZA ≥ ZB, the two points are set as new edge points; if ZA < ZB, A1 and B1 are used to represent the next edge data points of A0 and B0 respectively, a distance d1 between A1 and B0 and a distance d2 between B1 and A0 are calculated, and if d1 ≥ d2, A0 and B1 are set as new edge points, otherwise A1 and B0 are set as new edge points; after the new edge points are determined, the data points between the new edge points and the original edge points are set as the second class noise points, and the second class noise points are deleted in data pre-processing.
7. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 1, is characterized in that: the step 32 comprises:
step 321: calculating coordinates of a geometric center point Vc of the data model according to the formula Vc = (1/n) Σ_{i=0}^{n−1} V_i, wherein n is the total number of the mesh vertexes, and V_i is the coordinates of a mesh vertex in three-dimensional space;
step 322: calculating a translational transformation matrix Mm from the geometric center point to the coordinate origin, obtaining the maximum widths in the x, y and z directions respectively, and then calculating a rotational transformation matrix Mr, wherein the rotational transformation matrix Mr makes the maximum width lie in the y direction;
step 323: obtaining an affine transformation matrix M according to the translational transformation matrix Mm and the rotational transformation matrix Mr: M = Mm × Mr;
step 324: adjusting the coordinates of each mesh vertex in the data model according to the formula V_iN = V_i × M (i = 0, 1, ..., n) to obtain a data model with adjusted mesh vertex coordinates, wherein V_iN represents the adjusted coordinates of a mesh vertex in three-dimensional space.
8. The head and face type classification method based on three-dimensional point
cloud coordinates according to claim 1, is characterized in that: the step 33 comprises:
step 331: marking a long axis and a short axis of a data missing area on the top of the head;
step 332: defining a set of planes with normal direction parallel to the long axis,
wherein a set of parallel plane slice curves is formed by the set of planes intersecting with a surface on the top of the head;
step 333: each plane slice curve intersecting with a scanning curve formed by
layered three-dimensional point cloud data of the head and face to obtain two intersections, using the intersections as a part of data points of re-fitting curve to
perform data interpolation to obtain a complete data model of the head and face.
9. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 1, is characterized in that: in the step 34, a position of each vertex on the layered scanning curve formed by data interpolation is adjusted to achieve smoothing, thereby obtaining a final data model of the head and face.
10. The head and face type classification method based on three-dimensional point cloud coordinates according to claim 9, is characterized in that: the step 4 comprises:
step 41: assuming that a set of sample points of the point cloud data in the final data model of the head and face is P = (p_1, p_2, ..., p_n)^T, wherein p_i = (x_i, y_i, z_i)^T and n is the number of the sample points;
step 42: selecting three-dimensional coordinates x_i, y_i, z_i of a sample point p_i as three indexes denoted as X_1, X_2, X_3;
step 43: searching, for a target point p_t = (x_t, y_t, z_t)^T, a set of neighborhood points P_tn = (p_1, p_2, ..., p_k) thereof, wherein k is the number of points in the neighborhood, and calculating the distance d_i from each neighborhood point to the target point and a mean distance d̄;
step 44: seeking a linear combination of the three indexes X_1, X_2, X_3:

Y_1 = a_11 X_1 + a_12 X_2 + a_13 X_3
Y_2 = a_21 X_1 + a_22 X_2 + a_23 X_3
Y_3 = a_31 X_1 + a_32 X_2 + a_33 X_3

satisfying the conditions that a_i1² + a_i2² + a_i3² = 1 (i = 1, 2, 3), that Y_1, Y_2, Y_3 are not related to each other, and that Var(Y_1) ≥ Var(Y_2) ≥ Var(Y_3), and then getting a matrix C_3×3 = [ a_11 a_12 a_13 ; a_21 a_22 a_23 ; a_31 a_32 a_33 ];
step 45: calculating the three eigenvalues of the matrix C_3×3, which are expressed from small to large as λ_0, λ_1, λ_2, and then obtaining a local feature descriptor of the target point LFD = λ_0 / (λ_0 + λ_1 + λ_2);
step 46: establishing a principal component analysis panel to complete the type classification according to results of principal component analysis.
[FIG. 1: flowchart of the method — step 1: collecting three-dimensional point cloud data of a head and face; step 2: defining key parameters to get radii of key points; step 3: processing data to obtain a final head and face data model; step 4: completing head type classification by adopting principal component analysis according to the final head and face data model]
[FIG. 2]
[FIG. 3: (a), (b)]
[FIG. 4]
[FIG. 5]
AU2021105639A 2020-11-11 2021-03-16 Head and face type classification method based on three-dimensional point cloud coordinates Active AU2021105639A4 (en)

Applications Claiming Priority (2)

CN202011254896.8 — priority date 2020-11-11
CN202011254896.8A (CN112418030B) — filed 2020-11-11 — Head and face model classification method based on three-dimensional point cloud coordinates

Publications (1)

AU2021105639A4 — published 2021-10-21


Also Published As

Publication number Publication date
WO2022099958A1 (en) 2022-05-19
CN112418030B (en) 2022-05-13
CN112418030A (en) 2021-02-26


Legal Events

FGI — Letters patent sealed or granted (innovation patent)