WO2022099958A1 - Head-face dimension classification method based on three-dimensional point cloud coordinates - Google Patents


Info

Publication number
WO2022099958A1
WO2022099958A1 PCT/CN2021/080982 CN2021080982W
Authority
WO
WIPO (PCT)
Prior art keywords
head
points
data
point
face
Prior art date
Application number
PCT/CN2021/080982
Other languages
French (fr)
Chinese (zh)
Inventor
冉令华
钮建伟
周玉霖
刘静
赵朝义
张欣
呼慧敏
Original Assignee
中国标准化研究院 (China National Institute of Standardization)
北京科技大学 (University of Science and Technology Beijing)
Priority date
Filing date
Publication date
Application filed by 中国标准化研究院 (China National Institute of Standardization), 北京科技大学 (University of Science and Technology Beijing)
Publication of WO2022099958A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 — Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/30 — Noise filtering

Definitions

  • the invention relates to the technical field of head type classification, in particular to a head and face type classification method based on three-dimensional point cloud coordinates.
  • the measurement and observation items of the human head and face can reflect the morphological characteristics of the head and face, which are important indicators of human population genetics research.
  • the accuracy of head and face data and accurate size classification based on head and face data are also crucial to the design of head and face products.
  • Traditional head and face features are mostly described by one- or two-dimensional data and can therefore only represent the length, width, and circumference of the head and face.
  • The shape and curves of the head and face are very complex, and neither one- nor two-dimensional data can fully capture the shape and surface information between measurement points on the head and face surface.
  • Size classification of the human head based on one- or two-dimensional data is therefore highly inaccurate.
  • Domestic head size classification is mainly based on GB/T 2428-1998 "Head and Face Dimensions of Chinese Adults" and GB/T 23461-2009 "Three-Dimensional Dimensions of Adult Male Head Shapes".
  • GB/T 2428-1998 "Head and Face Dimensions of Chinese Adults" is based on small-sample measurement data: it regresses data measured in 1987-1988 and provides relationships between one-dimensional dimensions and a small amount of two-dimensional dimensions, focusing mainly on two-dimensional graphic design applications.
  • Chinese patent application CN102125323A discloses a method for formulating head sizes for minors aged 4-12 based on the coverage rate of three-dimensional image feature parameters. It includes: a measuring step, in which a number of minors aged 4-12 are scanned with a three-dimensional body scanner and feature parameters are extracted from the images with 3D scanning software; a preprocessing step, in which the extracted feature parameters are preprocessed, that is, head-size data files for minors aged 4-12 are created and edited, abnormal body-size data are detected and processed, and unqualified samples are removed; and a size-formulation step, in which a statistical table is compiled from the feature parameters and the coverage rate between feature parameters is calculated from the table; no model is set where the coverage rate is less than 5‰, and a model is set where it is 5‰ or greater.
  • Although this method extracts head feature parameters of minors through three-dimensional scanning, the extracted feature parameters are still limited to head length and head circumference, so the shape and surface information of the head and face is not fully considered when classifying head sizes.
  • the present invention provides a head and face shape classification method based on three-dimensional point cloud coordinates, which comprehensively considers the shape information and surface information of the head and face, and realizes the classification of the head shape of the human body.
  • a head and face shape classification method based on three-dimensional point cloud coordinates comprising the steps of:
  • Step 1 Collect 3D point cloud data of head and face
  • Step 2 Define key parameters and get the radius of key points
  • Step 3 Data processing to obtain the final head and face data model
  • Step 4 According to the final head and face data model, use principal component analysis to complete the head size classification.
  • step 1 the head and face are scanned by a three-dimensional scanning device to obtain three-dimensional point cloud data of the head and face.
  • In any of the above schemes, step 2 includes the sub-steps:
  • Step 21 Extract the equidistant cross-sections of the human head and face
  • Step 22 Read the coordinates of each key point of each cross section, and obtain the radius at each key point.
  • In step 22, for each cross-section of the head and face, the number of key points is defined as N+1, and the N+1 key points surround the center point of the cross-section and lie on the edge of the cross-section.
  • The first key point overlaps with the (N+1)-th key point, and the angle between the line connecting the i-th key point to the center point of the cross-section and the line connecting the (i+1)-th key point to the center point is 360°/N, where 1 ≤ i ≤ N.
  • In step 2, a method for layering the scanned point cloud data is defined, and the process then proceeds to step 3 by matching a suitable data processing template.
  • the step 3 comprises:
  • Step 31 Denoise the collected 3D point cloud data of the head and face
  • Step 32 Adjust the coordinates of each grid vertex of the data model for the denoised point cloud data
  • Step 33 fill the hole
  • Step 34 Smoothing.
  • Step 31 includes determining the first type of noise point and the second type of noise point based on a distance calculation method.
  • The first type of noise points are points that are inconsistent with the real distribution of head data points and lie far away from them.
  • The method for determining the first type of noise points is: in the point cloud data set of the human head, arbitrarily select a point and find the points in its neighborhood whose distance to it exceeds a set threshold; the points beyond the threshold are set as the first type of noise points and are removed during data preprocessing.
  • The second type of noise points are points where some data overlap.
  • The method for determining and deleting the second type of noise points is: for the registered human-body point cloud, the YZ plane is used to segment the head into two parts; two points A_0 and B_0 with the smallest distance are found on the edge of the point cloud, and their z-coordinates are denoted z_A and z_B respectively.
  • step 32 comprises the steps:
  • Step 321 According to the formula V_c = (1/n) Σ_{i=1}^{n} V_i, calculate the coordinates of the geometric center point of the data model, where n is the total number of grid vertices and V_i is the position coordinate of a grid-model vertex in three-dimensional space;
  • Step 322 Calculate the translation transformation matrix M_m from the geometric center point to the coordinate origin, obtain the maximum widths in the x, y, and z directions respectively, and then calculate the rotation transformation matrix M_r such that the y direction has the maximum width;
  • the mesh model is a triangular mesh model.
  • In step 33, hole filling means re-sampling the data model after the grid vertex coordinates have been adjusted, for the case where crown data is missing from the collected three-dimensional point cloud data of the head and face, and reconstructing the data model.
  • step 33 specifically includes:
  • Step 331 Mark the long axis and short axis of the missing area of the overhead data
  • Step 332 define a group of planes whose normal direction is parallel to the long axis, and the group of planes intersects with the top surface of the head to form a group of parallel plane slice curves;
  • Step 333 Intersect each plane slice curve with the scan curves formed by the layered 3D point cloud data of the head and face to obtain two intersection points per curve; use these intersection points as part of the data points of the re-fitted curve and perform data interpolation to obtain a complete head and face data model.
  • step 34 the position of each vertex on the layered scanning curve formed after data interpolation is adjusted to achieve smoothing, and then obtain the final head and face data model.
  • the step 4 includes:
  • Step 42 Select the three-dimensional coordinates x_i, y_i, and z_i of the sample point p_i as three indicators, denoted X_1, X_2, X_3;
  • Step 44 Seek the linear combination of the three indicators X_1, X_2, X_3;
  • Step 45 Calculate the three eigenvectors of the 3×3 matrix C, denoted ψ_0, ψ_1, and ψ_2 in order of eigenvalue from small to large, and then obtain the local feature descriptor of the target point;
  • Step 46 Establish a principal component analysis panel according to the principal component analysis result, and complete the size classification.
  • The head and face shape classification method based on three-dimensional point cloud coordinates of the present invention comprehensively considers the shape information and surface information of the head and face according to the point cloud data obtained by three-dimensional scanning;
  • by selecting a data processing template to complete the data processing procedure, data recovery and noise reduction are achieved and analysis efficiency is improved; at the same time, principal component analysis is creatively applied to the point cloud coordinates, which improves the accuracy of head size classification.
  • FIG. 1 is a schematic flowchart of a preferred embodiment of a method for classifying head and face shapes based on three-dimensional point cloud coordinates according to the present invention.
  • FIG. 2 is a schematic diagram of key points defined for a certain cross section in the embodiment shown in FIG. 1 according to the three-dimensional point cloud coordinate-based head and face shape classification method of the present invention.
  • FIG. 3 is a schematic diagram of filling holes for missing data on the top of the head of the embodiment shown in FIG. 1 according to the method for classifying head and face shapes based on three-dimensional point cloud coordinates according to the present invention.
  • Fig. 4 is a schematic diagram of a complete head and face data model obtained after filling holes according to the embodiment as shown in Fig. 1 of the head and face shape classification method based on three-dimensional point cloud coordinates of the present invention.
  • FIG. 5 is a schematic diagram of the smoothing method of the embodiment shown in FIG. 1 according to the method for classifying head and face shapes based on three-dimensional point cloud coordinates according to the present invention.
  • FIG. 6 is a schematic diagram of the head shape classification of the embodiment shown in FIG. 1 according to the three-dimensional point cloud coordinate-based head and face shape classification method of the present invention.
  • a method for classifying head and face shape based on three-dimensional point cloud coordinates includes the steps:
  • Step 1 Collect 3D point cloud data of head and face
  • Step 2 Define key parameters and get the radius of key points
  • Step 3 Data processing to obtain the final head and face data model
  • Step 4 According to the final head and face data model, use principal component analysis to complete the head size classification.
  • step 1 the head and face are scanned with a three-dimensional scanning device to obtain three-dimensional point cloud data of the head and face.
  • The number of points obtained for the entire head is between 38,420 and 56,200, the number of points on the upper head (above the two ears) is between 15,223 and 24,221, and the number of points in the forward π/2 range is between 2,809 and 3,977.
  • Step 2 includes sub-steps:
  • Step 21 Extract the equidistant cross-sections of the human head and face
  • Step 22 Read the coordinates of each key point of each cross section, and obtain the radius at each key point.
  • The number of key points is defined as N+1, and the N+1 key points surround the center point of the cross-section and lie on the edge curve of the cross-section.
  • The first key point overlaps with the (N+1)-th key point, and the angle between the line connecting the i-th key point to the center point of the cross-section and the line connecting the (i+1)-th key point to the center point is 360°/N, where 1 ≤ i ≤ N.
  • The human head is an irregular curved body. If cross-sections of the head are extracted at equal intervals, the cross-sections at different parts have different perimeters, center points, and arc curvatures, but adjacent cross-sections show strong similarity and coherence, especially in the arc shape of the cross-section.
  • The characteristics of a cross-section of the human head can be represented by the key points, and the head can be divided into multiple contour layers according to the key points.
  • Cross-sections of the head are extracted at equal intervals of 2 mm.
  • the center point of the cross-section is defined as the origin of the plane coordinate axis
  • 61 key points are defined to represent the cross-section of the human head.
  • The 61 key points are connected in turn through the four quadrants, overlapping end to end; that is, key point 1 and key point 61 are the same point, with numbering increasing clockwise. Key points 1 (61), 16, 31, and 46 lie on the X and Y axes of the plane coordinate system, with key point 1 on the positive X axis.
  • Key point 7 forms an angle of 36° with the X axis, and so on, and key point 16 lies on the Y axis.
  • Key point 31 lies on the negative X axis and is symmetric with key point 1 about the Y axis, key point 46 lies on the positive Y axis, and key point 61 coincides with key point 1 on the positive X axis.
  • Key point 1 and key point 31 are the left and right center points of the head cross-section, and key point 16 and key point 46 are the front and rear center points of the head cross-section;
  • connecting the 61 key points to the origin divides the contour of the cross-section into 60 segments; the central angle corresponding to each arc segment is about 6°, and the two sides of each arc are the distances from the two adjacent key points to the origin.
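The key-point sampling described above can be sketched as follows. This is an illustrative sketch only: the function name, the use of the section's mean point as its center, and the nearest-angle resampling scheme are assumptions, since the patent does not fix a resampling method.

```python
import numpy as np

def cross_section_key_points(edge_pts, n_segments=60):
    """Sample N+1 key points at equal angular intervals (360/N degrees,
    clockwise from the positive X axis) around the center of one head
    cross-section, and return the radius at each key point."""
    center = edge_pts.mean(axis=0)             # origin of the plane coordinate axes
    rel = edge_pts - center
    angles = np.arctan2(rel[:, 1], rel[:, 0])  # angle of every edge point
    radii = np.linalg.norm(rel, axis=1)
    key_radii = []
    for i in range(n_segments + 1):            # key point 1 .. key point N+1
        target = -2 * np.pi * i / n_segments   # clockwise step of 360/N degrees
        # pick the edge point whose direction is closest to the target angle
        diff = np.angle(np.exp(1j * (angles - target)))
        key_radii.append(radii[np.argmin(np.abs(diff))])
    return center, np.asarray(key_radii)

# 61 key points for a circular test section of radius 100 mm
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = 100 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
center, r = cross_section_key_points(pts)
print(len(r))  # 61: key point 1 and key point 61 coincide
```

For a circular section every radius is the section radius, and the first and last key points are the same point, matching the end-to-end overlap described above.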
  • In step 2, the method of layering the scanned point cloud data is defined, and the process then proceeds to step 3 by matching the appropriate data processing template.
  • the initial point cloud data of the human head obtained by non-contact measurement is scattered, has holes, noise points and is not smooth, so it is necessary to process the point cloud data.
  • the step 3 includes:
  • Step 31 Denoise the collected 3D point cloud data of the head and face
  • Step 32 Adjust the coordinates of each grid vertex of the data model for the denoised point cloud data
  • Step 33 fill the hole
  • Step 34 Smoothing.
  • The selective deletion method is adopted: each data point is judged to be a noise point or not, and noise points are deleted directly.
  • The first type of noise point and the second type of noise point are determined based on a distance calculation method.
  • The method for determining the first type of noise point is: find the point closest to a given point in its neighborhood and calculate the distance between the two points; if this distance is greater than the set threshold, the point is a first-type noise point and its point cloud data is deleted.
  • The method for determining and deleting the second type of noise points is as follows: the second type of noise points are points in the point cloud where part of the data overlaps.
  • The threshold value may be set to the theoretical spacing of the laser scan, that is, the interlayer spacing used when laser-scanning the human head during data collection. Since the coordinate systems of the scanned head point clouds are not completely consistent, all data must be converted into the same coordinate system, which requires registration of the head point clouds. During registration, for each sample, the z-axis points from the origin to the top of the head, the y-axis points from the origin to the tip of the nose, and the x-axis direction is consistent with the cross product of the z-axis and the y-axis; a vector connecting the sample centroid and the nose tip is defined, and based on this vector all head samples are rotated around the z-axis so that the nose tips are aligned in the same direction. The next head data point refers to the point with the next-smallest distance on the edge of the scan line corresponding to the front and back of the head.
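The first-type noise filter can be sketched with a brute-force nearest-neighbour distance test. The function name and the O(n²) distance computation are illustrative assumptions; a KD-tree would be used at real scan sizes.

```python
import numpy as np

def remove_isolated_points(points, threshold):
    """First-type noise removal: for each point, find its nearest
    neighbour; if that distance exceeds the threshold (e.g. the layer
    spacing of the laser scan), treat the point as noise and drop it."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)   # ignore each point's zero self-distance
    nearest = dist.min(axis=1)       # distance to the closest other point
    return points[nearest <= threshold]

cloud = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 4.0],
                  [50.0, 50.0, 50.0]])        # last point is isolated noise
clean = remove_isolated_points(cloud, threshold=2.0)
print(len(clean))  # 3: the isolated point is deleted
```

With the threshold set to the 2 mm layer spacing, points on adjacent scan layers survive while the far-away outlier is removed.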
  • the 3D point cloud data obtained by scanning is initially read in at any position and in any direction.
  • The data model at this stage cannot be displayed correctly in the system and would cause great inconvenience in later processing. Therefore, after denoising the scanned data model, the coordinates of each mesh vertex in the model are adjusted, and the template is then matched.
  • Step 32 includes the steps:
  • Step 321 According to the formula V_c = (1/n) Σ_{i=1}^{n} V_i, calculate the coordinates of the geometric center point of the data model, where n is the total number of grid vertices and V_i is the position coordinate of a grid-model vertex in three-dimensional space;
  • Step 322 Calculate the translation transformation matrix M_m from the geometric center point to the coordinate origin, obtain the maximum widths in the x, y, and z directions respectively, and then calculate the rotation transformation matrix M_r such that the y direction has the maximum width;
  • the positive direction of the x-axis is defined as the left side of the data model, and the negative direction is defined as the right side of the data model.
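Steps 321-322 can be sketched as follows. The 90° rotation about z used to give the y direction the maximum width is an assumption for illustration, since the patent leaves the exact rotation M_r open.

```python
import numpy as np

def center_and_orient(vertices):
    """Translate the mesh's geometric center V_c = (1/n) * sum(V_i) to the
    origin, then rotate about z so that the y direction carries the
    maximum width (only the simple 90-degree case is sketched here)."""
    vc = vertices.mean(axis=0)                 # geometric center point V_c
    moved = vertices - vc                      # translation M_m applied
    widths = moved.max(axis=0) - moved.min(axis=0)
    if widths[0] > widths[1]:                  # x wider than y: rotate 90° about z
        rot = np.array([[0.0, -1.0, 0.0],
                        [1.0,  0.0, 0.0],
                        [0.0,  0.0, 1.0]])
        moved = moved @ rot.T
    return moved

verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                  [10.0, 2.0, 0.0], [0.0, 2.0, 1.0]])
out = center_and_orient(verts)
print(np.allclose(out.mean(axis=0), 0))  # True: center now at the origin
```

After the transform the centroid sits at the coordinate origin and the widest extent lies along y, as step 322 requires.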
  • the mesh model used is a triangular mesh model.
  • Each simplicial complex contains a set of simplices {v_0}, {v_0, v_1}, {v_0, v_1, v_2}, which are the mesh vertices, connecting edges, and triangular patches defined on R^3, respectively.
  • the triangular mesh model consists of two parts: geometric information and topological information.
  • the basic topological entities that express the shape include:
  • Vertex The positions of vertices are represented by three-dimensional (geometric) points. Points are the most basic elements of a triangular mesh model, and other elements are directly or indirectly formed by points.
  • Edge An edge is the intersection of two adjacent faces, and the direction of the edge is from the start vertex to the end vertex.
  • Ring A ring is a closed boundary composed of ordered, directed edges.
  • The start point of each edge in the ring coincides with the end point of the previous edge, and its end point coincides with the start point of the next edge, forming a closed loop in which all edges run in the same direction.
  • Rings are divided into inner and outer rings: the edges of an inner ring are connected in the clockwise direction, and the edges of an outer ring are connected in the counterclockwise direction.
  • Face A face is a triangular area enclosed by a closed ring.
  • A face also has directionality, which is determined by its boundary ring.
  • The normal vector of a face enclosed by an outer-ring boundary points outward, and the face is called a forward face; the normal vector of a face enclosed by an inner-ring boundary points inward, and the face is called a reverse face.
  • Each face on the model surface is a forward face, and the inner faces are reverse faces.
  • the topological entities of the triangular mesh model satisfy the following relationships:
  • a triangle can only intersect with other triangles on its sides;
  • a boundary edge can only have one adjacent triangle.
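The last relationship (a boundary edge has exactly one adjacent triangle, while an interior edge is shared by two) can be checked with a simple edge count; the function name is illustrative.

```python
from collections import Counter

def boundary_edges(triangles):
    """Count how many triangles share each undirected edge: edges seen
    once are boundary edges, edges seen twice are interior edges."""
    count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n == 1]

# Two triangles sharing edge (1, 2): that edge is interior, the rest are boundary.
tris = [(0, 1, 2), (1, 3, 2)]
print(sorted(boundary_edges(tris)))  # [(0, 1), (0, 2), (1, 3), (2, 3)]
```

An edge counted more than twice would violate the relationship that a triangle may only meet other triangles along its sides.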
  • the 3D scanning obtains layered data points, which cannot completely cover the entire human head. Therefore, it is necessary to resample the data model after the grid vertex coordinates are adjusted to reconstruct the human head model, that is, perform step 33 to fill the hole.
  • Figure 3(a) is a schematic diagram of the top position of the data model after the grid vertex coordinates are adjusted. It can be seen that from the top of the head, the missing data is surrounded by a blank area that is approximately an ellipse.
  • Step 331 Mark the long axis and short axis of the missing area of the overhead data, as shown in Figure 3(a); then execute:
  • Step 332 Define a set of planes whose normal direction is parallel to the long axis, and this set of planes intersects with the top surface of the head to form a set of parallel plane slice curves, as shown in Figure 3(b); finally execute:
  • Step 333 Intersect each plane slice curve with the scan curves formed by the layered 3D point cloud data of the head and face to obtain two intersection points per curve; use these intersection points as part of the data points of the re-fitted curve and perform data interpolation to obtain a complete head and face data model.
  • Step 333 is performed by means of cubic B-spline curves and the interpolation-curve skinning method.
  • Each slice curve intersects the scan curve formed by the original layered data points at two points, which are used as part of the data points of the re-fitted curve to perform data interpolation.
  • m+1 data points are interpolated between the two intersection points of the same plane slice curve; the interpolated data points are denoted Q_0, Q_1, …, Q_m.
  • The interpolation curve is a cubic B-spline P(u) = Σ_j D_j N_{j,3}(u), where D_j are the control vertices of the curve and N_{j,3}(u) are the cubic B-spline basis functions.
  • The node values ū_k in the definition domain are obtained.
  • Substituting the node values into the equation in turn, the interpolation conditions must be satisfied, that is: P(ū_k) = Q_k, k = 0, 1, …, m.
  • The first and last points of the curve coincide with those of the original curve, which gives the endpoint conditions of the interpolation curve.
  • The surface data generated by the interpolation-curve skinning method is used to supplement the missing data at the top of the head model, yielding a complete head and face data model, as shown in Figure 4.
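The hole-filling interpolation can be sketched with SciPy's cubic B-spline interpolation in place of the patent's explicit control-vertex solve; the synthetic head-top arc and all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Sparse data points Q_0..Q_m along one plane slice curve (the two
# intersection points plus neighbouring scan points), sampled here from
# a synthetic head-top arc z = sqrt(r^2 - y^2) with a hole in the middle.
r = 90.0
y_known = np.array([-60.0, -40.0, 40.0, 60.0])   # hole between y = -40 and 40
z_known = np.sqrt(r**2 - y_known**2)

# Cubic (k=3) B-spline interpolation: the spline passes through every
# data point, matching the interpolation conditions P(u_k) = Q_k.
spline = make_interp_spline(y_known, z_known, k=3)

y_fill = np.linspace(-40, 40, 9)                 # resample inside the hole
z_fill = spline(y_fill)
print(float(abs(z_fill[4] - r)))                 # interpolation error at the crown
```

Repeating this for every slice curve and skinning across the filled curves yields the supplemented crown surface.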
  • In step 34, the position of each vertex on the layered scan curves formed by the data interpolation is adjusted to achieve smoothing, and the final head and face data model is then obtained.
  • The Laplacian smoothing method is used to smooth the curves.
  • The principle is shown in Figure 5.
  • The algorithm adjusts the position of each vertex according to the geometric information around it: each vertex V_i is moved toward the average position of its neighbors, V_i ← V_i + λ((1/|N(i)|) Σ_{j∈N(i)} V_j − V_i), where N(i) is the set of vertices adjacent to V_i and λ is the smoothing factor, yielding the final head and face data model.
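Laplacian smoothing of an interpolated scan curve can be sketched as follows; the neighbour structure, smoothing factor, and iteration count are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Each listed vertex moves a fraction lam toward the average of its
    neighbouring vertices, damping layer-to-layer jitter; vertices with
    no neighbour entry (e.g. curve endpoints) stay fixed."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in neighbors.items():
            avg = v[list(nbrs)].mean(axis=0)
            new_v[i] = v[i] + lam * (avg - v[i])
        v = new_v
    return v

# Noisy open polyline: interior vertices are smoothed, endpoints fixed.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, 0.0]])
nbrs = {1: [0, 2], 2: [1, 3], 3: [2, 4]}   # endpoints 0 and 4 have no entry
out = laplacian_smooth(pts, nbrs)
print(abs(out[2, 1]) < 0.5)  # True: the zig-zag at vertex 2 is damped
```

With fixed endpoints the interior of the curve converges toward a straight segment, which is the intended flattening of interpolation artifacts.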
  • the amount of point cloud data in the final head and face data model is very large.
  • the data is subjected to principal component analysis to perform data dimensionality reduction and reveal the simple structure of the data.
  • Step 4 includes:
  • Step 42 Select the three-dimensional coordinates x_i, y_i, and z_i of the sample point p_i as three indicators, denoted X_1, X_2, X_3;
  • Step 44 Seek the linear combination of the three indicators X_1, X_2, X_3;
  • Step 45 Calculate the three eigenvectors of the 3×3 matrix C, denoted ψ_0, ψ_1, and ψ_2 in order of eigenvalue from small to large, and then obtain the local feature descriptor of the target point;
  • Step 46 Establish a principal component analysis panel according to the principal component analysis result, and complete the size classification.
  • the LFD is used as the local feature description dimension of a point.
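Steps 42-45 can be sketched as follows. The exact LFD formula is not spelled out in the text, so the surface-variation ratio λ_0 / (λ_0 + λ_1 + λ_2) used here is an assumption.

```python
import numpy as np

def local_feature_descriptor(neighborhood):
    """Take the x/y/z coordinates of the points around a target point as
    three indicators X1, X2, X3, form the 3x3 covariance matrix C, and
    sort its eigenvalues in ascending order (l0 <= l1 <= l2)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    C = centered.T @ centered / len(neighborhood)   # 3x3 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(C))        # l0 <= l1 <= l2
    return eigvals, eigvals[0] / eigvals.sum()      # assumed LFD form

# A flat patch has almost no variation normal to the plane, so LFD ~ 0.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
eigvals, lfd = local_feature_descriptor(patch)
print(lfd < 1e-9)  # True for a perfectly flat patch
```

The eigenvectors of C corresponding to the sorted eigenvalues are ψ_0, ψ_1, ψ_2; projecting the samples onto the two largest-eigenvalue directions gives the principal component scores used by the classification panel.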
  • the principal component analysis result in step 46 is used to build a principal component analysis panel as shown in FIG. 6 , which contains one probability ellipse, and two lines divide the probability ellipse into four units.
  • Four types of head models are established: small, short/wide, long/narrow, and large, where small, long/narrow, short/wide, and large denote the head models of the population samples located in units 1, 2, 3, and 4, respectively.
  • The slope of the lines dividing units 1 and 2 from units 3 and 4 is about 0.40767.
  • A more refined principal component analysis panel can be established to carry out a more detailed size analysis, for example dividing the head sizes into eight or even more categories sorted by size.
  • The classification results can provide a basis for the design of head and face products, such as design models for helmets, masks, goggles, protective face shields, and other products.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A head-face dimension classification method based on three-dimensional point cloud coordinates, comprising: step 1: acquiring head-face three-dimensional point cloud data (1); step 2: defining a key parameter to obtain a radius at a key point (2); step 3: performing data processing to obtain a final head-face data model (3); and step 4: according to the final head-face data model, completing head dimension classification using principal component analysis (4), wherein step 3 comprises: step 31: denoising the acquired head-face three-dimensional point cloud data; step 32: for the denoised point cloud data, adjusting data model grid vertex coordinates; step 33: performing hole filling; and step 34: performing smoothing. According to the method, head-face shape information and curved surface information are comprehensively considered according to point cloud data obtained by three-dimensional scanning; by selecting a data processing template to complete a data processing procedure, data recovery and noise reduction effects are achieved, and the analysis efficiency is improved; in addition, principal component analysis is creatively performed on point cloud coordinates, so that the accuracy of head dimension classification is improved.

Description

一种基于三维点云坐标的头面部号型分类方法A head and face shape classification method based on three-dimensional point cloud coordinates 技术领域technical field
本发明涉及头部型号分类技术领域,具体涉及一种基于三维点云坐标的头面部号型分类方法。The invention relates to the technical field of head type classification, in particular to a head and face type classification method based on three-dimensional point cloud coordinates.
背景技术Background technique
人体头面部的测量及观察项目可以反映出头面部的形态特征,这些特征是人类群体遗传学研究的重要指标。同时,头面部数据的准确性以及依据头面部数据进行准确的号型分类对头面部用品的设计也至关重要。传统的头面部特征多采用一维或者二维数据进行描述因而只能表现头面部的长度、宽度、围度特征,而头面部的形状和曲线十分复杂,一维或者二维数据均不能充分体现人体头面部表面测量点间的形状信息和曲面信息。依据一维或者二维数据对人体头部进行号型分类具有很大的不准确性。The measurement and observation items of the human head and face can reflect the morphological characteristics of the head and face, which are important indicators of human population genetics research. At the same time, the accuracy of head and face data and accurate size classification based on head and face data are also crucial to the design of head and face products. The traditional head and face features are mostly described by one-dimensional or two-dimensional data, so they can only represent the length, width, and circumference of the head and face. The shape and curve of the head and face are very complex, and neither one-dimensional or two-dimensional data can fully reflect it. The shape information and surface information between the measurement points of the human head and face surface. The size classification of human head based on one-dimensional or two-dimensional data has great inaccuracy.
At present, domestic head-size classification relies mainly on GB/T 2428-1998 "Head-face dimensions of Chinese adults" and GB/T 23461-2009 "Three-dimensional dimensions of the head form of adult males". GB/T 2428-1998 is based on a small measurement sample: it regresses data measured in 1987-1988 and provides one-dimensional dimensions together with a small amount of two-dimensional dimensional relationships, mainly for two-dimensional design applications. GB/T 23461-2009 is based on the two-dimensional distribution of the head width-length and head height-length indices of adult males and gives only the three-dimensional head-form dimensions of Chinese adult males, so its applicability is limited.
Chinese patent application CN102125323A discloses a method for formulating head sizes for minors aged 4-12 based on the coverage rate of three-dimensional image feature parameters. It comprises: a measuring step, in which a number of minors aged 4-12 are scanned with a three-dimensional body scanner and feature parameters are extracted from the images with 3D scanning software; a preprocessing step, in which the extracted feature parameters are preprocessed, i.e. head-dimension data files for the minors are created and edited, abnormal body-dimension data are detected and handled, and unqualified samples are removed; and a size-formulation step, in which a statistical table is compiled from the feature parameters and the coverage rate between feature parameters is computed from it, no size being set where the coverage rate is below 5‰ and a size being set where it is 5‰ or above. Although this method extracts head feature parameters of minors by three-dimensional scanning, the extracted parameters are still limited to head length and head circumference, so the shape and surface information of the head and face is not fully considered when classifying head sizes.
SUMMARY OF THE INVENTION
To solve the above technical problems, the present invention provides a head-face size classification method based on three-dimensional point cloud coordinates, which comprehensively considers the shape and surface information of the head and face and classifies human head sizes accordingly.
A head-face size classification method based on three-dimensional point cloud coordinates, comprising the steps of:
Step 1: collect three-dimensional point cloud data of the head and face;
Step 2: define key parameters and obtain the radius at each key point;
Step 3: process the data to obtain the final head-face data model;
Step 4: classify head sizes by principal component analysis of the final head-face data model.
Preferably, in step 1 the head and face are scanned with a three-dimensional scanning device to obtain three-dimensional point cloud data of the head and face.
In any of the above schemes it is preferred that step 2 comprises the sub-steps:
Step 21: extract equidistant cross-sections of the human head and face;
Step 22: read the coordinates of every key point on each cross-section and obtain the radius at each key point.
In any of the above schemes it is preferred that in step 22, for each head-face cross-section, N+1 key points are defined. They surround the centre point of the cross-section and lie on its edge curve; the first key point coincides with the (N+1)-th; and the angle between the line from the i-th key point to the centre point and the line from the (i+1)-th key point to the centre point is 360°/N, with 1 ≤ i ≤ N.
In any of the above schemes it is preferred that the key parameters defined in step 2 determine how the scanned point cloud data are layered, and step 3 is then carried out by matching a suitable data-processing template.
In any of the above schemes it is preferred that step 3 comprises:
Step 31: denoise the collected three-dimensional head-face point cloud data;
Step 32: adjust the coordinates of every mesh vertex of the data model built from the denoised point cloud;
Step 33: fill holes;
Step 34: smooth the model.
In any of the above schemes it is preferred that step 31 comprises determining the first and second types of noise points with a distance-based calculation.
In any of the above schemes it is preferred that the first-type noise points are points far away from, and inconsistent with, the distribution of the real head data points. They are determined as follows: in the head point cloud, select any point and find the points in its neighbourhood whose distance to it exceeds a set threshold; those points are marked as first-type noise and removed during data preprocessing.
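As a sketch, the first-type noise rule above can be implemented as a brute-force nearest-neighbour filter. NumPy, the threshold value, and the O(n²) search are illustrative choices, not the patent's implementation:

```python
import numpy as np

def remove_isolated_points(points, threshold):
    """Drop first-type noise: points whose nearest neighbour is farther
    away than `threshold` (e.g. the scanner's layer spacing).

    points: (n, 3) array of head point-cloud coordinates.
    Returns the filtered (m, 3) array.
    """
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # distance from point i to every other point
        d = np.linalg.norm(points - points[i], axis=1)
        d[i] = np.inf                 # ignore the zero self-distance
        if d.min() > threshold:       # no neighbour within the threshold
            keep[i] = False           # -> first-type noise point
    return points[keep]
```

For production-sized clouds a spatial index (k-d tree) would replace the quadratic scan, but the acceptance rule is the same.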
In any of the above schemes it is preferred that the second-type noise points are points where parts of the data overlap. They are determined and deleted as follows: the registered head point cloud is split into two parts with the Y-Z plane, and the two closest points A0 and B0 are found on the point cloud edges, with z-coordinates z_A and z_B respectively. If z_A ≥ z_B, these two points are set as the new edge points. If z_A < z_B, let A1 and B1 be the next edge data points after A0 and B0 respectively, and compute the distance d1 between A1 and B0 and the distance d2 between B1 and A0: if d1 ≥ d2, points A0 and B1 become the new edge points; otherwise A1 and B0 do. Once the new edge points are determined, the data points between them and the original edge points are marked as second-type noise and deleted during data preprocessing.
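The edge-point rule above, for one pair of edge sequences, might be sketched as follows. The ordering of the input arrays and the index-based return value are assumptions for illustration:

```python
import numpy as np

def merge_edges(edge_a, edge_b):
    """Sketch of the second-type noise rule for one pair of edge
    sequences (the two halves split by the Y-Z plane).

    edge_a, edge_b: (k, 3) arrays ordered along the scan edge, with
    edge_a[0], edge_b[0] the closest pair A0, B0.
    Returns indices (ia, ib) of the new edge points; points before them
    on each sequence are second-type noise to delete.
    """
    a0, b0 = edge_a[0], edge_b[0]
    if a0[2] >= b0[2]:                    # z_A >= z_B: A0 and B0 stay
        return 0, 0
    a1, b1 = edge_a[1], edge_b[1]         # next edge data points A1, B1
    d1 = np.linalg.norm(a1 - b0)          # distance A1-B0
    d2 = np.linalg.norm(b1 - a0)          # distance B1-A0
    if d1 >= d2:
        return 0, 1                       # A0 and B1 are the new edge points
    return 1, 0                           # A1 and B0 are the new edge points
```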
In any of the above schemes it is preferred that step 32 comprises the steps:
Step 321: compute the geometric centre point of the data model according to

$$V_c = \frac{1}{n}\sum_{i=1}^{n} V_i,$$

where n is the total number of mesh vertices and V_i is the position of mesh vertex i in three-dimensional space;
Step 322: compute the translation matrix M_m that moves the geometric centre to the coordinate origin, obtain the maximum widths in the x, y and z directions, and from them compute the rotation matrix M_r that makes the y direction carry the maximum width;
Step 323: obtain the affine transformation matrix M = M_m × M_r from the translation matrix M_m and the rotation matrix M_r;
Step 324: adjust the coordinates of every mesh vertex according to V_iN = V_i × M (i = 0, 1, ..., n), where V_iN is the position of the adjusted vertex in three-dimensional space, to obtain the vertex-adjusted data model.
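A minimal sketch of steps 321-324. The 90° rotation about z below is a simplification of the rotation matrix M_r that gives y the maximum width; the patent does not fix the rotation axis here, so this is an illustrative assumption:

```python
import numpy as np

def adjust_vertices(V):
    """Move the mesh centroid to the origin and rotate about z so that
    the y direction carries the maximum width.
    V: (n, 3) array of mesh vertex coordinates. Returns the adjusted copy.
    """
    center = V.mean(axis=0)            # V_c = (1/n) * sum of V_i
    Vt = V - center                    # translation M_m
    widths = Vt.max(axis=0) - Vt.min(axis=0)
    if widths[0] > widths[1]:          # widest along x: rotate 90 deg about z
        theta = np.pi / 2
        Mr = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
        Vt = Vt @ Mr.T                 # apply the rotation M_r
    return Vt
```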
In any of the above schemes it is preferred that the mesh model is a triangular mesh model.
In any of the above schemes it is preferred that the hole filling of step 33 consists of resampling the vertex-adjusted data model and rebuilding it, to handle the loss of crown data during acquisition of the three-dimensional head-face point cloud.
In any of the above schemes it is preferred that step 33 specifically comprises:
Step 331: mark the long and short axes of the region of missing crown data;
Step 332: define a set of planes whose normal direction is parallel to the long axis; these planes intersect the crown surface in a set of parallel planar slice curves;
Step 333: each planar slice curve intersects the scan curves formed by the layered three-dimensional head-face point cloud in two points; these intersections are used as part of the data points of the refitted curve, and data interpolation yields the complete head-face data model.
In any of the above schemes it is preferred that in step 34 the position of each vertex on the layered scan curves formed by the interpolation is adjusted to smooth the model, giving the final head-face data model.
In any of the above schemes it is preferred that step 4 comprises:
Step 41: let the point set of the sample point cloud in the final head-face data model be P = (p_1, p_2, ..., p_n)^T, where p_i = (x_i, y_i, z_i)^T and n is the number of sample points;
Step 42: take the three coordinates x_i, y_i, z_i of sample point p_i as three indicators, denoted X_1, X_2, X_3;
Step 43: for a target point p_t = (x_t, y_t, z_t)^T, search its neighbourhood point set P_tn = (p_1, p_2, ..., p_k)^T, where k is the number of points in the neighbourhood, and compute the distance d_i from each neighbourhood point to the target point together with the mean

$$\bar{d} = \frac{1}{k}\sum_{i=1}^{k} d_i;$$
Step 44: seek linear combinations of the three indicators X_1, X_2, X_3,

$$F_i = a_{1i}X_1 + a_{2i}X_2 + a_{3i}X_3, \quad i = 1, 2, 3,$$

satisfying the condition

$$a_{1i}^2 + a_{2i}^2 + a_{3i}^2 = 1,$$

from which the matrix

$$C_{3\times 3} = \frac{1}{k}\sum_{j=1}^{k}\left(p_j - \bar{p}\right)\left(p_j - \bar{p}\right)^{T}$$

is obtained, where $\bar{p}$ is the mean of the neighbourhood points.
Step 45: compute the three eigenvalues of the matrix C_{3×3}, denoted λ_0, λ_1, λ_2 from smallest to largest, and from them obtain the local feature descriptor of the target point,

$$\sigma_t = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2}.$$
Step 46: build a principal component analysis panel from the principal component analysis results and complete the size classification.
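Steps 43-45 can be sketched as PCA over a point's k-neighbourhood. Since the formula images are not reproduced in this text, the covariance form and the smallest-eigenvalue-ratio descriptor below are assumptions based on common point-cloud practice rather than the patent's exact expressions:

```python
import numpy as np

def local_pca_descriptor(points, target, k=10):
    """PCA over the k nearest neighbours of `target`.

    points: (n, 3) sample point cloud; target: (3,) query point.
    Returns lambda_0 / (lambda_0 + lambda_1 + lambda_2), where the
    lambdas are the covariance eigenvalues sorted ascending; the ratio
    is ~0 on flat regions and grows with local curvature/noise.
    """
    d = np.linalg.norm(points - target, axis=1)   # distances d_i
    nbrs = points[np.argsort(d)[:k]]              # neighbourhood P_tn
    C = np.cov(nbrs.T)                            # 3x3 covariance of x, y, z
    lam = np.sort(np.linalg.eigvalsh(C))          # lambda_0 <= lambda_1 <= lambda_2
    return lam[0] / lam.sum()
```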
Working from the point cloud data obtained by three-dimensional scanning, the head-face size classification method of the present invention fully considers the shape and surface information of the head and face; completing the data processing by selecting a data-processing template provides data repair and noise reduction and improves analysis efficiency; and applying principal component analysis directly to the point cloud coordinates improves the accuracy of head size classification.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of a preferred embodiment of the head-face size classification method based on three-dimensional point cloud coordinates according to the present invention.
Fig. 2 is a schematic diagram of the key points defined on one cross-section in the embodiment of Fig. 1.
Fig. 3 is a schematic diagram of filling the hole left by missing crown data in the embodiment of Fig. 1.
Fig. 4 is a schematic diagram of the complete head-face data model obtained after hole filling in the embodiment of Fig. 1.
Fig. 5 is a schematic diagram of the smoothing method of the embodiment of Fig. 1.
Fig. 6 is a schematic diagram of the head size classification of the embodiment of Fig. 1.
DETAILED DESCRIPTION
For a better understanding of the present invention, it is described in detail below with reference to specific embodiments.
Embodiment 1
As shown in Fig. 1, a head-face size classification method based on three-dimensional point cloud coordinates comprises the steps:
Step 1: collect three-dimensional point cloud data of the head and face;
Step 2: define key parameters and obtain the radius at each key point;
Step 3: process the data to obtain the final head-face data model;
Step 4: classify head sizes by principal component analysis of the final head-face data model.
In step 1 the head and face are scanned with a three-dimensional scanning device to obtain three-dimensional point cloud data of the head and face.
In this embodiment, the point cloud acquired for the whole head contains between 38,420 and 56,200 points, of which the upper head (above the ears) contains between 15,223 and 24,221 points and the face (defined as the π/2 range of the head facing forward) contains between 2,809 and 3,977 points.
Step 2 comprises the sub-steps:
Step 21: extract equidistant cross-sections of the human head and face;
Step 22: read the coordinates of every key point on each cross-section and obtain the radius at each key point. In step 22, for each head-face cross-section, N+1 key points are defined. They surround the centre point of the cross-section and lie on its edge curve; the first key point coincides with the (N+1)-th; and the angle between the line from the i-th key point to the centre point and the line from the (i+1)-th key point to the centre point is 360°/N, with 1 ≤ i ≤ N.
The human head is an uneven curved body. If equidistant cross-sections are extracted from the head, the cross-sections of different parts differ in perimeter, centre point and arc curvature, but adjacent cross-sections show strong similarity and continuity, especially in the shape of their arcs. By defining key points, the characteristics of each head cross-section can be represented, and the head can be divided into a number of contour layers according to those key points.
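The equidistant cross-section extraction of step 21 can be sketched as simple binning along z. The 2 mm spacing follows the embodiment; the dictionary return type is an illustrative choice:

```python
import numpy as np

def slice_layers(points, spacing=2.0):
    """Bucket head points into horizontal layers `spacing` mm apart
    along z (2 mm in the embodiment).

    points: (n, 3) array. Returns {layer_index: (m, 3) array}, layer 0
    starting at the lowest z value present.
    """
    z0 = points[:, 2].min()
    idx = np.floor((points[:, 2] - z0) / spacing).astype(int)
    return {i: points[idx == i] for i in np.unique(idx)}
```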
In this embodiment, cross-sections are extracted from the head at equal intervals of 2 mm. As shown in Fig. 2, the centre point of the cross-section is defined as the origin of a plane coordinate system, and 61 key points are defined to characterise the cross-section. The 61 key points are connected in sequence through the four quadrants and overlap head-to-tail, i.e. key point 1 and key point 61 are the same point, with numbering increasing clockwise. Key points 1 (61), 16, 31 and 46 lie on the X and Y axes: key point 1 lies on the positive X axis; going clockwise, key point 7 makes a 36° angle with the X axis, and so on; key point 16 lies on the negative Y axis; key point 31 lies on the negative X axis, mirror-symmetric to key point 1 about the Y axis; key point 46 lies on the positive Y axis; and key point 61 makes a 360° angle with the positive X axis, coinciding with key point 1. Key points 1 and 31 are the lateral centre points of the head cross-section, and key points 16 and 46 its front and rear centre points. Connecting the 61 key points to the origin divides the contour of the cross-section into 60 segments; the central angle of each segment is about 6°, and its two sides are the distances from two adjacent key points to the origin.
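The 61-key-point scheme can be sketched as nearest-angle sampling of a centred contour. The nearest-point lookup is an illustrative simplification; the patent reads the key points from the section's edge curve:

```python
import numpy as np

def keypoint_radii(contour, n=60):
    """Sample the radius of one cross-section contour every 360/n
    degrees (6 degrees for n = 60).

    contour: (m, 2) points already centred on the section's centre
    point. Key point 1 lies on the +x axis, numbering runs clockwise,
    and key point n+1 repeats key point 1. Returns (n+1,) radii.
    """
    angles = np.arctan2(contour[:, 1], contour[:, 0])  # angle of each contour point
    radii = np.linalg.norm(contour, axis=1)
    out = []
    for i in range(n + 1):
        target = -2 * np.pi * (i % n) / n              # clockwise from the +x axis
        # wrapped angular distance to the target direction
        diff = np.arctan2(np.sin(angles - target), np.cos(angles - target))
        out.append(radii[np.argmin(np.abs(diff))])
    return np.array(out)
```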
According to the key parameters defined in step 2, the way the scanned point cloud data are layered is defined, and step 3 is then carried out by matching a suitable data-processing template.
The initial head point cloud obtained by non-contact measurement is scattered, has holes and noise points, and is not smooth, so the point cloud data must be processed.
Step 3 comprises:
Step 31: denoise the collected three-dimensional head-face point cloud data;
Step 32: adjust the coordinates of every mesh vertex of the data model built from the denoised point cloud;
Step 33: fill holes;
Step 34: smooth the model.
Noise is inevitably produced during measurement owing to the scanning environment and the system hardware. Points far from the real body data points are called first-type noise points, and points where parts of the data overlap are called second-type noise points. Both types are handled by selective deletion: each data point is judged for noise and, if it is noise, deleted directly.
In step 31 the two types of noise points are determined with a distance-based calculation. First-type noise points are determined as follows: for a given point, the nearest point in its neighbourhood is found and the distance between the two is computed; if this distance exceeds a set threshold, the point is a first-type noise point and its point cloud data are deleted. Second-type noise points are points in the point cloud where parts of the data overlap; they are determined and deleted as follows. The registered head point cloud is split into two parts with the Y-Z plane, and the two closest points A0 and B0 are found on the point cloud edges, with z-coordinates z_A and z_B respectively. If z_A ≥ z_B, these two points are set as the new edge points. If z_A < z_B, let A1 and B1 be the next edge data points after A0 and B0 respectively, and compute the distance d1 between A1 and B0 and the distance d2 between B1 and A0: if d1 ≥ d2, points A0 and B1 become the new edge points; otherwise A1 and B0 do. Once the new edge points are determined, the data points between them and the original edge points are marked as second-type noise and deleted during data preprocessing. In this embodiment, the threshold can be set to the theoretical spacing of the laser scan, i.e. the layer spacing when the laser scans the head during data collection. Because the coordinate systems of the head point clouds obtained by scanning are not fully consistent, all data must be transformed into one coordinate system, which requires registration of the head point clouds. During registration, for each sample the z axis points from the origin to the top of the head, the y axis from the origin to the tip of the nose, and the x axis follows the cross product of the z and y axes; a vector connecting the sample centroid and the nose tip is defined, and, taking this vector as the reference, all head samples are rotated about the z axis so that their nose-tip directions agree. The "next" edge data point is the point with the next-smallest distance on the corresponding scan-line edge of the front or back of the head.
The scanned three-dimensional point cloud data are initially read in at an arbitrary position and orientation; such a data model cannot be displayed correctly in the system and would hamper subsequent processing. Therefore, after the scanned data model is denoised, the coordinates of every mesh vertex in the model are adjusted before template matching.
Step 32 comprises the steps:
Step 321: compute the geometric centre point of the data model according to

$$V_c = \frac{1}{n}\sum_{i=1}^{n} V_i,$$

where n is the total number of mesh vertices and V_i is the position of mesh vertex i in three-dimensional space;
Step 322: compute the translation matrix M_m that moves the geometric centre to the coordinate origin, obtain the maximum widths in the x, y and z directions, and from them compute the rotation matrix M_r that makes the y direction carry the maximum width;
Step 323: obtain the affine transformation matrix M = M_m × M_r from the translation matrix M_m and the rotation matrix M_r;
Step 324: adjust the coordinates of every mesh vertex according to V_iN = V_i × M (i = 0, 1, ..., n), where V_iN is the position of the adjusted vertex in three-dimensional space, to obtain the vertex-adjusted data model.
In this embodiment, the positive x direction is defined as the left side of the data model and the negative direction as its right side. The mesh model used is a triangular mesh model.
A triangular mesh model is defined as a two-tuple M = (K, V), where V = {v_0, v_1, ..., v_m}, v_i ∈ R³, gives the positions of the mesh vertices in three-dimensional space, and K is the simplicial complex describing the mesh topology; each simplicial complex contains a set of simplices {v_0}, {v_0, v_1}, {v_0, v_1, v_2} — the mesh vertices, connecting edges and triangular faces defined on R³, respectively.
A triangular mesh model consists of geometric information and topological information. The basic topological entities that express a shape are:
1) Vertex. A vertex position is represented by a three-dimensional (geometric) point. Points are the most basic elements of a triangular mesh model; all other elements are built from points, directly or indirectly.
2) Edge. An edge is the intersection of two adjacent faces; its direction runs from the start vertex to the end vertex.
3) Loop. A loop is a closed boundary of ordered, directed edges. The start point of each edge coincides with the end point of the previous edge, and its end point with the start point of the next, forming a closed loop with all edges in the same direction. Loops are distinguished as inner and outer: the edges of an inner loop are connected clockwise, those of an outer loop counter-clockwise.
4) Face. A face is a triangular patch region bounded by a closed loop. Faces are also directional, their direction determined by the boundary loop: a face bounded by an outer loop has an outward normal and is called a forward face; a face bounded by an inner loop has an inward normal and is called a reverse face. In a triangular mesh model the surface patches are forward faces and the interior patches reverse faces.
The topological entities of a triangular mesh model satisfy the following relations:
1) a triangular patch may intersect other patches only along its edges;
2) every internal edge has exactly two adjacent triangular patches;
3) a boundary edge has exactly one adjacent triangular patch.
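Relations 2) and 3) can be checked mechanically by counting how many triangles share each undirected edge. The function below is an illustrative validator, not part of the patent:

```python
from collections import Counter

def check_edge_adjacency(triangles):
    """Count triangle adjacency per undirected edge: every edge may be
    shared by at most two triangles (internal), boundary edges by one.

    triangles: list of (v0, v1, v2) vertex-index tuples.
    Returns (internal_edges, boundary_edges) or raises on a violation.
    """
    counts = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1      # undirected edge key
    if any(n > 2 for n in counts.values()):
        raise ValueError("an edge has more than two adjacent triangles")
    internal = sum(1 for n in counts.values() if n == 2)
    boundary = sum(1 for n in counts.values() if n == 1)
    return internal, boundary
```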
Three-dimensional scanning yields layered data points that do not fully cover the head, so the vertex-adjusted data model must be resampled and the head model rebuilt, i.e. the hole filling of step 33 is performed.
As shown in Fig. 3, Fig. 3(a) is a schematic view of the crown of the vertex-adjusted data model; seen from above, the missing data enclose an approximately elliptical blank region.
Step 33 first performs:
Step 331: mark the long and short axes of the region of missing crown data, as shown in Fig. 3(a); then:
Step 332: define a set of planes whose normal direction is parallel to the long axis; these planes intersect the crown surface in a set of parallel planar slice curves, as shown in Fig. 3(b); and finally:
Step 333: each planar slice curve intersects the scan curves formed by the layered three-dimensional head-face point cloud in two points; these intersections are used as part of the data points of the refitted curve, and data interpolation yields the complete head-face data model.
In this embodiment, step 333 is carried out with cubic B-spline curves and the interpolated-curve skinning method.
Each slice curve intersects the scan curves formed by the original layered data points in two points, which are used as part of the data points for refitting, and data are interpolated between them. Between the two intersection points of the same planar slice curve, m+1 data points Q_0, Q_1, ..., Q_m are interpolated. The cubic B-spline curve through the m+1 interpolation points Q_i (i = 0, 1, ..., m) can then be written as

$$P(u) = \sum_{j=0}^{n+1} D_j B_{j,3}(u),$$
其中D j是曲线的控制顶点,B j,3(u)是三次曲线定义在节点矢量U=[u 0,u 1,…,u n+4]上的B样条基函数。根据端点插值与曲线定义域要求,采用定义域两端节点为4重的重节点端点条件,即:u 0=u 1=u 2=u 3=0,u n+1=u n+2=u n+3=u n+4=1。对数据点Q i(i=0,1,…,m)取规范累积弦长参数化得参数值序列
Figure PCTCN2021080982-appb-000009
where D j is the control vertex of the curve, and B j,3 (u) is the B-spline basis function of the cubic curve defined on the nodal vector U=[u 0 , u 1 , . . . , u n+4 ]. According to the requirements of endpoint interpolation and curve definition domain, the endpoint condition of multiple nodes with four nodes at both ends of the domain is adopted, namely: u 0 =u 1 =u 2 =u 3 =0, u n+1 =u n+2 = u n+3 =un +4 =1. For the data point Q i (i=0, 1, .
Figure PCTCN2021080982-appb-000009
Figure PCTCN2021080982-appb-000010
Figure PCTCN2021080982-appb-000010
The knot values within the domain are obtained accordingly:
u_{i+3} = ū_i (i = 0, 1, …, m).
The first point is P_0(0) = Q_0 = D_0 and the last point is P_m(1) = Q_m = D_{n+1}. Substituting the knot values within the curve domain u ∈ [u_3, u_{n+1}] into the equation in turn, the interpolation conditions must be satisfied, namely:
P(u_{i+3}) = Σ_{j=0}^{n+1} D_j B_{j,3}(u_{i+3}) = Q_i (i = 0, 1, …, m)
The above system contains m+1 = n−1 equations in total, which is insufficient to determine the n+1 unknown control vertices it involves; two additional equations given by boundary conditions must therefore be added. To ensure that the interpolated curve is C1-continuous with the original curve and that its first and last points coincide with those of the original curve, the endpoint conditions of the interpolation curve are obtained:
P′_0(0) = 3(D_1 − D_0) = 3D_1 − 3Q_0
P′_m(1) = 3(D_{n+1} − D_n) = 3Q_m − 3D_n
Since a quasi-uniform B-spline curve has a fixed basis-function coefficient matrix, the coefficient matrix in the interpolation computation is also constant. The quasi-uniform cubic B-spline equations can therefore be rewritten in the following matrix form:
[Matrix form of the interpolation system: Figure PCTCN2021080982-appb-000014]
Solving this system of linear equations yields all the unknown control vertices. The quasi-uniform cubic B-spline curve represented by the system of equations is then completely determined.
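The linear solve described above can be sketched in Python with numpy only. The sketch below uses a simplified uniform-knot variant of the system — interpolation rows (D_i + 4D_{i+1} + D_{i+2})/6 = Q_i plus two end-tangent rows (D_2 − D_0)/2 = t_0 — rather than the exact quasi-uniform coefficient matrix of the published figure, whose end conditions are 3(D_1 − D_0) and 3(D_{n+1} − D_n); the function name and the tangent values are illustrative assumptions.

```python
import numpy as np

def bspline_interp_controls(Q, t0, t1):
    """Solve for the control points D of a uniform cubic B-spline that
    interpolates the data points Q, with prescribed end tangents t0, t1.

    Uses the uniform-knot identities
        P(u_i)  = (D_i + 4 D_{i+1} + D_{i+2}) / 6   (interpolation rows)
        P'(u_0) = (D_2 - D_0) / 2                   (tangent rows)
    as a simplification of the quasi-uniform system in the text.
    """
    Q = np.asarray(Q, dtype=float)
    m1, dim = Q.shape                 # m+1 data points
    n = m1 + 2                        # m+3 unknown control points
    A = np.zeros((n, n))
    b = np.zeros((n, dim))
    A[0, 0], A[0, 2] = -0.5, 0.5      # start tangent: (D_2 - D_0)/2 = t0
    b[0] = t0
    for i in range(m1):               # one interpolation row per data point
        A[i + 1, i:i + 3] = [1 / 6, 4 / 6, 1 / 6]
        b[i + 1] = Q[i]
    A[-1, -3], A[-1, -1] = -0.5, 0.5  # end tangent: (D_{m+2} - D_m)/2 = t1
    b[-1] = t1
    return np.linalg.solve(A, b)

# Interpolate a small planar slice curve between two intersection points.
Q = np.array([[0.0, 0.0], [1.0, 0.8], [2.0, 1.0], [3.0, 0.7], [4.0, 0.0]])
D = bspline_interp_controls(Q, t0=[1.0, 1.0], t1=[1.0, -1.0])
# By construction, (D_i + 4 D_{i+1} + D_{i+2}) / 6 reproduces every Q_i.
recon = (D[:-2] + 4 * D[1:-1] + D[2:]) / 6
```

Because the interpolation rows are solved exactly, evaluating the resulting curve at the corresponding knot values reproduces every data point Q_i, which is the behaviour the hole-filling step relies on.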
The surface data generated by the interpolation-curve skinning method is used to fill in the missing data at the top of the head model, yielding a complete head-and-face data model, as shown in Figure 4.
In step 34, the position of each vertex on the layered scan curves formed after data interpolation is adjusted to smooth them, yielding the final head-and-face data model.
In this embodiment, the Laplacian smoothing method is used to smooth the curves; its principle is shown in Figure 5. Based on the geometric information around each vertex, the algorithm adjusts that vertex's position to achieve smoothing, thereby obtaining the final head-and-face data model. The Laplacian smoothing algorithm is as follows:
P_i′ = P_i + λ · ( (1/|N(i)|) Σ_{j∈N(i)} P_j − P_i ), 0 < λ ≤ 1,
where N(i) is the set of vertices adjacent to vertex i.
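As a hedged illustration, the Laplacian vertex-adjustment rule can be sketched for a closed slice polyline, where each vertex's neighbourhood is simply its two adjacent vertices; the step size `lam` and iteration count are illustrative choices, not values from the text.

```python
import numpy as np

def laplacian_smooth_closed(P, lam=0.5, iters=10):
    """Laplacian smoothing of a closed polyline: each pass moves every
    vertex toward the mean of its two neighbours by a fraction lam."""
    P = np.asarray(P, dtype=float).copy()
    for _ in range(iters):
        # Mean of the previous and next vertex along the closed curve.
        nbr_mean = 0.5 * (np.roll(P, 1, axis=0) + np.roll(P, -1, axis=0))
        P += lam * (nbr_mean - P)
    return P

# A noisy circular scan slice: smoothing damps the jitter.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.03, (100, 2))
smooth = laplacian_smooth_closed(noisy)
```

Each pass pulls every vertex toward the midpoint of its neighbours, which suppresses high-frequency jitter while largely preserving the overall slice shape — the smoothing behaviour step 34 describes.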
The point cloud of the final head-and-face data model is very large. To identify the principal components and structures in the point cloud data and to remove the influence of redundant data on the size classification, principal component analysis is applied to the point cloud of the final head-and-face data model, reducing the dimensionality of the data and revealing its underlying structure.
Step 4 includes:
Step 41: Let the point set of the sample point cloud data in the final head-and-face data model be P = (p_1, p_2, …, p_n)^T, where p_i = (x_i, y_i, z_i)^T and n is the number of sample points;
Step 42: Take the three-dimensional coordinates x_i, y_i, z_i of sample point p_i as three indicators, denoted X_1, X_2, X_3;
Step 43: For a target point p_t = (x_t, y_t, z_t)^T, search for its neighborhood point set P_tn = (p_1, p_2, …, p_k)^T, where k is the number of points in the neighborhood; compute the distance d_i from each neighborhood point to the target point and the mean distance
d̄ = (1/k) Σ_{i=1}^{k} d_i;
Step 44: Seek a linear combination of the three indicators X_1, X_2, X_3,
F_i = a_{1i} X_1 + a_{2i} X_2 + a_{3i} X_3 (i = 1, 2, 3),
satisfying the conditions
a_{1i}^2 + a_{2i}^2 + a_{3i}^2 = 1, Cov(F_i, F_j) = 0 (i ≠ j), Var(F_1) ≥ Var(F_2) ≥ Var(F_3),
thereby obtaining the matrix
C_{3×3} [see Figure PCTCN2021080982-appb-000019],
where i = 1, 2, 3.
Step 45: Compute the three eigenvalues of the matrix C_{3×3}, denoted λ_0, λ_1, λ_2 in ascending order, and then obtain the local feature descriptor of the target point:
LFD = λ_0 / (λ_0 + λ_1 + λ_2).
Step 46: Build a principal component analysis panel from the principal component analysis results and complete the size classification.
In step 45, the LFD serves as a local feature descriptor for a point. The smaller the LFD value, the less the local region containing that point varies along one observation direction in three-dimensional space; the larger the LFD, the more pronounced the variation in the morphological features of that local region. Principal component analysis of the main head-and-face variation patterns across all subjects shows that the spatial shape of the human head and face is composed of a small number of basis vectors.
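The descriptor can be sketched as follows, under the assumption (consistent with the description of step 45) that C_{3×3} is the covariance matrix of a point's k-neighbourhood and LFD = λ_0/(λ_0 + λ_1 + λ_2); the function name and the synthetic neighbourhoods are illustrative.

```python
import numpy as np

def local_feature_descriptor(neighbors):
    """LFD of a target point from its k-neighbourhood: the smallest
    eigenvalue of the neighbourhood covariance divided by the eigenvalue
    sum (small = locally flat, large = strong local shape variation)."""
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    C = centered.T @ centered / len(pts)   # 3x3 covariance matrix
    lam = np.sort(np.linalg.eigvalsh(C))   # lambda_0 <= lambda_1 <= lambda_2
    return lam[0] / lam.sum()

# A flat patch has LFD near 0; an isotropic blob pushes LFD toward 1/3.
rng = np.random.default_rng(1)
flat = np.c_[rng.normal(size=(50, 2)), np.zeros(50)]
blob = rng.normal(size=(50, 3))
```

A locally planar neighbourhood gives λ_0 ≈ 0 and hence a small LFD, while an isotropic neighbourhood pushes the ratio toward its maximum of 1/3, matching the interpretation of the descriptor in the text.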
In this embodiment, the principal component analysis results of step 46 are used to build the principal component analysis panel shown in Figure 6, which contains one probability ellipse; two lines divide the probability ellipse into four units. Based on this size classification, head models of four sizes are established: small, short/wide, long/narrow, and large, where small, long/narrow, short/wide, and large correspond to the population samples in units 1, 2, 3, and 4, respectively. Statistical analysis of the sample results determined that the slope of the line dividing units 1 and 2 from units 3 and 4 is approximately 0.40767.
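A hypothetical sketch of the four-unit panel follows. The text gives only the slope (≈0.40767) of the line separating units 1 and 2 from units 3 and 4; the second dividing line is assumed perpendicular here, and the unit numbering and the mapping to size labels are illustrative assumptions, not the published panel.

```python
SLOPE = 0.40767  # slope from the text for the line separating units 1-2 from 3-4

def classify_unit(pc1, pc2, slope2=-1 / SLOPE):
    """Assign a (PC1, PC2) score pair to one of four panel units.

    Hypothetical sketch: the second dividing line is assumed perpendicular
    to the first (the text does not give its slope), and the unit numbers
    and size labels below are illustrative only.
    """
    above1 = pc2 > SLOPE * pc1    # which side of the slope-0.40767 line
    above2 = pc2 > slope2 * pc1   # which side of the assumed second line
    if above1 and above2:
        return 2                  # e.g. long/narrow (illustrative mapping)
    if above1:
        return 1                  # e.g. small
    if above2:
        return 4                  # e.g. large
    return 3                      # e.g. short/wide
```

In practice, the two principal-component scores of a new head scan would be projected onto the panel and the returned unit looked up against the small, long/narrow, short/wide, and large model sizes.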
Embodiment 2
Based on the principal component analysis results, a more refined principal component analysis panel can be built for finer size analysis, for example dividing the head sizes into eight or even more intermediate sizes; the facial shape can likewise be classified by size. The classification results can provide a basis for the design of head-and-face products, for example for the sizing of helmets, face masks, goggles, protective visors, and similar products.
It should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the foregoing embodiments describe the present invention in detail, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without such modifications or replacements departing in essence from the scope of the technical solutions of the present invention.

Claims (10)

  1. A head-and-face size classification method based on three-dimensional point cloud coordinates, comprising the steps of:
    Step 1: collecting three-dimensional point cloud data of the head and face;
    Step 2: defining key parameters and obtaining the radius at each key point;
    Step 3: processing the data to obtain the final head-and-face data model;
    Step 4: completing the head size classification by principal component analysis based on the final head-and-face data model;
    characterized in that step 3 comprises:
    Step 31: denoising the collected three-dimensional head-and-face point cloud data;
    Step 32: adjusting the coordinates of each mesh vertex of the data model for the denoised point cloud data;
    Step 33: hole filling;
    Step 34: smoothing.
  2. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 1, characterized in that step 2 comprises the sub-steps of:
    Step 21: extracting equally spaced cross-sections of the human head and face;
    Step 22: reading the coordinates of each key point on each cross-section to obtain the radius at each key point; in step 22, for each head-and-face cross-section, the number of said key points is defined as N+1; the N+1 key points surround the center point of the cross-section and lie on the edge curve of the cross-section; the first key point coincides with the (N+1)-th key point; and the angle between the line connecting the i-th key point to the center point of the cross-section and the line connecting the (i+1)-th key point to the center point of the cross-section is 360°/N, where i ≥ 1 and i ≤ N.
  3. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 2, characterized in that, according to the key parameters defined in step 2, a scheme for layering the scanned point cloud data is defined, and step 3 is then performed by matching a suitable data-processing template.
  4. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 1, characterized in that step 31 comprises determining noise points of a first type and noise points of a second type based on a distance calculation method.
  5. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 4, characterized in that the noise points of the first type are points that are inconsistent with, and distant from, the distribution of real data points of the human head; the method for determining the noise points of the first type is: in the point cloud data set of the human head, a point is selected arbitrarily, points in its neighborhood whose distance exceeds a set threshold are found, and the points beyond the threshold are set as noise points of the first type; the noise points of the first type are removed during data preprocessing.
  6. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 4, characterized in that the noise points of the second type are points where part of the data in the point cloud overlaps; for the registered human-head point cloud data, the head is split into two parts by the Y-Z plane, and the two points A_0, B_0 with the smallest distance are found on the edge of the point cloud, their z-coordinates being denoted z_A and z_B; if z_A ≥ z_B, these two points are set as new edge points; if z_A < z_B, let A_1 and B_1 denote the edge data points following A_0 and B_0, respectively, and compute the distance d_1 between A_1 and B_0 and the distance d_2 between B_1 and A_0; if d_1 ≥ d_2, points A_0 and B_1 are set as new edge points, otherwise A_1 and B_0 are set as new edge points; after the new edge points are determined, the data points between them and the original edge points are set as noise points of the second type and are deleted during data preprocessing.
  7. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 1, characterized in that step 32 comprises the steps of:
    Step 321: calculating the coordinates of the geometric center point of the data model according to the formula
    C = (1/n) Σ_{i=1}^{n} V_i,
    where n is the total number of mesh vertices and V_i is the position coordinate of a mesh-model vertex in three-dimensional space;
    Step 322: calculating the translation transformation matrix M_m from the geometric center point to the coordinate origin, obtaining the maximum widths in the x, y, and z directions, and then calculating the rotation transformation matrix M_r such that the y direction has the maximum width;
    Step 323: obtaining the affine transformation matrix M from the translation transformation matrix M_m and the rotation transformation matrix M_r, M = M_m × M_r;
    Step 324: adjusting the coordinates of each mesh vertex of the data model according to the formula V_iN = V_i × M (i = 0, 1, …, n) to obtain the data model with adjusted mesh-vertex coordinates, where V_iN denotes the position coordinates of the adjusted vertex in three-dimensional space.
  8. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 1, characterized in that step 33 specifically comprises:
    Step 331: marking the long axis and the short axis of the region of missing data at the top of the head;
    Step 332: defining a set of planes whose normal direction is parallel to the long axis, the planes intersecting the top-of-head surface in a set of parallel plane slice curves;
    Step 333: each plane slice curve intersects the scan curve formed by the layered three-dimensional head-and-face point cloud data obtained by scanning, yielding two intersection points; these intersection points are used as part of the data points for re-fitting the curve, data interpolation is performed, and a complete head-and-face data model is obtained.
  9. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 1, characterized in that, in step 34, the position of each vertex on the layered scan curves formed after data interpolation is adjusted to smooth them, thereby obtaining the final head-and-face data model.
  10. The head-and-face size classification method based on three-dimensional point cloud coordinates according to claim 9, characterized in that step 4 comprises:
    Step 41: letting the point set of the sample point cloud data in the final head-and-face data model be P = (p_1, p_2, …, p_n)^T, where p_i = (x_i, y_i, z_i)^T and n is the number of sample points;
    Step 42: taking the three-dimensional coordinates x_i, y_i, z_i of sample point p_i as three indicators, denoted X_1, X_2, X_3;
    Step 43: for a target point p_t = (x_t, y_t, z_t)^T, searching for its neighborhood point set P_tn = (p_1, p_2, …, p_k)^T, where k is the number of points in the neighborhood, and computing the distance d_i from each neighborhood point to the target point and the mean distance
    d̄ = (1/k) Σ_{i=1}^{k} d_i;
    Step 44: seeking a linear combination of the three indicators X_1, X_2, X_3,
    F_i = a_{1i} X_1 + a_{2i} X_2 + a_{3i} X_3 (i = 1, 2, 3),
    satisfying the conditions
    a_{1i}^2 + a_{2i}^2 + a_{3i}^2 = 1, Cov(F_i, F_j) = 0 (i ≠ j), Var(F_1) ≥ Var(F_2) ≥ Var(F_3),
    thereby obtaining the matrix C_{3×3} [see Figure PCTCN2021080982-appb-100005], where i = 1, 2, 3;
    Step 45: computing the three eigenvalues of the matrix C_{3×3}, denoted λ_0, λ_1, λ_2 in ascending order, and then obtaining the local feature descriptor of the target point,
    LFD = λ_0 / (λ_0 + λ_1 + λ_2);
    Step 46: building a principal component analysis panel from the principal component analysis results and completing the size classification.
PCT/CN2021/080982 2020-11-11 2021-03-16 Head-face dimension classification method based on three-dimensional point cloud coordinates WO2022099958A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011254896.8 2020-11-11
CN202011254896.8A CN112418030B (en) 2020-11-11 2020-11-11 Head and face model classification method based on three-dimensional point cloud coordinates

Publications (1)

Publication Number Publication Date
WO2022099958A1 true WO2022099958A1 (en) 2022-05-19

Family

ID=74781112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080982 WO2022099958A1 (en) 2020-11-11 2021-03-16 Head-face dimension classification method based on three-dimensional point cloud coordinates

Country Status (3)

Country Link
CN (1) CN112418030B (en)
AU (1) AU2021105639A4 (en)
WO (1) WO2022099958A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418030B (en) * 2020-11-11 2022-05-13 中国标准化研究院 Head and face model classification method based on three-dimensional point cloud coordinates
CN114491718B (en) * 2022-01-26 2023-03-24 广西路桥工程集团有限公司 Geological profile multi-segment line optimization method and system for finite element analysis
CN115345908B (en) * 2022-10-18 2023-03-07 四川启睿克科技有限公司 Human body posture recognition method based on millimeter wave radar
CN116503429B (en) * 2023-06-28 2023-09-08 深圳市华海天贸科技有限公司 Model image segmentation method for biological material 3D printing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200126295A1 (en) * 2018-10-22 2020-04-23 The Hong Kong Polytechnic University Method and/or system for reconstructing from images a personalized 3d human body model and thereof
CN111710036A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Method, device and equipment for constructing three-dimensional face model and storage medium
CN112418030A (en) * 2020-11-11 2021-02-26 中国标准化研究院 Head and face model classification method based on three-dimensional point cloud coordinates

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592136B (en) * 2011-12-21 2013-10-16 东南大学 Three-dimensional human face recognition method based on intermediate frequency information in geometry image
CN103544733B (en) * 2013-10-24 2017-01-04 北京航空航天大学 The three-dimensional human head triangular mesh model method for building up analyzed based on Statistical Shape
CN106407985B (en) * 2016-08-26 2019-09-10 中国电子科技集团公司第三十八研究所 A kind of three-dimensional human head point cloud feature extracting method and its device
CN106780591B (en) * 2016-11-21 2019-10-25 北京师范大学 A kind of craniofacial shape analysis and Facial restoration method based on the dense corresponding points cloud in cranium face
CN107767457B (en) * 2017-10-09 2021-04-06 东南大学 STL digital-analog generating method based on point cloud rapid reconstruction
CN109671505B (en) * 2018-10-25 2021-05-04 杭州体光医学科技有限公司 Head three-dimensional data processing method for medical diagnosis and treatment assistance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIU JIANWEI: "Multi-resolution Shape Description and Clustering of 3D Anthropometric Data for Population Fitting Design", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, UNIVERSITY OF CHINESE ACADEMY OF SCIENCES, CN, 15 August 2009 (2009-08-15), CN , XP055930618, ISSN: 1674-022X *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385474A (en) * 2023-02-27 2023-07-04 雅客智慧(北京)科技有限公司 Tooth scanning model segmentation method and device based on deep learning and electronic equipment
CN116385474B (en) * 2023-02-27 2024-06-04 雅客智慧(北京)科技有限公司 Tooth scanning model segmentation method and device based on deep learning and electronic equipment
CN116563561A (en) * 2023-07-06 2023-08-08 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium
CN116563561B (en) * 2023-07-06 2023-11-14 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium
CN117190974A (en) * 2023-09-08 2023-12-08 中国地质调查局西安地质调查中心(西北地质科技创新中心) Slope height calculation method of geological disaster slope unit
CN117190974B (en) * 2023-09-08 2024-05-31 中国地质调查局西安地质调查中心(西北地质科技创新中心) Slope height calculation method of geological disaster slope unit
CN118037601A (en) * 2024-04-07 2024-05-14 法奥意威(苏州)机器人系统有限公司 Point cloud filling method and electronic equipment

Also Published As

Publication number Publication date
CN112418030B (en) 2022-05-13
CN112418030A (en) 2021-02-26
AU2021105639A4 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
WO2022099958A1 (en) Head-face dimension classification method based on three-dimensional point cloud coordinates
CN107123164B (en) Three-dimensional reconstruction method and system for keeping sharp features
Stylianou et al. Crest lines for surface segmentation and flattening
Woo et al. A new segmentation method for point cloud data
OuYang et al. On the normal vector estimation for point cloud data from smooth surfaces
CN111986115A (en) Accurate elimination method for laser point cloud noise and redundant data
CN110516388A (en) Surface tessellation point cloud model ring cutting knife rail generating method based on reconciliation mapping
CN106504331A (en) Tooth modeling method based on three-dimensional model search
JP4780198B2 (en) Authentication system and authentication method
Zhang et al. A statistical approach for extraction of feature lines from point clouds
CN110009671B (en) Grid curved surface reconstruction system for scene understanding
CN108257213A (en) A kind of polygon curve reestablishing method of cloud lightweight
CN111145129A (en) Point cloud denoising method based on hyper-voxels
CN111652241B (en) Building contour extraction method integrating image features and densely matched point cloud features
Zhu et al. 3D reconstruction of plant leaves for high-throughput phenotyping
El Sayed et al. An efficient simplification method for point cloud based on salient regions detection
Agus et al. Shape analysis of 3D nanoscale reconstructions of brain cell nuclear envelopes by implicit and explicit parametric representations
CN115147433A (en) Point cloud registration method
Chen et al. An efficient global constraint approach for robust contour feature points extraction of point cloud
Xie et al. Geometric modeling of Rosa roxburghii fruit based on three-dimensional point cloud reconstruction
Lyu et al. Laplacian-based 3D mesh simplification with feature preservation
CN107356968B (en) Three-dimensional level set fault curved surface automatic extraction method based on crop
Ji et al. Point cloud segmentation for complex microsurfaces based on feature line fitting
Denker et al. On-line reconstruction of CAD geometry
Luo et al. Indoor scene reconstruction: from panorama images to cad models

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21890495

Country of ref document: EP

Kind code of ref document: A1