CN111882595A - Human body semantic feature extraction method and system - Google Patents


Info

Publication number: CN111882595A
Application number: CN202010736075.1A
Authority: CN (China)
Prior art keywords: human body, model, three-dimensional human, three-dimensional, database
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111882595B (en)
Inventors: 童晶, 李灵杰, 陈正鸣
Current Assignee: Changzhou Campus of Hohai University
Original Assignee: Changzhou Campus of Hohai University
Application filed by Changzhou Campus of Hohai University
Priority to CN202010736075.1A
Publication of CN111882595A; application granted; publication of CN111882595B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/02 Affine transformations
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human body semantic feature extraction method and a human body semantic feature extraction system in the technical field of human body measurement, which can accurately acquire human body semantic features and have high matching precision. The method comprises the following steps: acquiring three-dimensional human body characteristic data to obtain a three-dimensional human body model; preprocessing the three-dimensional human body model, adjusting the orientation and the central coordinate of the three-dimensional human body model, and enabling the three-dimensional human body model to be located at the origin of coordinates of a world coordinate system; carrying out shape segmentation on the three-dimensional human body model, extracting skeleton characteristics of the three-dimensional human body model, selecting a template model from a constructed three-dimensional human body semantic characteristic database according to the skeleton characteristics of the three-dimensional human body model based on a skeleton similarity template selection algorithm, and registering the template model and the three-dimensional human body model to obtain a parameterized human body model; and fitting the characteristic sampling points on the parameterized human body model by using a NURBS curve, and calculating the length of the fitted characteristic curve to obtain the human body semantic characteristics of the three-dimensional human body model.

Description

Human body semantic feature extraction method and system
Technical Field
The invention belongs to the technical field of human body measurement, and particularly relates to a human body semantic feature extraction method and system.
Background
With the continuous development of the economy and the continuous improvement of living standards, more and more people pursue personalized lifestyles and demand customized products. Industries such as online shopping, virtual fitting, garment customization, fitness and ergonomic design likewise hope to raise the technological content of their products and the added value of their services. One of the most important directions of improvement is quantifying the size and shape of the human body. For example, online virtual fitting requires generating a three-dimensional model identical to the real body shape, and garment customization requires acquiring body-shape data of a customer in multiple dimensions.
The existing methods for acquiring human body semantic features can be divided into contact methods and non-contact methods; traditional manual measurement is the most common contact method. With the development of three-dimensional scanning technology, non-contact methods are becoming the mainstream semantic feature extraction approach, but in the prior art the matching of extracted human body semantic features is poor and the matching effect is not ideal.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a human body semantic feature extraction method and a human body semantic feature extraction system, which can accurately acquire human body semantic features and have high matching precision.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows: a human semantic feature extraction method comprises the following steps: acquiring three-dimensional human body characteristic data to obtain a three-dimensional human body model; preprocessing the three-dimensional human body model, adjusting the orientation and the central coordinate of the three-dimensional human body model, and enabling the three-dimensional human body model to be located at the origin of coordinates of a world coordinate system; carrying out shape segmentation on the three-dimensional human body model, extracting skeleton characteristics of the three-dimensional human body model, selecting a template model from a constructed three-dimensional human body semantic characteristic database according to the skeleton characteristics of the three-dimensional human body model based on a skeleton similarity template selection algorithm, and registering the template model and the three-dimensional human body model to obtain a parameterized human body model; and fitting the characteristic sampling points on the parameterized human body model by using a NURBS curve, and calculating the length of the fitted characteristic curve to obtain the human body semantic characteristics of the three-dimensional human body model.
Further, the registration includes rigid registration and non-rigid registration;
the rigid registration includes:
constructing the oriented bounding box D_B of the three-dimensional human body model D and the oriented bounding box T̄_B of the template model T̄; computing the affine transformation T from the oriented bounding box D_B of the three-dimensional human body model to the oriented bounding box T̄_B of the template model; applying the affine transformation T to the three-dimensional human body model D to obtain a first intermediate model D′:
D′ = TD  (13)
calculating the optimal rigid transformation parameters R, t from the first intermediate model D′ to the template model T̄ by using an iterative closest point algorithm, and further acquiring a second intermediate model D″:
D″ = RD′ + t  (14)
wherein R is a linear (rotation) transformation parameter, and t is a translation transformation parameter;
the non-rigid registration includes:
deforming the template model T̄ toward the second intermediate model D″ by a Laplacian mesh deformation algorithm to obtain a preliminary registration model D̃;
establishing a data error function E_d and a smoothness error function E_s between the preliminary registration model D̃ and the second intermediate model D″, and minimizing the sum of the two errors to obtain a parameterized human body model D̂.
Further, the data error function E_d and the smoothness error function E_s are obtained by the following formulas:
E_d = Σ_{i=1..n} dist²(T_i v′_i, D″)  (15)
E_s = Σ_{(i,j)∈E(D̃)} ‖T_i − T_j‖²_F  (16)
wherein n represents the number of vertices of the preliminary registration model D̃, v′_i is the coordinate of the i-th vertex, the distance function dist²(·) is the distance between the deformed mesh and the nearest compatible point in the target mesh, T_i is the 3×3 transformation matrix corresponding to the i-th vertex, and T_j is the 3×3 transformation matrix corresponding to the j-th vertex; two points whose normal vectors form an included angle of less than 90° and whose Euclidean distance is less than 10 cm are defined as compatible points, and the closest of them is called the nearest compatible point; E(D̃) is the edge set of the model D̃, and ‖·‖_F denotes the Frobenius norm.
Further, the template selection algorithm based on bone similarity specifically includes:
SE = Σ_(p,q) ω_m · ∠(v_q^D − v_p^D, v̄_q − v̄_p)
wherein SE is the bone similarity parameter, v_p^D and v_q^D are the coordinates of the p-th and q-th bone nodes on the three-dimensional human body model D, v̄_p and v̄_q are the coordinates of the p-th and q-th bone nodes on the template model T̄, ∠ is the vector included-angle symbol, the sum runs over all bone segments (p, q), and ω_m is a weight factor, m = 1, 2, 3, 4; L_1, L_2, L_3, L_4 are four weight levels: the weight level of bone nodes in the human trunk is L_1, that of the upper arms, thighs and head is L_2, that of the lower arms and lower legs is L_3, and that of the hands and feet is L_4, with
ω_1 + ω_2 + ω_3 + ω_4 = 1.
further, the method for constructing the three-dimensional human body semantic feature database comprises the following steps: selecting a human body posture model in an open source three-dimensional human body database, carrying out surface subdivision, and setting the number of surface patches and the number of vertexes of the human body posture model to obtain an initial three-dimensional human body database; segmenting the human body posture model in the initial three-dimensional human body database to form a three-dimensional human body segmentation database; extracting three-dimensional human bones from segmentation models in a three-dimensional human body segmentation database to form a three-dimensional human body bone database; and semi-automatically labeling semantic feature sampling points on human bones in the three-dimensional human bone database to form the three-dimensional human body semantic feature database.
Further, the three-dimensional human body semantic feature database brings the shape segmentation result of the three-dimensional human body model generated in the human body semantic feature extraction process into the three-dimensional human body segmentation database, the skeleton feature of the three-dimensional human body model is brought into the three-dimensional human body skeleton database, and the three-dimensional human body model containing the human body semantic features is brought into the three-dimensional human body semantic feature database.
Further, the segmenting the human body posture model in the initial three-dimensional human body database specifically includes:
performing supervoxel clustering on the vertex data, the result of which is an over-segmentation of the vertex data, comprising the following steps:
1) performing grid division on the model space, forming point clusters, called voxels, from the point cloud data, establishing a spatial index for the voxels, and organizing them with an octree;
2) selecting a plurality of voxels as seed voxels;
3) calculating the similarity between voxels, and recursively fusing adjacent voxels into the seed voxels to form supervoxels;
judging the concave-convex relation between adjacent supervoxels according to a preset judgment criterion;
and fitting a cutting plane on the concave edges by a random sample consensus algorithm to segment the human body posture model.
Further, the similarity between voxels is calculated specifically as:
D(i, j) = sqrt(ω_c·D_c²(i, j) + ω_s·D_s²(i, j)/R² + ω_n·D_n²(i, j))
wherein ω_c is the weight factor of the color difference, ω_s is the weight factor of the distance difference, ω_n is the weight factor of the normal difference, D_c(i, j) is the CIELab color difference, D_s(i, j) is the coordinate distance difference, D_n(i, j) is the normal difference, and R is the voxel diameter.
Further, the judgment criterion is determined by the line connecting the centroids of adjacent supervoxels, the normal directions at those centroids, and the supervoxels that are common neighbors of the adjacent pair.
A human semantic feature extraction system comprises a processor and a storage device, wherein a plurality of instructions are stored in the storage device and used for the processor to load and execute the steps of the method.
Compared with the prior art, the invention has the following beneficial effects. The three-dimensional human body model is shape-segmented and its bone features are extracted; a template model is selected from the constructed three-dimensional human body semantic feature database for registration, yielding a parameterized human body model; NURBS curves are then used to fit the feature sampling points on the parameterized human body model, giving the human body semantic features of the three-dimensional human body model. Because the extraction method follows basic knowledge of human anatomy, the human body semantic features can be acquired accurately and the matching effect is good. Meanwhile, the constructed three-dimensional human body semantic feature database can absorb each input three-dimensional human body model, expanding the coverage of the database.
Drawings
Fig. 1 is a schematic flow chart of constructing a three-dimensional human semantic feature database in a human semantic feature extraction method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a human semantic feature extraction method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a topological structure of a human semantic feature extraction system according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The first embodiment is as follows:
as shown in fig. 1, a method for constructing a three-dimensional human semantic feature database includes: selecting a human body posture model in an open source three-dimensional human body database, carrying out surface subdivision, and setting the number of surface patches and the number of vertexes of the human body posture model to obtain an initial three-dimensional human body database; segmenting the human body posture model in the initial three-dimensional human body database to form a three-dimensional human body segmentation database; extracting three-dimensional human bones from segmentation models in a three-dimensional human body segmentation database to form a three-dimensional human body bone database; and semi-automatically labeling semantic feature sampling points on human bones in the three-dimensional human bone database to form the three-dimensional human body semantic feature database.
A three-dimensional human body database containing a plurality of human body postures is used as the basic database; the method takes the MPI database as an example. Models with overly complicated postures in the MPI database are removed, and surface subdivision is performed so that each model has 51,576 faces and 25,790 vertices, giving the initial three-dimensional human body database.
Shape segmentation is performed on the models in the initial three-dimensional human body database based on a supervoxel clustering algorithm and the concave-convex relations of adjacent supervoxels, segmenting each model into head, upper arms, lower arms, hands, chest, waist, hips, thighs, shanks and feet, with the following steps:
(a) performing supervoxel clustering on the vertex data, the result of which is an over-segmentation of the vertex data; the specific steps are:
1) performing grid division on the model space, forming point clusters, called voxels, from the point cloud data, establishing a spatial index for the voxels, and organizing them with an octree;
2) selecting a plurality of voxels as seed voxels;
3) calculating the similarity between voxels and recursively fusing adjacent voxels into the seed voxels to form supervoxels, wherein the voxel similarity is calculated by formula (1):
D(i, j) = sqrt(ω_c·D_c²(i, j) + ω_s·D_s²(i, j)/R² + ω_n·D_n²(i, j))  (1)
wherein ω_c is the weight factor of the color difference, ω_s is the weight factor of the distance difference, and ω_n is the weight factor of the normal difference; the three weight factors control the influence of the color difference, the spatial distance and the concavity-convexity between vertices in the similarity calculation; R is the voxel diameter; D_c(i, j) is the CIELab color difference, D_s(i, j) is the coordinate distance difference, and D_n(i, j) is the normal difference, and their calculation formulas are respectively:
D_c(i, j) = ||V(i) − V(j)||_CIELab  (2)
wherein V(i) is the CIELab color of the i-th voxel, and V(j) is the CIELab color of the j-th voxel;
D_s(i, j) = ||d_i − d_j||  (3)
wherein d_i = [x_i, y_i, z_i] is the coordinate of the i-th voxel and d_j = [x_j, y_j, z_j] is the coordinate of the j-th voxel;
D_n(i, j) = ||n_i − n_j||  (4)
wherein n_i = [n_i^x, n_i^y, n_i^z] is the centroid normal of the i-th voxel and n_j = [n_j^x, n_j^y, n_j^z] is the centroid normal of the j-th voxel;
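The voxel similarity and its three difference terms described above can be sketched as follows (a minimal numpy illustration; the weight values and the voxel diameter R used here are assumptions for the example, not values given by the patent):

```python
import numpy as np

def voxel_similarity(ci, cj, di, dj, ni, nj, wc=0.2, ws=0.4, wn=0.4, R=1.0):
    """Distance D(i, j) between two voxels: CIELab color difference,
    spatial distance normalized by the voxel diameter R, and normal
    difference, combined under a square root with weights wc, ws, wn."""
    Dc = np.linalg.norm(np.asarray(ci, float) - np.asarray(cj, float))  # color difference
    Ds = np.linalg.norm(np.asarray(di, float) - np.asarray(dj, float))  # coordinate distance
    Dn = np.linalg.norm(np.asarray(ni, float) - np.asarray(nj, float))  # normal difference
    return np.sqrt(wc * Dc**2 + ws * Ds**2 / R**2 + wn * Dn**2)

# Identical voxels have zero distance; differing ones a positive distance.
same = voxel_similarity([50, 0, 0], [50, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 1], [0, 0, 1])
diff = voxel_similarity([50, 0, 0], [60, 5, 5], [0, 0, 0], [0.5, 0, 0], [0, 0, 1], [0, 1, 0])
```

In the clustering loop, a seed voxel absorbs the neighbor with the smallest D(i, j) and the fusion recurses until no neighbor is close enough.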
(b) judging the concave-convex relation between adjacent supervoxels: an edge between two supervoxels in a concave relation is called a concave edge, and adjacent supervoxels whose relation is concave are cut apart. The judgment of the concave-convex relation uses two criteria: criterion one is based on the normal relation between the line connecting the centroids of the adjacent supervoxels and the centroid normals, evaluated also over the supervoxels that are common neighbors of the pair; criterion two is considered on the basis of criterion one. Criterion one can be expressed by formula (5):
CC(sv_i, sv_j) = CC_b(sv_i, sv_j) ∧ CC_b(sv_i, sv_c) ∧ CC_b(sv_j, sv_c)  (5)
wherein CC(sv_i, sv_j) is the concave-convex judgment criterion, CC_b(sv_i, sv_j) is the basic concave-convex relation between supervoxel sv_i and supervoxel sv_j, CC_b(sv_i, sv_c) is the basic concave-convex relation between supervoxel sv_i and a common neighboring supervoxel sv_c, and CC_b(sv_j, sv_c) is the basic concave-convex relation between supervoxel sv_j and sv_c;
CC_b(sv_i, sv_j) is calculated as shown in formula (6):
CC_b(sv_i, sv_j) = true, if (N_i − N_j) · d ≥ 0 or β(N_i, N_j) < β_T; false, otherwise  (6)
wherein N_i is the centroid normal of the i-th supervoxel, N_j is the centroid normal of the j-th supervoxel, β_T is an offset threshold, β(N_i, N_j) = ∠(N_i, N_j) is the included angle between the two centroid normals, ∠ is the vector included-angle symbol, and the unit direction d between the centroids is calculated as:
d = (X_i − X_j) / ||X_i − X_j||
wherein X_i is the centroid coordinate of supervoxel sv_i, and X_j is the centroid coordinate of supervoxel sv_j. Criterion two can be defined as formula (7):
SC(sv_i, sv_j) = true, if θ(sv_i, sv_j) > θ_T(β(N_i, N_j)); false, otherwise  (7)
wherein SC(sv_i, sv_j) is judgment criterion two, and θ(sv_i, sv_j) is calculated as:
θ(sv_i, sv_j) = min(∠(d, s), ∠(d, −s)), s = N_i × N_j  (8)
and θ_T(β(N_i, N_j)) is calculated as:
θ_T(β(N_i, N_j)) = θ_max · (1 + exp(−α·(β(N_i, N_j) − β_off)))⁻¹  (9)
wherein θ_max = 60°, α = 0.25, β_off = 25°;
as described above, the unevenness determination method is as shown in equation (10):
conv(svi,svj)=CC(svi,svj)∧SC(svi,svj) (10)
wherein, conv (sv)i,svj) The concave-convex property between the ith hyper-voxel and the jth hyper-voxel;
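A compact sketch of the basic convexity test CC_b and the sigmoid angle threshold used by the sanity criterion SC (numpy; the helper names, the β_T value and the toy centroids/normals are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def angle(u, v):
    """Included angle in degrees between vectors u and v."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def cc_b(Ni, Nj, Xi, Xj, beta_T=10.0):
    """Basic concave-convex relation: the shared edge is treated as convex
    when (Ni - Nj) points along the centroid direction d, or when the two
    normals are nearly parallel (angle below the offset beta_T)."""
    d = np.asarray(Xi, float) - np.asarray(Xj, float)
    d /= np.linalg.norm(d)
    return float(np.dot(np.asarray(Ni, float) - np.asarray(Nj, float), d)) > 0 \
        or angle(Ni, Nj) < beta_T

def theta_T(beta, theta_max=60.0, a=0.25, beta_off=25.0):
    """Sigmoid threshold on the angle between d and Ni x Nj: small normal
    angles demand a large theta before the edge counts as concave."""
    return theta_max / (1.0 + np.exp(-a * (beta - beta_off)))

# Two patches bending away from each other (convex) vs. toward each other (concave).
convex = cc_b(Ni=[0, 1, 0], Nj=[1, 0, 0], Xi=[0, 1, 0], Xj=[1, 0, 0])
concave = cc_b(Ni=[1, 0, 0], Nj=[0, 1, 0], Xi=[0, 1, 0], Xj=[1, 0, 0])
```

Only edges that both criteria declare concave become cut candidates for the random sample consensus plane fit.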
(c) fitting a segmentation plane on the concave edge by adopting a random sampling consistency algorithm according to the concave-convex relation judged in the step (b), segmenting the human body posture model to form a three-dimensional human body segmentation database, and segmenting the human body model in the database into 16 meaningful components;
the human body joint is positioned at the connecting part of the rigid body part, the boundary centers of the two segmentation subregions are used as skeleton nodes, and the skeleton nodes are connected according to the anatomical relationship of the joint to form a human body skeleton; and (4) performing the operation on all models in the human body segmentation database to obtain a three-dimensional human body skeleton database.
Semantic feature expansion is semi-automatic operation, a feature sampling point of one model is manually marked, and all models can use the same sampling point; and respectively fitting a characteristic curve to the characteristic sampling points of each model to obtain a three-dimensional human body semantic characteristic database.
In the embodiment, two data sources are provided in the constructed three-dimensional human body semantic feature database, and one data source is the expansion of the open-source three-dimensional human body database according to the method; the other is the absorption of the input three-dimensional human body model when the human body semantic feature extraction is carried out. In this embodiment, a template model in the three-dimensional human semantic feature database is labeled with a semantic feature sampling point for fitting a semantic feature curve, the semantic feature sampling point labeled on the template model is located near the actual position of the semantic feature, and the semantic feature curve is a cubic curve for locating the semantic feature.
Example two:
based on the three-dimensional human semantic feature database constructed in the first embodiment, the first embodiment provides a human semantic feature extraction method, as shown in fig. 2 and 3, including: acquiring three-dimensional human body characteristic data to obtain a three-dimensional human body model; preprocessing the three-dimensional human body model, adjusting the orientation and the central coordinate of the three-dimensional human body model, and enabling the three-dimensional human body model to be located at the origin of coordinates of a world coordinate system; carrying out shape segmentation on the three-dimensional human body model, extracting skeleton characteristics of the three-dimensional human body model, selecting a template model from a constructed three-dimensional human body semantic characteristic database according to the skeleton characteristics of the three-dimensional human body model based on a skeleton similarity template selection algorithm, and registering the template model and the three-dimensional human body model to obtain a parameterized human body model; and fitting the characteristic sampling points on the parameterized human body model by using a NURBS curve, and calculating the length of the fitted characteristic curve to obtain the human body semantic characteristics of the three-dimensional human body model.
Calculating the posture similarity of the three-dimensional human body model and a template model in a three-dimensional human body semantic feature database based on a template selection algorithm of the skeleton similarity, and selecting the model with the highest posture similarity as the template model; the method specifically comprises the following steps:
SE = Σ_(p,q) ω_m · ∠(v_q^D − v_p^D, v̄_q − v̄_p)
wherein SE is the bone similarity parameter, v_p^D and v_q^D are the coordinates of the p-th and q-th bone nodes on the three-dimensional human body model D, v̄_p and v̄_q are the coordinates of the p-th and q-th bone nodes on the template model T̄, ∠ is the vector included-angle symbol, the sum runs over all bone segments (p, q), and ω_m is a weight factor, m = 1, 2, 3, 4; L_1, L_2, L_3, L_4 are four weight levels: the weight level of bone nodes in the human trunk is L_1, that of the upper arms, thighs and head is L_2, that of the lower arms and lower legs is L_3, and that of the hands and feet is L_4, with
ω_1 + ω_2 + ω_3 + ω_4 = 1.
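A minimal sketch of the bone-similarity score: for each bone segment, the angle between the corresponding bone vectors of the scanned model and a template is computed and weighted by body-part level (the weight values and the two-bone toy skeletons are assumptions for illustration; a smaller SE means a more similar pose):

```python
import numpy as np

def bone_angle(u, v):
    """Included angle in radians between two bone vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def bone_similarity(bones_D, bones_T, weights):
    """SE = sum over bone segments (p, q) of w_m * angle between the
    segment vector on model D and the corresponding template vector.
    bones_*: list of (p, q) node-coordinate pairs; weights: one w per segment."""
    se = 0.0
    for (pd, qd), (pt, qt), w in zip(bones_D, bones_T, weights):
        vd = np.asarray(qd, float) - np.asarray(pd, float)
        vt = np.asarray(qt, float) - np.asarray(pt, float)
        se += w * bone_angle(vd, vt)
    return se

# Identical skeletons give SE = 0; a rotated forearm increases SE.
trunk = ([0, 0, 0], [0, 1, 0])
arm_same = ([0, 1, 0], [1, 1, 0])
arm_bent = ([0, 1, 0], [0, 2, 0])
w = [0.6, 0.4]  # trunk level weighted above limb level (illustrative values)
se_zero = bone_similarity([trunk, arm_same], [trunk, arm_same], w)
se_bent = bone_similarity([trunk, arm_bent], [trunk, arm_same], w)
```

The template with the smallest SE over the database is the one selected for registration.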
selecting a template model from a constructed three-dimensional human body semantic feature database according to the bone features of the three-dimensional human body model to register with the three-dimensional human body model to obtain a parameterized human body model, wherein rigid body registration and non-rigid body registration algorithms are adopted for registration;
rigid registration includes:
preliminary alignment based on oriented bounding boxes, which makes the two models overlap as much as possible in the global coordinate system: construct the oriented bounding box D_B of the three-dimensional human body model D and the oriented bounding box T̄_B of the template model T̄, and compute the affine transformation T from D_B to T̄_B; apply the affine transformation T to the three-dimensional human body model D to obtain the first intermediate model D′:
D′ = TD  (13)
final alignment based on the iterative closest point algorithm, which computes the rotation matrix and translation vector between the two models and adjusts the spatial coordinates accordingly: calculate the optimal rigid transformation parameters R, t from the first intermediate model D′ to the template model T̄ by the iterative closest point algorithm, and further obtain the second intermediate model D″:
D″ = RD′ + t  (14)
wherein R is a linear (rotation) transformation parameter, and t is a translation transformation parameter;
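Each iteration of the closest-point alignment solves a least-squares rigid transform between the currently matched point sets; a sketch of that inner step (the SVD-based closed-form solution shown here is one standard way to obtain R and t, and the toy correspondences are illustrative):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Optimal rotation R and translation t with dst ~= R @ src + t,
    via centroid subtraction and SVD: the closed-form step that the
    iterative closest point algorithm repeats after re-matching points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation + translation from exact correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = best_rigid_transform(P, Q)
```

With noisy scans, the matching and this solve alternate until the residual stops decreasing.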
the non-rigid registration includes:
Coarse registration based on the Laplacian mesh deformation algorithm: the template model D̄ is deformed toward the second intermediate model D″ by the Laplacian mesh deformation algorithm to obtain a preliminary registration model D̃.
Fine registration based on data error and smoothness error: on the preliminary registration model D̃ and the second intermediate model D″, a data error function E_d and a smoothness error function E_s are established, and the sum of the errors is minimized to obtain the parameterized human body model D̂. The data error function E_d and the smoothness error function E_s are obtained by the following formulas:
E_d = Σ_{i=1}^{n} dist²(T_i v′_i, D″)
E_s = Σ_{(i,j)∈E(D̃)} ‖T_i − T_j‖_F²
wherein n denotes the number of vertices of the preliminary registration model D̃, v′_i is the coordinate of the i-th vertex, the distance function dist²(·) is the distance between the deformed mesh and the nearest compatible point in the target mesh, and T_i is the 3×3 transformation matrix corresponding to the i-th vertex. Two points whose normal vectors form an angle of less than 90 degrees and whose Euclidean distance is less than 10 cm are defined as compatible points, and the closest such point is called the nearest compatible point. T_j is the 3×3 transformation matrix corresponding to the j-th vertex, (i, j) ∈ E(D̃) denotes an edge of the model D̃, and ‖·‖_F denotes the Frobenius norm.
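Assuming per-vertex 3×3 transforms T_i as above, the two error terms can be evaluated as follows. The compatibility test (normals within 90°, distance under 10 cm) follows the definition in the text; the function name, array layout, and the use of source rather than deformed normals are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_errors(verts, normals, edges, T, target_pts, target_normals,
                        max_dist=0.10, max_angle_deg=90.0):
    """Evaluate the data term E_d and smoothness term E_s of the fine
    registration. `T` is an (n, 3, 3) array of per-vertex transforms,
    `edges` an (m, 2) array of vertex index pairs."""
    deformed = np.einsum('nij,nj->ni', T, verts)  # apply T_i to vertex i
    tree = cKDTree(target_pts)
    dist, idx = tree.query(deformed)
    # compatible point: normal angle below 90 degrees, distance below 10 cm
    cos_ok = np.einsum('ni,ni->n', normals, target_normals[idx]) \
        > np.cos(np.deg2rad(max_angle_deg))
    ok = cos_ok & (dist < max_dist)
    E_d = np.sum(dist[ok] ** 2)
    # smoothness: squared Frobenius distance of transforms across each edge
    diff = T[edges[:, 0]] - T[edges[:, 1]]
    E_s = np.sum(diff ** 2)
    return E_d, E_s

# sanity check: identity transforms against an identical target give zero error
verts = np.eye(3)
normals = np.tile([0.0, 0.0, 1.0], (3, 1))
edges = np.array([[0, 1], [1, 2]])
T = np.tile(np.eye(3), (3, 1, 1))
E_d, E_s = registration_errors(verts, normals, edges, T, verts, normals)
```

In the actual fine registration these two quantities would sit inside a nonlinear least-squares solver over the T_i; the sketch only shows how each term is scored.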
The result of the registration is a parameterized human body model. Feature sampling points are marked on the parameterized human body model, fitted with a NURBS curve, and the length of the fitted feature curve is calculated to obtain the human semantic features of the three-dimensional human model. During the extraction process, the shape segmentation result of the three-dimensional human model is incorporated into the three-dimensional human segmentation database, the skeletal features of the three-dimensional human model are incorporated into the three-dimensional human skeleton database, and the three-dimensional human model containing the human semantic features is incorporated into the three-dimensional human semantic feature database.
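The girth measurement can be sketched with a cubic B-spline (a NURBS with unit weights) standing in for a general NURBS fit; `girth_from_samples` is a hypothetical helper built on SciPy's `splprep`/`splev`, with the curve length obtained by dense polyline approximation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def girth_from_samples(points, num=2000):
    """Fit a closed cubic B-spline (a NURBS with unit weights) through 3D
    feature sampling points (last point == first point) and return the arc
    length of the fitted feature curve."""
    tck, _ = splprep(points.T, s=0, per=True)   # interpolating, periodic
    u = np.linspace(0.0, 1.0, num)
    x, y, z = splev(u, tck)
    seg = np.diff(np.stack([x, y, z], axis=1), axis=0)
    return float(np.sum(np.linalg.norm(seg, axis=1)))

# toy usage: sampling points on a circle of radius 0.5 m, so girth ≈ pi m
theta = np.linspace(0.0, 2.0 * np.pi, 25)       # endpoint repeats the start
pts = np.stack([0.5 * np.cos(theta), 0.5 * np.sin(theta),
                np.zeros_like(theta)], axis=1)
girth = girth_from_samples(pts)
```

A closed fit (`per=True`) is the natural choice for girths such as chest or waist circumference; open feature curves (e.g. arm length) would simply drop the periodicity flag.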
Example three:
Based on the human semantic feature extraction method provided in the second embodiment, this embodiment provides a human semantic feature extraction system comprising a processor and a storage device, where the storage device stores a plurality of instructions for the processor to load and execute the steps of the method provided in the second embodiment.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations shall also fall within the protection scope of the present invention.

Claims (10)

1. A human semantic feature extraction method is characterized by comprising the following steps:
acquiring three-dimensional human body characteristic data to obtain a three-dimensional human body model;
preprocessing the three-dimensional human body model, adjusting the orientation and the central coordinate of the three-dimensional human body model, and enabling the three-dimensional human body model to be located at the origin of coordinates of a world coordinate system;
carrying out shape segmentation on the three-dimensional human body model, extracting skeleton characteristics of the three-dimensional human body model, selecting a template model from a constructed three-dimensional human body semantic characteristic database according to the skeleton characteristics of the three-dimensional human body model based on a skeleton similarity template selection algorithm, and registering the template model and the three-dimensional human body model to obtain a parameterized human body model;
and fitting the characteristic sampling points on the parameterized human body model by using a NURBS curve, and calculating the length of the fitted characteristic curve to obtain the human body semantic characteristics of the three-dimensional human body model.
2. The human semantic feature extraction method of claim 1, wherein the registration comprises rigid registration and non-rigid registration;
the rigid body registration includes:
constructing the oriented bounding box D_B of the three-dimensional human model D and the oriented bounding box D̄_B of the template model D̄;
computing the affine transformation T from the oriented bounding box D_B of the three-dimensional human model to the oriented bounding box D̄_B of the template model, and applying the affine transformation T to the three-dimensional human model D to obtain a first intermediate model D′:
D′=TD (13)
computing, by the iterative closest point algorithm, the rotation matrix R and the translation vector t from the first intermediate model D′ to the template model D̄, to further obtain a second intermediate model D″:
D″=RD′+t (14)
wherein R is a linear transformation parameter and t is a translation transformation parameter;
the non-rigid body registration includes:
deforming the template model D̄ toward the second intermediate model D″ by the Laplacian mesh deformation algorithm to obtain a preliminary registration model D̃;
establishing a data error function E_d and a smoothness error function E_s on the preliminary registration model D̃ and the second intermediate model D″, and minimizing the sum of the errors to obtain the parameterized human body model D̂.
3. The human semantic feature extraction method according to claim 2, wherein the data error function E_d and the smoothness error function E_s are obtained by the following formulas:
E_d = Σ_{i=1}^{n} dist²(T_i v′_i, D″)
E_s = Σ_{(i,j)∈E(D̃)} ‖T_i − T_j‖_F²
wherein n denotes the number of vertices of the preliminary registration model D̃, v′_i is the coordinate of the i-th vertex, the distance function dist²(·) is the distance between the deformed mesh and the nearest compatible point in the target mesh, and T_i is the 3×3 transformation matrix corresponding to the i-th vertex; two points whose normal vectors form an angle of less than 90 degrees and whose Euclidean distance is less than 10 cm are defined as compatible points, and the closest such point is called the nearest compatible point; T_j is the 3×3 transformation matrix corresponding to the j-th vertex, (i, j) ∈ E(D̃) denotes an edge of the model D̃, and ‖·‖_F denotes the Frobenius norm.
4. The human semantic feature extraction method according to claim 1, wherein the skeleton-similarity template selection algorithm is specifically:
SE = Σ_{(p,q)} ω_m · ⟨J_q^D − J_p^D, J̄_q − J̄_p⟩
wherein SE is the skeleton similarity parameter; J_p^D and J_q^D are the coordinates of the p-th and q-th bone nodes on the three-dimensional human model D; J̄_p and J̄_q are the coordinates of the p-th and q-th bone nodes on the template model D̄; ⟨·,·⟩ denotes the included angle between two vectors; ω_m (m = 1, 2, 3, 4) are weighting factors corresponding to four weight levels L_1, L_2, L_3, L_4: bone nodes in the torso of the human body have weight level L_1, the upper arms, thighs and head have weight level L_2, the lower arms and lower legs have weight level L_3, and the hands and feet have weight level L_4, with L_1 > L_2 > L_3 > L_4.
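A hedged sketch of a skeleton-similarity score of this general shape: a weighted sum of included angles between corresponding bone vectors, with higher weights toward the torso. The concrete weight values and the `bones` encoding (node-index pairs plus a weight level) are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# hypothetical weight levels: trunk highest, extremities lowest (L1 > L2 > L3 > L4)
LEVEL_WEIGHTS = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

def skeleton_similarity(joints_a, joints_b, bones):
    """Weighted sum of included angles between corresponding bone vectors.
    `bones` is a list of (p, q, level) tuples: bone-node indices plus a
    weight level 1-4. Smaller SE means more similar skeletons, so the
    template with the minimum SE would be selected."""
    se = 0.0
    for p, q, level in bones:
        va = joints_a[q] - joints_a[p]
        vb = joints_b[q] - joints_b[p]
        cosang = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
        se += LEVEL_WEIGHTS[level] * np.arccos(np.clip(cosang, -1.0, 1.0))
    return se

# identical skeletons score zero: every bone pair is parallel
joints = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
bones = [(0, 1, 1), (1, 2, 3)]
```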
5. the human semantic feature extraction method of claim 1, wherein the three-dimensional human semantic feature database is constructed by:
selecting a human body posture model in an open source three-dimensional human body database, carrying out surface subdivision, and setting the number of surface patches and the number of vertexes of the human body posture model to obtain an initial three-dimensional human body database;
segmenting the human body posture model in the initial three-dimensional human body database to form a three-dimensional human body segmentation database;
extracting three-dimensional human bones from segmentation models in a three-dimensional human body segmentation database to form a three-dimensional human body bone database;
and semi-automatically labeling semantic feature sampling points on human bones in the three-dimensional human bone database to form the three-dimensional human body semantic feature database.
6. The human semantic feature extraction method according to claim 5, wherein, during human semantic feature extraction, the shape segmentation result of the three-dimensional human model is incorporated into the three-dimensional human segmentation database, the skeletal features of the three-dimensional human model are incorporated into the three-dimensional human skeleton database, and the three-dimensional human model containing the human semantic features is incorporated into the three-dimensional human semantic feature database.
7. The human semantic feature extraction method according to claim 5, wherein segmenting the human pose model in the initial three-dimensional human database specifically comprises:
performing supervoxel (hyper-voxel) clustering on the vertex data, the result being an over-segmentation of the vertex data, as follows:
1) dividing the model space into a grid, the point cloud data forming point clusters called voxels; establishing a spatial index for the voxels and organizing the voxels with an octree;
2) selecting a plurality of voxels as seed voxels;
3) calculating the similarity between voxels, and recursively fusing adjacent voxels around the seed voxels to form supervoxels;
judging the concave-convex relation between adjacent supervoxels according to a preset judgment criterion; and
fitting a cutting plane on the concave edges by a random sample consensus algorithm to segment the human pose model.
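Steps 1)–3) above can be sketched as follows. Nearest-seed assignment stands in for the full similarity-driven recursive fusion (and a dictionary of grid cells stands in for the octree index), so this illustrates the shape of the pipeline rather than the patented procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxelize(points, size):
    """Step 1: grid the model space; each occupied cell is a voxel
    (a point cluster), represented here by its centroid."""
    keys = np.floor(points / size).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in cells.values()])

def grow_supervoxels(centers, n_seeds, rng=None):
    """Steps 2-3: pick seed voxels at random and assign every voxel to its
    most similar seed (plain Euclidean distance stands in for the full
    colour/distance/normal similarity measure)."""
    rng = np.random.default_rng(rng)
    seeds = centers[rng.choice(len(centers), n_seeds, replace=False)]
    _, label = cKDTree(seeds).query(centers)
    return label

# toy usage: two well-separated clusters split into two supervoxels
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(size=(100, 3)),
                 rng.normal(size=(100, 3)) + 10.0])
centers = voxelize(pts, 1.0)
labels = grow_supervoxels(centers, 2, rng=0)
```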
8. The human semantic feature extraction method according to claim 7, wherein the similarity between voxels is calculated as follows:
D(i, j) = [ω_c·D_c²(i, j) + ω_s·D_s²(i, j)/(3R²) + ω_n·D_n²(i, j)]^(1/2)
wherein ω_c is the weighting factor of the colour difference, ω_s is the weighting factor of the distance difference, ω_n is the weighting factor of the normal difference, D_c(i, j) is the CIELab colour difference, D_s(i, j) is the coordinate distance difference, D_n(i, j) is the normal difference, and R is the voxel diameter.
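A minimal sketch of a voxel distance of this general form, combining a CIELab colour term, a spatial term normalised by the voxel diameter, and a normal term. The weight values and the 3R² normalisation of the spatial term are assumptions borrowed from common supervoxel formulations, not values given by the patent.

```python
import numpy as np

def voxel_distance(c1, c2, p1, p2, n1, n2, R, wc=0.2, ws=0.4, wn=1.0):
    """Feature distance between two voxels: c = CIELab colour, p = centroid,
    n = unit normal, R = voxel diameter. Smaller means more similar."""
    Dc = np.linalg.norm(c1 - c2)              # CIELab colour difference
    Ds = np.linalg.norm(p1 - p2)              # coordinate distance difference
    Dn = np.linalg.norm(n1 - n2)              # normal difference
    return float(np.sqrt(wc * Dc**2 + ws * Ds**2 / (3.0 * R**2) + wn * Dn**2))

# identical voxels have distance zero; differing normals raise it
c = np.array([50.0, 0.0, 0.0])
p = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])
```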
9. The human semantic feature extraction method according to claim 7, wherein the judgment criterion is determined from the line connecting the centroids of adjacent supervoxels, the normal directions at the centroids, and the common adjacent supervoxels of the adjacent supervoxels.
10. A human semantic feature extraction system comprising a processor and a storage device, wherein the storage device stores a plurality of instructions for the processor to load and execute the steps of the method according to any one of claims 1 to 9.
CN202010736075.1A 2020-07-28 2020-07-28 Human body semantic feature extraction method and system Active CN111882595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010736075.1A CN111882595B (en) 2020-07-28 2020-07-28 Human body semantic feature extraction method and system


Publications (2)

Publication Number Publication Date
CN111882595A true CN111882595A (en) 2020-11-03
CN111882595B CN111882595B (en) 2024-01-26


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067146A (en) * 2021-09-24 2022-02-18 北京字节跳动网络技术有限公司 Evaluation method, evaluation device, electronic device and computer-readable storage medium
CN116523973A (en) * 2023-01-10 2023-08-01 北京长木谷医疗科技股份有限公司 Bone registration method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038750A (en) * 2016-02-03 2017-08-11 上海源胜文化传播有限公司 A kind of three-dimensional (3 D) manikin generates system and method
US20180253909A1 (en) * 2017-03-06 2018-09-06 Sony Corporation Information processing apparatus, information processing method and user equipment
CN108876881A (en) * 2018-06-04 2018-11-23 浙江大学 Figure self-adaptation three-dimensional virtual human model construction method and animation system based on Kinect
CN111080776A (en) * 2019-12-19 2020-04-28 中德人工智能研究院有限公司 Processing method and system for human body action three-dimensional data acquisition and reproduction
US20200226827A1 (en) * 2019-01-10 2020-07-16 Electronics And Telecommunications Research Institute Apparatus and method for generating 3-dimensional full body skeleton model using deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李灵杰; 童晶; 步文瑜; 孙海舟; 陈正鸣: "Three-dimensional human body semantic feature extraction algorithm based on template matching" (基于模板匹配的三维人体语义特征提取算法), Computer and Modernization (计算机与现代化), no. 04 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant