CN107615337B - Three-dimensional hair modeling method and device

Info

Publication number
CN107615337B
Authority
CN
China
Prior art keywords
hair
point
head model
template
points
Legal status
Active
Application number
CN201680025609.1A
Other languages
Chinese (zh)
Other versions
CN107615337A (en)
Inventor
李阳
李江伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN107615337A
Application granted
Publication of CN107615337B
Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional hair modeling method and a three-dimensional hair modeling device are provided to solve the problems that reusing a hair template is highly complex and cannot meet the requirement of quick reuse. The method comprises the following steps: determining a first coordinate transformation relationship between a 3D head model for which hair is to be created and a preset reference head model, determining a second coordinate transformation relationship between the 3D head model and a preset 3D hair template, and registering the 3D head model and the 3D hair template based on the first coordinate transformation relationship and the second coordinate transformation relationship, where the 3D hair template is matched with the reference head model; and, when it is detected that an error area exists in the registered 3D hair template, deforming the hair in the error area of the 3D hair template by using a radial basis function (RBF) so as to correct the error area. The error area comprises an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root area formed by the hair root points of the 3D hair template covers a non-scalp area of the 3D head model.

Description

Three-dimensional hair modeling method and device
Technical Field
The present application relates to the field of three-dimensional image processing and computer graphics, and more particularly, to a method and apparatus for three-dimensional hair modeling.
Background
With the continuous improvement in the performance of mobile microprocessors such as ARM (Advanced RISC Machines) processors, high-quality three-dimensional (3D) character reconstruction on mobile phones has become a reality and is popular with mobile phone manufacturers and users alike. Human hair is a very important part of character portraits and digital character animation, and creating hair can significantly enhance the realism of avatars and virtual characters in entertainment applications. Therefore, modeling the 3D character's hair becomes a key issue to be solved in the 3D character reconstruction process.
Existing 3D hair modeling schemes fall mainly into two types: hair modeling and hair model reuse. Hair modeling mainly consists of establishing a model manually: a hair growing area is generally selected on the scalp of an existing head model, and the growth direction, length, degree of curl and so on of the hair in that area are then set. For a 3D face model reconstructed on a mobile phone, if the corresponding 3D hair model were generated manually, the computing resources occupied and the time consumed would be unacceptable to the user.
Hair model reuse refers to creating, on a different 3D character model, a new hair model that is similar to an existing hair model. In the prior art, a hair model is reused as follows: one or more 3D hair templates are stored in advance; a front-view hair image and a 3D head model are received; and the 3D hair template is then deformed based on the features of the front-view hair image in combination with the 3D head model, so as to generate a realistic 3D hair model.
When the 3D hair template is deformed, key points of the 3D hair template and the areas divided according to those key points are first defined; the hair shape is then detected in the front-view hair image; the detected hair shape is divided into a common part and a personalized part, where the common part refers to the portion of the hair that is relatively similar across different individuals and the personalized part refers to the portion that differs relatively strongly across individuals; the divided personalized part is approximated, and hair-style modeling is performed on the hair pieces of the approximated personalized part; the defined boundary key points of the 3D hair model are matched with the boundaries of the divided common part in combination with the 3D head model, and 3D data interpolation is performed on the matched result area; finally, each modeled 3D hair piece is combined with the 3D data interpolation result to generate a realistic 3D hair model.
In the prior art, the 3D hair model needs to be projected into 2D form when the 3D hair template is deformed, the key points are selected in that 2D form, and the 2D key points are then matched with the common part and the personalized part of the front-view hair image. The prior art is therefore limited by the detection results of the 2D key points, by the matching precision between the 2D key points and the common and personalized parts of the front-view hair image, and by the modeling of the personalized part of the hair; as a result, reusing the hair template is highly complex and cannot meet the requirement of quick reuse.
Disclosure of Invention
The present application provides a three-dimensional hair modeling method and a three-dimensional hair modeling device, which are used to solve the prior-art problems that reusing a hair template is highly complex and cannot meet the requirement of quick reuse.
In a first aspect, the present application provides a method of three-dimensional hair modeling, the method comprising:
determining a first coordinate transformation relationship between a 3D head model of the hair to be created and a preset reference head model, determining a second coordinate transformation relationship between the 3D head model and a preset 3D hair template, and then registering the 3D head model and the 3D hair template based on the first coordinate transformation relationship and the second coordinate transformation relationship, where the preset 3D hair template matches the preset reference head model; and, when it is detected that an error area exists in the registered 3D hair template, deforming the hair in the error area of the 3D hair template by using a radial basis function (RBF) so as to correct the error area, where the error area comprises an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root area formed by the hair root points of the 3D hair template covers the non-scalp layer of the 3D head model.
By this scheme, a 3D hair model with a good fitting effect can be quickly constructed for a 3D head model on a terminal with relatively low storage and computing capacity; the realism of the constructed 3D hair model compares favorably with that of a hair model generated by a hair modeling method, while a large amount of manual interaction and operation time is saved. Compared with the existing hair model reuse technology, this scheme is not limited by the detection results of the 2D key points, by the matching precision between the 2D key points and the common and personalized parts of the front-view hair image, or by the modeling of the personalized part of the hair, so the reusability of the 3D hair template can be effectively improved and the constructed hair model can more accurately retain the appearance of the prototype hair.
In one possible design, determining a first coordinate transformation relationship between a 3D head model of a hair to be created and a preset reference head model, and determining a second coordinate transformation relationship between the 3D head model and a preset 3D hair template, and registering the 3D head model and the 3D hair template based on the first coordinate transformation relationship and the second coordinate transformation relationship may be implemented as follows:
determining facial key points and scalp layer key points of the 3D head model of the hair to be created, and determining facial key points of the preset reference head model; matching the facial key points of the 3D head model with the facial key points of the reference head model to obtain face matching point pairs; determining the first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs; obtaining the three-dimensional coordinates of the 3D hair template in a target coordinate system according to the first coordinate transformation relationship, where the target coordinate system is the coordinate system in which the 3D head model is located; matching the hair root points of the 3D hair template, after transformation by the first coordinate transformation relationship, with the scalp layer key points of the 3D head model to obtain hair root matching point pairs; determining the second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs; and registering the 3D hair template with the 3D head model according to the second coordinate transformation relationship.
In the above design, the first coordinate transformation relationship and the second coordinate transformation relationship are determined by the facial key points and the scalp layer key points of the 3D head model and the facial key points of the reference head model and the hair root points of the 3D hair template, so that the amount of calculation can be reduced, and the method can be applied to terminal devices with relatively low storage capacity and relatively low calculation capacity.
In one possible design, the deformation of the hair in the error region of the 3D hair template using the radial basis function RBF may be implemented as follows:
performing the following respectively for each hair in the error region of the 3D hair template: selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point; and taking the hair matching point pairs as input parameters of the radial basis function, and carrying out deformation on each hair.
Wherein the first key points may include a root point and a tip point of each hair.
In the design, the selected first key point is matched with the coordinate point in the scalp layer by using the nearest neighbor algorithm, so that the matching is accurate and the calculated amount is small.
In one possible design, the deformation of the hair in the error region of the 3D hair template using the radial basis function RBF may be implemented as follows:
performing the following for each hair in the error region of the 3D hair template: selecting at least 3 three-dimensional coordinate points in each hair as second key points; determining, for each of the at least 3 second key points, its nearest neighbor point on the scalp layer of the 3D head model as its matching point through a nearest neighbor algorithm, each second key point and its matching point forming a hair matching point pair; deforming each hair by using the hair matching point pairs as input parameters of the radial basis function; dividing each deformed hair into at least two parts, and performing the following for each part of the hair: taking at least 3 three-dimensional coordinate points in each part as third key points; taking, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point as its matching point, each third key point and its matching point forming a segmented hair matching point pair; and deforming each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
The second key points can comprise a hair root point and a hair tip point in each hair; the third key point may include two end points of each of the portions of hair.
In one possible design, the deforming hair in the erroneous region of the 3D hair template using a radial basis function, RBF, comprises:
performing the following for each hair in the error region of the 3D hair template: dividing each hair into at least two parts, and performing the following for each part of the hair: taking at least 3 three-dimensional coordinate points in each part as fourth key points; taking, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point as its matching point, each fourth key point and its matching point forming a segmented hair matching point pair; and deforming each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
Wherein the fourth key point may include two end points of each part of hair.
In a second aspect, the present application provides a three-dimensional hair modeling apparatus comprising:
a first determination unit for determining a first coordinate transformation relationship between a 3D head model of a hair to be created and a preset reference head model;
a second determining unit, configured to determine a second coordinate transformation relationship between the 3D head model and a preset 3D hair template, where the preset 3D hair template matches the preset reference head model;
a registration unit, configured to register the 3D head model with the 3D hair template based on the first coordinate transformation relationship determined by the first determining unit and the second coordinate transformation relationship determined by the second determining unit;
the detection unit is used for detecting whether the 3D hair template after registration has an error area;
and a deformation unit, configured to deform, by using a radial basis function (RBF), the hair in the error area of the 3D hair template detected by the detection unit so as to correct the error area, where the error area comprises an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root area formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model.
By this scheme, a 3D hair model with a good fitting effect can be quickly constructed for a 3D head model on a terminal with relatively low storage and computing capacity; the realism of the constructed 3D hair model compares favorably with that of a hair model generated by a hair modeling method, while a large amount of manual interaction and operation time is saved. Compared with the existing hair model reuse technology, this scheme is not limited by the detection results of the 2D key points, by the matching precision between the 2D key points and the common and personalized parts of the front-view hair image, or by the modeling of the personalized part of the hair, so the reusability of the 3D hair template can be effectively improved and the constructed hair model can more accurately retain the appearance of the prototype hair.
In a possible design, the first determining unit is specifically configured to: determine facial key points and scalp layer key points of the 3D head model of the hair to be created; determine facial key points of the preset reference head model; match the facial key points of the 3D head model with the facial key points of the reference head model to obtain face matching point pairs; and determine the first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs. The registration unit is configured to obtain the three-dimensional coordinates of the 3D hair template in a target coordinate system according to the first coordinate transformation relationship, where the target coordinate system is the coordinate system in which the 3D head model is located. The second determining unit is specifically configured to: match the hair root points of the 3D hair template, after transformation by the first coordinate transformation relationship, with the scalp layer key points of the 3D head model to obtain hair root matching point pairs; and determine the second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs. The registration unit is further configured to register the 3D hair template with the 3D head model according to the second coordinate transformation relationship.
In one possible design, the deformation unit is specifically configured to: perform the following for each hair in the error region of the 3D hair template: select at least 3 three-dimensional coordinate points in each hair as first key points; determine the nearest neighbor point of each first key point on the scalp layer of the 3D head model as the matching point of that first key point through a nearest neighbor algorithm, each first key point and its matching point forming a hair matching point pair; and deform each hair by using the hair matching point pairs as input parameters of the radial basis function.
Wherein the first key points may include a root point and a tip point of each hair.
In one possible design, the deformation unit is specifically configured to: perform the following for each hair in the error region of the 3D hair template: select at least 3 three-dimensional coordinate points in each hair as second key points; determine, for each of the at least 3 second key points, its nearest neighbor point on the scalp layer of the 3D head model as its matching point through a nearest neighbor algorithm, each second key point and its matching point forming a hair matching point pair; deform each hair by using the hair matching point pairs as input parameters of the radial basis function; divide each deformed hair into at least two parts, and perform the following for each part of the hair: take at least 3 three-dimensional coordinate points in each part as third key points; take, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point as its matching point, each third key point and its matching point forming a segmented hair matching point pair; and deform each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
The second key points can comprise a hair root point and a hair tip point in each hair; the third key point may include two end points of each of the portions of hair.
In one possible design, the deformation unit is specifically configured to: perform the following for each hair in the error region of the 3D hair template: divide each hair into at least two parts, and perform the following for each part of the hair: take at least 3 three-dimensional coordinate points in each part as fourth key points; take, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point as its matching point, each fourth key point and its matching point forming a segmented hair matching point pair; and deform each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
Wherein the fourth key point may include two end points of each part of hair.
In a third aspect, the present application further provides a three-dimensional hair modeling apparatus, comprising a communication interface, a memory and a processor. The communication interface is used for obtaining the 3D head model of the hair to be created, the preset reference head model and the preset 3D hair template, where the preset reference head model matches the preset 3D hair template; the memory is used for storing the program code executed by the processor; and the processor is configured to execute the program code stored in the memory, and specifically to perform the operations in any one of the designs of the first aspect.
In a fourth aspect, the present application also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the operations as set forth in any of the designs of the first aspect.
Drawings
FIG. 1 is a schematic representation of facial keypoints for a selected 3D head model as provided herein;
FIG. 2 is a schematic diagram of facial keypoints for a selected reference head model as provided herein;
FIG. 3 is a schematic diagram of selected scalp layer keypoints for a 3D head model as provided herein;
FIG. 4 is a schematic diagram illustrating the correspondence between key points of a 3D hair template and their matching points;
FIG. 5 is a schematic diagram of the deformation of hair from whole to partial according to the present application;
FIG. 6 is a flow chart of a three-dimensional hair modeling method provided herein;
FIG. 7 is a flowchart of a 3D hair template and 3D head model registration method provided herein;
FIG. 8 is a schematic diagram of a three-dimensional hair modeling method provided herein;
FIG. 9 is a schematic view of a three-dimensional hair modeling apparatus provided herein;
FIG. 10 is a schematic view of another three-dimensional hair modeling apparatus provided herein;
FIG. 11 is a schematic diagram of the three-dimensional hair modeling effect provided by the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The present application provides a three-dimensional hair modeling method and a three-dimensional hair modeling device, which are used to solve the prior-art problems that reusing a hair template is highly complex and cannot meet the requirement of quick reuse. The method and the device are based on the same inventive concept; because the principles by which they solve the problem are similar, the implementations of the device and the method may refer to each other, and repeated descriptions are omitted.
The present application may be applied to a terminal with relatively low storage capacity and relatively low computing capacity, and certainly, may also be applied to an electronic device with relatively high storage capacity and relatively high computing capacity, which is not specifically limited in the present application.
One embodiment of a three-dimensional hair modeling method provided herein includes:
A three-dimensional (3D) head model for which hair is to be created may be prepared in advance, and the 3D head model may be divided into a three-dimensional scalp layer region and a three-dimensional non-scalp layer region. All three-dimensional coordinate points that make up this 3D head model constitute a 3D head model point set. The 3D head model point set may be divided into two subsets: a three-dimensional scalp layer region point set and a three-dimensional non-scalp layer region point set.
The three-dimensional non-scalp layer region point set comprises all facial feature points and the region enclosed by the face outline formed by these points, that is, the region from the ears to the chin. The information corresponding to the 3D head model may further include an index number for each facial feature point, and the index numbers of the facial feature points are recorded in the three-dimensional non-scalp layer region point set. The other region is the three-dimensional scalp layer region; feature points at the forehead, the temples and around the ears may be marked on its outline, and the region is basically left-right symmetric. The scalp layer feature points may likewise be indexed and numbered and recorded in the three-dimensional scalp layer region point set.
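For illustration only, the point sets described above could be organized as in the following sketch; the class name, field names and tuple-based point type are assumptions and not part of the application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class HeadModel:
    """Illustrative container for the 3D head model point sets described above."""
    points: List[Point3D]                                        # all 3D coordinate points of the head model
    scalp_indices: List[int] = field(default_factory=list)       # indices into `points` for the scalp layer region
    non_scalp_indices: List[int] = field(default_factory=list)   # indices into `points` for the non-scalp (facial) region

    def scalp_points(self) -> List[Point3D]:
        # The scalp layer region point set, recovered from the recorded index numbers
        return [self.points[i] for i in self.scalp_indices]
```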
In this application, a 3D hair template is prepared in advance and corresponds to a reference head model; that is, the 3D hair template is constructed for the reference head model and matches the reference head model. The 3D hair template and the reference head model may be ones previously stored in a 3D hair model library. The 3D hair model library stores a plurality of 3D hair templates and reference head models, and a user may select one of the 3D hair templates in advance. All three-dimensional coordinate points constituting the reference head model constitute a reference head model point set, and all three-dimensional coordinate points constituting the 3D hair template constitute a 3D hair template point set.
The three-dimensional hair modeling device determines a first coordinate transformation relationship between the 3D head model of the hair to be created and the preset reference head model.
Specifically, when determining the first coordinate transformation relationship, the facial key points of the 3D head model of the hair to be created and the scalp layer key points of the 3D head model are first determined; the facial key points of the reference head model are determined; and the facial key points of the 3D head model are matched with the facial key points of the reference head model to obtain face matching point pairs.
In the present application, the facial key points and scalp layer key points of the 3D head model of the hair to be created may be determined as follows:
The determined key points of the 3D head model can be divided into two categories. The first category is facial key points, which may include the eye regions, the nose region, the mouth region, the chin region, and so on; the second category is scalp layer key points, which may include the forehead area, the back-of-head area, the ear root areas, and so on. For example, as shown in FIG. 1, the determined facial key points of the 3D head model include 39 points, R1 to R39.
The key points may be determined using an existing feature extraction method, or may be selected manually; of course, the facial key points and scalp layer key points of the 3D head model may also be defined when the 3D head model is established. Alternatively, a subset of the feature points indexed in the three-dimensional non-scalp layer region point set may be selected as the facial key points, and a subset of the feature points indexed in the three-dimensional scalp layer region point set may be selected as the scalp layer key points.
In the present application, the facial key points of the reference head model on which the 3D hair template is based may be determined in the same manner as the facial key points of the 3D head model, and may include the eye regions, the nose region, the mouth region, the chin region, and so on. As shown in FIG. 2, the determined facial key points of the reference head model include 39 points, A1 to A39. The key points may be determined using an existing feature extraction method or selected manually, or the facial key points may be defined when the reference head model is established.
After determining the face key points of the 3D head model and the reference head model, the three-dimensional hair modeling device matches the face key points of the 3D head model and the reference head model to obtain face matching point pairs.
Since the facial key points of the 3D head model and the facial key points of the reference head model are selected in the same manner, the matching may be performed based on the index numbers to obtain the face matching point pairs, or the face matching point pairs may be selected manually. For example, the matching point pairs obtained after key point matching, as shown in FIG. 1 and FIG. 2, include: R1-A1, R2-A2, R6-A6, R7-A7, R10-A10, R13-A13, R14-A14, R19-A19, R20-A20 and R33-A33.
The three-dimensional hair modeling device determines a first coordinate transformation relationship between the 3D head model and a reference head model according to the face matching point pairs.
In a specific implementation, the first coordinate transformation relationship between the 3D head model and the reference head model is calculated using the face matching point pairs. The first coordinate transformation relationship s·R·t includes a scaling relationship, a rotation relationship and a translation relationship, where the scaling relationship is represented by a scaling factor s, the rotation relationship by a rotation matrix R, and the translation relationship by a translation vector t.
It is assumed in this application that the center point of the 3D hair model is the origin of the world coordinate system. The scaling factor s is calculated by formula (1):

s = (1/n) · Σ_{i=1}^{n} sqrt(x_i^2 + y_i^2 + z_i^2) / sqrt(x'_i^2 + y'_i^2 + z'_i^2)    formula (1)

where (x_i, y_i, z_i) are the coordinates of the facial key point of the 3D head model in the i-th face matching point pair, (x'_i, y'_i, z'_i) are the coordinates of the corresponding facial key point of the reference head model, and n is the total number of face matching point pairs.
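Reading formula (1) as the average, over the n face matching point pairs, of the ratio of the matched key points' distances from the assumed origin, a minimal NumPy sketch might be as follows; the function name and array layout are illustrative, and the key points are assumed to already be expressed relative to that origin.

```python
import numpy as np

def scale_factor(head_keypoints, ref_keypoints):
    """Average ratio of distances from the origin over the n face matching point pairs."""
    head = np.asarray(head_keypoints, dtype=float)   # n x 3, key points of the 3D head model
    ref = np.asarray(ref_keypoints, dtype=float)     # n x 3, corresponding key points of the reference head model
    return float(np.mean(np.linalg.norm(head, axis=1) / np.linalg.norm(ref, axis=1)))
```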
The rotation matrix R and the translation vector t are calculated as follows:
Before R and t are calculated, the dimensions are unified: the coordinates of all feature points of the 3D head model, the reference head model and the 3D hair template may be normalized. The face matching point pairs and a Singular Value Decomposition (SVD) algorithm are then used to compute R and t.
Suppose the matched facial key points in a pair are X_i and Y_i respectively; the essential matrix E is then defined as in formula (2):
Y_i^T E X_i = 0    formula (2)
Let P = [I | 0] be the external parameter of the 3D head model, where I denotes the identity matrix (i.e., the rotation matrix when no rotation occurs) and 0 denotes that no translation occurs. The external parameter of the reference head model is P' = [R | t]; the essential matrix E between the 3D head model and the reference head model is then as in formula (3):
E = [t]_× R    formula (3)
Performing SVD on the essential matrix E yields formula (4):
E = U diag(1, 1, 0) V^T    formula (4)
The external parameter P' of the reference head model then has the following four possible choices:
P' = [U W V^T | u_3]; [U W V^T | -u_3]; [U W^T V^T | u_3]; [U W^T V^T | -u_3];
where
W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
and u_3 = U (0, 0, 1)^T, i.e., the third column of U.
Optionally, a set of matched facial key point pairs is used to verify the four possible values of P', and the unique correct solution can be determined from the four candidates. If P' = [U W V^T | u_3] is the correct solution, then R = U W V^T and t = u_3.
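As an illustrative sketch of the SVD step and the four candidate external parameters (not the application's own code), the following NumPy routine decomposes a given essential matrix; choosing the unique correct candidate, for example by checking a matched key point pair as described above, is left to the caller.

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs from E = U diag(1,1,0) V^T (formula (4))."""
    U, _, Vt = np.linalg.svd(E)
    # Ensure proper rotations (determinant +1) before forming the candidates
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    u3 = U[:, 2]                       # u_3 = U (0, 0, 1)^T, the third column of U
    return [(U @ W @ Vt,  u3),
            (U @ W @ Vt, -u3),
            (U @ W.T @ Vt,  u3),
            (U @ W.T @ Vt, -u3)]
```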
After the first coordinate transformation relationship s·R·t is obtained, the three-dimensional coordinates of the preset 3D hair template in the target coordinate system are obtained according to the first coordinate transformation relationship, where the target coordinate system is the coordinate system in which the 3D head model is located.
In a specific implementation, all three-dimensional coordinate points included in the 3D hair template point set are transformed in the order rotation, scaling, translation.
In this application, the 3D hair template is built based on the reference head model, so the 3D hair template and the reference head model share the same coordinate system. Therefore, the three-dimensional coordinates of the preset 3D hair template in the target coordinate system can be obtained through the first coordinate transformation relationship between the reference head model and the 3D head model; that is, the three-dimensional coordinates of all points in the 3D hair template point set are transformed into coordinates in the coordinate system of the 3D head model.
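As an illustration only, applying s·R·t to the template point set in the stated rotation, scaling, translation order might look like this NumPy sketch; the function name and the row-vector point layout are assumptions.

```python
import numpy as np

def apply_first_transform(template_points, s, R, t):
    """Rotate, then scale, then translate the 3D hair template points into the
    target coordinate system (the coordinate system of the 3D head model)."""
    pts = np.asarray(template_points, dtype=float)   # m x 3 point set of the 3D hair template
    return s * (pts @ R.T) + t                       # rotate each point by R, scale by s, then shift by t
```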
The three-dimensional hair modeling device then determines a second coordinate transformation relationship between the 3D head model and the 3D hair template transformed by the first coordinate transformation relationship.
Specifically, when determining the second coordinate transformation relationship, the hair root points of the transformed 3D hair template are determined, and these hair root points are matched with the scalp layer key points of the 3D head model to obtain hair root matching point pairs.
The 3D hair template can be one of two types of model: one is a hairline model (also called a strand model), and the other is a non-hairline model, which may also be referred to as a body model. A hairline model means that the 3D hair template exists in the form of a plurality of hairs, each hair being composed of a plurality of points, and each hair of the hairline model having one hair root point. A non-hairline model means that the 3D hair template exists in the form of a plurality of hair areas, that is, the 3D hair template can be divided into a plurality of hair areas, and the shapes of the hairs within one hair area can be considered highly similar. For a non-hairline model, the points where the 3D hair template collides with the scalp layer of the 3D head model may be determined as hair root points through a collision detection algorithm; several hair root points of the 3D hair template are shown in FIG. 3.
In a specific implementation, the hair root points of the 3D hair template are matched with the scalp layer key points of the 3D head model by using a nearest neighbor algorithm to obtain the hair root matching point pairs. The nearest neighbor algorithm may be any algorithm provided in the prior art, and details are not repeated here.
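The application does not prescribe a particular nearest neighbor algorithm; one common choice is a k-d tree, as in this illustrative SciPy sketch (names are assumptions).

```python
import numpy as np
from scipy.spatial import cKDTree

def match_roots_to_scalp(root_points, scalp_keypoints):
    """Pair each hair root point with its nearest scalp layer key point."""
    scalp = np.asarray(scalp_keypoints, dtype=float)
    roots = np.asarray(root_points, dtype=float)
    _, idx = cKDTree(scalp).query(roots)                          # index of the nearest scalp key point per root
    return list(zip(map(tuple, roots), map(tuple, scalp[idx])))   # hair root matching point pairs
```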
And the three-dimensional hair modeling device determines a second coordinate transformation relation between the 3D head model and the 3D hair template according to the hair root matching point pairs.
In a specific implementation, the second coordinate transformation relationship s·R·t between the 3D head model and the 3D hair template is calculated using the hair root matching point pairs. The calculation method is the same as that used for the first coordinate transformation relationship and is not repeated here.
And the three-dimensional hair modeling device registers the 3D hair template and the 3D head model according to the second coordinate transformation relation.
Due to the difference between the 3D head model and the reference head model, after the 3D hair template and the 3D head model are registered through the second coordinate transformation relationship, an error region may exist in the registered 3D hair template. The error region comprises an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root region formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model. Therefore, when it is determined that the 3D hair template does not completely cover the scalp layer of the 3D head model, or that the hair root region formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model, the hair in the error region of the 3D hair template is deformed by using the radial basis function RBF until the deformed 3D hair template completely covers the scalp layer of the 3D head model and the hair root region formed by the hair root points in the 3D hair template no longer covers the non-scalp layer of the 3D head model.
When determining the error region, the following method may be used (but the determination is not limited to it):
and (3) emitting rays from the geometric central point of the 3D head model to each three-dimensional coordinate point forming the 3D hair template by using a ray casting algorithm, wherein if the rays projected to a certain area touch the area of the 3D hair template before touching the 3D head model, the hair in the area in the 3D hair template has wrong shielding relation.
When the 3D hair template is deformed using the radial basis function RBF, each hair in the error region of the 3D hair template may be deformed overall, from overall to local, or locally.
In implementation, the following operations are performed for each hair individually. In this application, hair A is taken as an example, and the other hairs may be processed in the same way as hair A.
The overall deformation comprises:
1) For example, in the hair A shown in FIG. 4, 3 three-dimensional coordinate points a1, a5 and a9 are selected as key points, where a1 and a9 are the hair tip point and the hair root point and a5 lies between them. The nearest neighbor point of each key point on the scalp layer of the 3D head model is determined through a nearest neighbor algorithm as the matching point of that key point, and each key point and its matching point form a hair matching point pair: the matching point of key point a1 on the scalp layer of the 3D head model is b1, the matching point of a5 is b5, and the matching point of a9 is b9. In FIG. 4, B denotes the portion of the scalp layer of the 3D head model to which the points a1 to a9 of hair A are matched. The hair matching point pairs are then used as input parameters of the radial basis function to deform hair A.
The local deformation includes:
2) Hair A is divided into at least two parts. Taking division into two parts as an example, as shown in FIG. 4, hair A is divided into two parts, A1 and A2. At least 3 three-dimensional coordinate points are selected as key points in each part: the 3 key points a1, a3 and a5 in A1, and the 3 key points a5, a7 and a9 in A2. Then, for each part, the nearest neighbor point of each of its key points among the three-dimensional coordinate points of the scalp layer of the 3D head model is determined as its matching point by the nearest neighbor algorithm, and each key point and its matching point form a segmented hair matching point pair: the matching points of a1, a3 and a5 are b1, b3 and b5, and the matching points of a5, a7 and a9 are b5, b7 and b9. Part A1 of hair A is then deformed with the segmented matching point pairs formed by a1, a3, a5 and b1, b3, b5 as input parameters of the radial basis function. Part A2 of hair A is likewise deformed with the segmented matching point pairs formed by a5, a7, a9 and b5, b7, b9 as input parameters of the radial basis function, as shown in FIG. 5. For convenience of description, numeral 1 in FIG. 5 denotes a1, numeral 2 denotes a2, numeral 3 denotes a3, and so on.
Global to local deformation includes:
3) Hair A is first deformed overall as in 1); the deformed hair A is then divided into at least two parts, and each part is locally deformed as in 2).
The radial basis function referred to in the present application is a predefined function; a specific function with a good deformation effect can be obtained through a large number of experiments.
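The exact kernel used by the application is defined by its own formula; the sketch below assumes a Gaussian kernel exp(-(r/c)^2) purely for illustration. Given the key points of one hair and their matching points on the scalp layer, it solves for RBF weights so that the key points map exactly onto their matches and then moves every point of the hair by the interpolated displacement.

```python
import numpy as np

def rbf_deform(strand_points, key_points, match_points, c=1.0):
    """Deform one hair using (key point, matching point) pairs as RBF constraints.
    The Gaussian kernel and the width c are assumptions for illustration only."""
    strand = np.asarray(strand_points, dtype=float)   # all 3D points of this hair
    keys = np.asarray(key_points, dtype=float)        # k x 3 selected key points (e.g. a1, a5, a9)
    matches = np.asarray(match_points, dtype=float)   # k x 3 matching points on the scalp layer (e.g. b1, b5, b9)

    def phi(r):
        return np.exp(-(r / c) ** 2)

    # Solve phi(K) @ weights = displacements so each key point lands on its matching point
    K = np.linalg.norm(keys[:, None, :] - keys[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(K), matches - keys)          # k x 3 RBF coefficient matrix

    # Interpolate the displacement at every point of the hair and apply it
    D = np.linalg.norm(strand[:, None, :] - keys[None, :, :], axis=-1)
    return strand + phi(D) @ weights
```

For the whole-to-local variant described above, the same routine would first be applied to the entire hair with the root, middle and tip key points, and then to each segment with that segment's own key points and matching points.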
Each hair is deformed in one of the deformation modes described for hair A, the 3D hair template is then updated, and it is determined whether the updated 3D hair template completely covers the scalp layer of the 3D head model and whether the hair root area formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model. When it is determined that the updated 3D hair template does not completely cover the scalp layer of the 3D head model, or that the hair root area formed by the hair root points in the 3D hair template covers part of the non-scalp layer of the 3D head model, the operation of step 1), 2) or 3) is repeated for each hair in the error region of the 3D hair template until the updated 3D hair template completely covers the scalp layer of the 3D head model and the hair root area formed by the hair root points does not cover the non-scalp layer of the 3D head model.
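Structurally, the repeat-until-covered behaviour described above amounts to a loop such as the following sketch; the callables detect_errors and deform_strand are hypothetical stand-ins for the error detection and RBF deformation steps, and the iteration cap is an assumption.

```python
def correct_error_region(hair_template, head_model, detect_errors, deform_strand, max_iters=20):
    """Iteratively deform the hairs of the error region until none remain (or max_iters is hit).
    detect_errors and deform_strand are caller-supplied callables standing in for the
    detection step and the deformation step described above."""
    for _ in range(max_iters):
        error_strands = detect_errors(hair_template, head_model)
        if not error_strands:
            break                       # template now covers the scalp layer correctly
        for strand in error_strands:
            deform_strand(strand, head_model)
    return hair_template
```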
According to the above method, a 3D hair model with a good fitting effect can be quickly constructed for a 3D head model on a terminal with relatively low storage and computing capacity; the realism of the constructed 3D hair model compares favorably with that of a hair model generated by a hair modeling method, while a large amount of manual interaction and operation time is saved. Compared with the existing hair model reuse technology, the method is not limited by the detection results of the 2D key points, by the matching precision between the 2D key points and the common and personalized parts of the front-view hair image, or by the modeling of the personalized part of the hair, so the reusability of the 3D hair template can be effectively improved and the constructed hair model can more accurately retain the appearance of the prototype hair.
Referring to fig. 6, a flow chart of a three-dimensional hair modeling method provided by the present application is shown.
S601, determining a first coordinate transformation relation between a 3D head model of a hair to be created and a preset reference head model, determining a second coordinate transformation relation between the 3D head model and a preset 3D hair template, and registering the 3D head model and the 3D hair template based on the first coordinate transformation relation and the second coordinate transformation relation. The above S601 may be referred to as a rigid registration process.
Wherein the preset 3D hair template is matched with the preset reference head model.
S602, when detecting that the 3D hair template after registration has an error area, deforming the hair in the error area of the 3D hair template by using a Radial Basis Function (RBF) so as to correct the error area.
The error area comprises an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root area formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model.
Alternatively, the rigid registration process may be implemented as follows, as shown in fig. 7.
S701, determining the face key points and the scalp layer key points of the 3D head model of the hair to be created, and determining the preset face key points of the reference head model.
S702, matching the face key points of the 3D head model with the face key points of the reference head model to obtain face matching point pairs.
S703, determining a first coordinate transformation relation between the 3D head model and a reference head model according to the face matching point pairs.
S704, obtaining the three-dimensional coordinates of the 3D hair template in a target coordinate system according to the first coordinate transformation relation.
And the target coordinate system is a coordinate system where the 3D head model is located.
S705, matching the hair root points of the 3D hair template after the transformation of the first coordinate transformation relation with the scalp layer key points of the 3D head model to obtain hair root matching point pairs.
S706, determining a second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs.
And S707, registering the 3D hair template and the 3D head model according to the second coordinate transformation relation.
On the basis of the embodiments described in FIG. 6 and FIG. 7, in the rigid registration process the first coordinate transformation relationship s1·R1·t1 includes a scaling relationship, a rotation relationship and a translation relationship, where the scaling relationship is represented by a scaling factor s1, the rotation relationship by a rotation matrix R1, and the translation relationship by a translation vector t1. The calculation of s1, R1 and t1 may refer to the corresponding description above and is not repeated here. Referring to FIG. 8, a schematic diagram of the three-dimensional hair modeling method provided herein is shown.
First, the face matching point pairs between the 3D head model and the reference head model are obtained through steps S701 and S702.
In S703, determining the first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs may be implemented as follows:
and determining a rotation relation, a scaling relation and a translation relation between the 3D head model and a reference head model according to the face matching point pairs.
Specifically, s1, R1, and t1 are calculated from the pairs of face matching points.
In S704, according to the first coordinate transformation relationship, obtaining three-dimensional coordinates of three-dimensional coordinate points of the 3D hair template in a coordinate system where the 3D head model is located, which may be implemented as follows:
and sequentially performing rotation, scaling and translation operations on the three-dimensional coordinate points of the 3D hair template by using the rotation relation, the scaling relation and the translation relation to obtain the three-dimensional coordinates of the 3D hair template in the coordinate system of the 3D head model. I.e. the 3D hair template is transformed into the coordinate system of the 3D head model.
Similarly, the second coordinate transformation relationship s2·R2·t2 includes a rotation relationship, a scaling relationship and a translation relationship, where the scaling relationship is represented by a scaling factor s2, the rotation relationship by a rotation matrix R2, and the translation relationship by a translation vector t2.
The hair root matching point pairs between the 3D head model and the 3D hair template are then obtained through S705. In S706, determining the second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs may be implemented as follows:
and determining a rotation relation, a scaling relation and a translation relation between the 3D head model and the 3D hair template according to the hair root matching point pairs.
Specifically, s2, R2, and t2 are calculated from the hair root matching point pairs.
In S707, the 3D hair template and the 3D head model are registered according to the second coordinate transformation relationship, which may be implemented as follows:
and sequentially performing rotation, scaling and translation operations on the three-dimensional coordinate points of the 3D hair template by using the rotation relation, the scaling relation and the translation relation, so as to register the 3D hair template with the 3D head model.
Optionally, when the radial basis function RBF is used to deform the hair in the error region of the 3D hair template, the deformation may specifically be performed overall, from overall to local, or locally.
The overall deformation for each hair comprises:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point;
and using the hair matching point pairs as input parameters of the radial basis functions to deform each hair.
Optionally, the first key points include the hair root point and the hair tip point of each hair; the other first key points may be chosen from the remaining three-dimensional coordinate points of the hair, for example points near the midpoint between the root point and the tip point.
The whole-to-local deformation of each hair includes:
a1, respectively executing the following steps for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as second key points; determining, for each of the at least 3 second key points, its nearest neighbor point on the scalp layer of the 3D head model as its matching point through a nearest neighbor algorithm, each second key point and its matching point forming a hair matching point pair; and deforming each hair by using the hair matching point pairs as input parameters of the radial basis function;
a2, dividing each hair after deformation into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of the hair as third key points; taking, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point as its matching point, each third key point and its matching point forming a segmented hair matching point pair; and deforming each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
Optionally, the second key points include a hair root point and a hair tip point in each hair;
the third key point includes two end points of each part of hair.
The local deformation for each hair includes:
performing the following respectively for each hair in the error region of the 3D hair template:
dividing each hair into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of the hair as fourth key points; taking, among the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point as its matching point, each fourth key point and its matching point forming a segmented hair matching point pair; and deforming each part of the hair by using the segmented hair matching point pairs as input parameters of the radial basis function.
Optionally, the fourth key points include the two end points of each part of the hair.
The present application also provides a three-dimensional hair modeling apparatus, as shown in fig. 9, including:
a first determining unit 901, configured to determine a first coordinate transformation relationship between a 3D head model of hair to be created and a preset reference head model;
a second determining unit 902, configured to determine a second coordinate transformation relationship between the 3D head model and a preset 3D hair template, where the preset 3D hair template matches with the preset reference head model;
a registration unit 903, configured to register the 3D head model with the 3D hair template based on the first coordinate transformation relationship determined by the first determining unit 901 and the second coordinate transformation relationship determined by the second determining unit 902;
a detecting unit 904, configured to detect whether an error region exists in the registered 3D hair template;
a deformation unit 905, configured to, when the detection unit 904 detects an error region, deform the hair in the error region of the 3D hair template by using a radial basis function RBF so as to correct the error region, where the error region includes an area where the 3D hair template does not completely cover the scalp layer of the 3D head model, or an area where the hair root region formed by the hair root points in the 3D hair template covers the non-scalp layer of the 3D head model.
According to the above apparatus, a 3D hair model with a good fitting effect can be quickly constructed for a 3D head model on a terminal with relatively low storage and computing capacity; the realism of the constructed 3D hair model compares favorably with that of a hair model generated by a hair modeling method, while a large amount of manual interaction and operation time is saved. Compared with the existing hair model reuse technology, the apparatus is not limited by the detection results of the 2D key points, by the matching precision between the 2D key points and the common and personalized parts of the front-view hair image, or by the modeling of the personalized part of the hair, so the reusability of the 3D hair template can be effectively improved and the constructed hair model can more accurately retain the appearance of the prototype hair.
Optionally, the first determining unit 901 is specifically configured to:
determining facial key points and scalp layer key points of a 3D head model of hair to be created; determining face key points of a preset reference head model; matching the facial key points of the 3D head model with the facial key points of the reference head model to obtain facial matching point pairs; determining a first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs;
the registration unit 903 is configured to obtain the three-dimensional coordinates of the 3D hair template in a target coordinate system according to the first coordinate transformation relationship, where the target coordinate system is the coordinate system in which the 3D head model is located;
the second determining unit 902 is specifically configured to: matching the hair root points of the 3D hair template transformed by the first coordinate transformation relationship with the scalp layer key points of the 3D head model to obtain hair root matching point pairs; and determining a second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs;
the registration unit 903 is configured to register the 3D hair template with the 3D head model according to the second coordinate transformation relationship.
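The two coordinate transformation relationships above are estimated from matched point pairs (facial matching pairs for the first, hair root matching pairs for the second). The patent does not name a particular solver; a common choice for this kind of point-pair fit is the least-squares rigid transform (Kabsch/Procrustes). The sketch below illustrates that choice only; the function name, array shapes, and the fitting direction (reference model onto the 3D head model) are our assumptions.

```python
import numpy as np

def fit_rigid_transform(src_pts, dst_pts):
    """Least-squares rigid transform: find R, t with R @ src + t ~= dst for
    matched 3D point pairs (e.g. reference-model facial key points ->
    facial key points of the 3D head model)."""
    src = np.asarray(src_pts, dtype=float)           # (N, 3)
    dst = np.asarray(dst_pts, dtype=float)           # (N, 3)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Applying the fitted transform to the 3D hair template brings it into the
# target coordinate system of the 3D head model:
#   hair_in_target = hair_points @ R.T + t
```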
In a possible implementation manner, the deformation unit 905 is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point;
and taking the hair matching point pairs as input parameters of the radial basis function, and carrying out deformation on each hair.
Optionally, the first key points include a root point and a tip point of each hair.
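The overall deformation described above has two ingredients: nearest-neighbour matching of a few key points per hair against the scalp layer, and an RBF interpolant built from those matching point pairs that warps the whole strand. The sketch below is a minimal illustration of those mechanics, assuming a Gaussian kernel, a KD-tree for the nearest-neighbour search, and root/tip/mid-point key points; the kernel, the epsilon value, and the function names are ours, not the patent's.

```python
import numpy as np
from scipy.spatial import cKDTree

def gaussian_rbf_warp(strand_pts, key_pts, matched_pts, eps=1.0):
    """Warp every point of one hair strand with an RBF displacement field that
    moves each key point onto its matched scalp-layer point."""
    strand_pts = np.asarray(strand_pts, float)       # (N, 3)
    key_pts = np.asarray(key_pts, float)             # (K, 3)
    matched_pts = np.asarray(matched_pts, float)     # (K, 3)
    d_kk = np.linalg.norm(key_pts[:, None] - key_pts[None], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * d_kk) ** 2),   # K x K kernel matrix
                              matched_pts - key_pts)        # displacements at key points
    d_nk = np.linalg.norm(strand_pts[:, None] - key_pts[None], axis=-1)
    return strand_pts + np.exp(-(eps * d_nk) ** 2) @ weights

def overall_deformation(strand_pts, scalp_pts):
    """Overall deformation of one strand: pick >= 3 key points (root, mid, tip),
    match each to its nearest neighbour on the scalp layer, and warp the strand."""
    strand_pts = np.asarray(strand_pts, float)
    scalp_pts = np.asarray(scalp_pts, float)
    key_pts = strand_pts[[0, len(strand_pts) // 2, -1]]      # first key points
    _, nn_idx = cKDTree(scalp_pts).query(key_pts)            # nearest-neighbour matching
    return gaussian_rbf_warp(strand_pts, key_pts, scalp_pts[nn_idx])
```

In practice the displacement would typically be constrained so that mainly the root region is pulled onto the scalp; the sketch only shows how the matching point pairs parameterize the RBF.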
In a possible implementation manner, the deformation unit 905 is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as second key points, determining, through a nearest neighbor point algorithm, the nearest neighbor point of each of the at least 3 second key points on the scalp layer of the 3D head model as the matching point of each second key point, and forming a hair matching point pair by each second key point and the corresponding matching point;
using the hair matching point pairs as input parameters of the radial basis function to deform each hair; dividing each deformed hair into at least two parts, and respectively executing the following steps for each part of hair: taking at least 3 three-dimensional coordinate points in each part of hair as third key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point in each part of hair as the matching point of each third key point, and forming a segmented hair matching point pair by each third key point and the corresponding matching point; and deforming each part of hair by using the segmented hair matching point pairs as input parameters of a radial basis function.
Optionally, the second key points may include a root point and a tip point of each hair; the third key point comprises two end points of each part of hair.
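For the whole-to-local variant, after the global pass each strand is split into at least two parts and the matching is repeated per segment, using segment end points (plus at least one interior point) as key points. The sketch below only covers the segmentation and the construction of the segmented hair matching point pairs, assuming an equal split; each pair set would then drive a per-segment RBF warp exactly as in the previous sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def segmented_matching_pairs(strand_pts, scalp_pts, n_parts=2):
    """Split one (already globally deformed) strand into n_parts segments and
    build the segmented hair matching point pairs for each segment."""
    scalp_pts = np.asarray(scalp_pts, float)
    scalp_tree = cKDTree(scalp_pts)
    pairs = []
    for seg in np.array_split(np.asarray(strand_pts, float), n_parts):
        key_pts = seg[[0, len(seg) // 2, -1]]     # two end points plus an interior point
        _, nn_idx = scalp_tree.query(key_pts)     # nearest scalp-layer points
        pairs.append((key_pts, scalp_pts[nn_idx]))
    return pairs  # each (key points, matched points) set parameterizes one local RBF warp
```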
In a possible implementation manner, the deformation unit 905 is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template: dividing each hair into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of hair as fourth key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point in each part of hair as the matching point of each fourth key point, and forming a segmented hair matching point pair by each fourth key point and the corresponding matching point; and deforming each part of hair by using the segmented hair matching point pairs as input parameters of a radial basis function.
Optionally, the fourth key points may include the two end points of each part of hair.
The division of the units in this application is schematic and is merely a division by logical function; there may be other division manners in actual implementation. In addition, the functional units in the embodiments of this application may be integrated into one processor, may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
When the integrated unit is implemented in the form of hardware, the first determining unit 901, the second determining unit 902, the registration unit 903, the detection unit 904, and the deformation unit 905 may be a processor 1001 in the physical hardware of the three-dimensional hair modeling apparatus, as shown in Fig. 10. The three-dimensional hair modeling apparatus may further include a memory 1002 for storing the program code executed by the processor 1001.
The memory 1002 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may also be a combination of the above.
The three-dimensional hair modeling apparatus may further include a communication interface 1003 for configuring the 3D head model of the hair to be created, the preset reference head model, and the preset 3D hair template, where the preset reference head model matches the preset 3D hair template. The configured data are saved in the memory 1002.
The three-dimensional hair modeling apparatus may further include a display 1004 for displaying the 3D head model of the hair to be created, the preset reference head model, and the preset 3D hair template.
The processor 1001, the memory 1002, the communication interface 1003, and the display 1004 may be connected via a bus 1005. The manner of connection between these components is merely illustrative and is not intended to be limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 10, but this does not mean that there is only one bus or one type of bus.
The processor 1001 is configured to execute the program code stored in the memory 1002, and specifically performs the following operations:
determining a first coordinate transformation relation between a 3D head model of a hair to be created and a preset reference head model, determining a second coordinate transformation relation between the 3D head model and a preset 3D hair template, and registering the 3D head model and the 3D hair template based on the first coordinate transformation relation and the second coordinate transformation relation. The above S601 may be referred to as a rigid registration process.
Wherein the preset 3D hair template is matched with the preset reference head model.
When it is detected that the registered 3D hair template has an error region, the hair in the error region of the 3D hair template is deformed by using a radial basis function (RBF) to correct the error region.
The error region includes a region of the scalp layer of the 3D head model that is not completely covered by the 3D hair template, or a region of the non-scalp layer of the 3D head model that is covered by a hair root region formed by the hair root points in the 3D hair template.
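The text states what counts as an error region but not how it is detected. Purely as an illustration, the sketch below checks the two cases with distance queries, under our own assumptions: a per-vertex boolean scalp label on the head model, a KD-tree for proximity queries, and a coverage tolerance `tol`; none of these details come from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_error_region(head_pts, is_scalp, hair_pts, root_pts, tol=0.01):
    """Return (scalp vertices not covered by the hair template,
               hair root points lying over a non-scalp region).
    head_pts : (M, 3) vertices of the 3D head model
    is_scalp : (M,) bool, True for scalp-layer vertices
    hair_pts : (H, 3) points of the registered 3D hair template
    root_pts : (R, 3) hair root points of the template."""
    head_pts = np.asarray(head_pts, float)
    is_scalp = np.asarray(is_scalp, bool)
    scalp_idx = np.flatnonzero(is_scalp)
    # Case 1: scalp-layer vertices left uncovered by the hair template.
    dist_to_hair, _ = cKDTree(np.asarray(hair_pts, float)).query(head_pts[scalp_idx])
    uncovered_scalp = scalp_idx[dist_to_hair > tol]
    # Case 2: hair roots whose closest head-model vertex is not on the scalp layer.
    _, nearest_head = cKDTree(head_pts).query(np.asarray(root_pts, float))
    roots_on_non_scalp = np.flatnonzero(~is_scalp[nearest_head])
    return uncovered_scalp, roots_on_non_scalp
```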
When performing the rigid registration procedure, the processor 1001 may operate as follows:
determining facial key points and scalp layer key points of the 3D head model of the hair to be created, and determining facial key points of the preset reference head model; matching the facial key points of the 3D head model with the facial key points of the reference head model to obtain facial matching point pairs; determining a first coordinate transformation relationship between the 3D head model and the reference head model according to the facial matching point pairs; obtaining the three-dimensional coordinates of the 3D hair template in a target coordinate system according to the first coordinate transformation relationship, where the target coordinate system is the coordinate system in which the 3D head model is located; matching the hair root points of the 3D hair template transformed by the first coordinate transformation relationship with the scalp layer key points of the 3D head model to obtain hair root matching point pairs; determining a second coordinate transformation relationship between the 3D head model and the 3D hair template according to the hair root matching point pairs; and registering the 3D hair template with the 3D head model according to the second coordinate transformation relationship.
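Both transformations are rigid, so applying them to the 3D hair template amounts to two successive rotation-plus-translation steps (or their composition). A short sketch, assuming each transformation is given as a rotation matrix and a translation vector as in the earlier Kabsch sketch:

```python
import numpy as np

def apply_rigid(points, R, t):
    """Apply one rigid transform (R, t) to an (N, 3) array of points."""
    return np.asarray(points, float) @ np.asarray(R, float).T + np.asarray(t, float)

def register_hair_template(hair_pts, first_tf, second_tf):
    """first_tf brings the template into the target coordinate system of the
    3D head model; second_tf, fitted on the hair root matching pairs, refines it."""
    (R1, t1), (R2, t2) = first_tf, second_tf
    return apply_rigid(apply_rigid(hair_pts, R1, t1), R2, t2)

# The two steps compose into a single rigid transform: R = R2 @ R1, t = R2 @ t1 + t2.
```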
When determining the facial key points and scalp layer key points of the 3D head model of the hair to be created, the user may select these key points on the 3D head model displayed on the display 1004, and the processor 1001 then determines the facial key points and scalp layer key points of the 3D head model after receiving the user input. The facial key points of the preset reference head model may be determined in the same way.
Optionally, when the processor 1001 deforms the hair in the error region of the 3D hair template by using the radial basis function (RBF), it may deform each hair as a whole, from the whole to the local, or locally.
The overall deformation for each hair comprises:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point;
and using the hair matching point pairs as input parameters of the radial basis functions to deform each hair.
Optionally, the first key points include the root point and the tip point of each hair. The remaining first key points may be other three-dimensional coordinate points on the hair; specifically, three-dimensional coordinate points near the midpoint between the root point and the tip point may be selected.
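As a concrete illustration of that choice of first key points, the snippet below picks the root, the tip, and the sample closest to the arc-length midpoint of a strand stored as an ordered polyline; the arc-length criterion is our assumption, since the text only asks for points near the midpoint.

```python
import numpy as np

def pick_first_key_points(strand_pts):
    """Return the root point, a point near the arc-length midpoint, and the tip point."""
    strand_pts = np.asarray(strand_pts, float)         # (N, 3), ordered from root to tip
    seg_len = np.linalg.norm(np.diff(strand_pts, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg_len)))  # cumulative arc length per sample
    mid_idx = int(np.argmin(np.abs(arc - arc[-1] / 2.0)))
    return strand_pts[[0, mid_idx, -1]]
```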
The whole-to-local deformation of each hair includes:
performing the following respectively for each hair in the error region of the 3D hair template: selecting at least 3 three-dimensional coordinate points in each hair as second key points, determining, through a nearest neighbor point algorithm, the nearest neighbor point of each of the at least 3 second key points on the scalp layer of the 3D head model as the matching point of each second key point, and forming a hair matching point pair by each second key point and the corresponding matching point; using the hair matching point pairs as input parameters of the radial basis function to deform each hair;
dividing each deformed hair into at least two parts, and respectively executing the following steps for each part of hair: taking at least 3 three-dimensional coordinate points in each part of hair as third key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point in each part of hair as the matching point of each third key point, and forming a segmented hair matching point pair by each third key point and the corresponding matching point; and deforming each part of hair by using the segmented hair matching point pairs as input parameters of a radial basis function.
Optionally, the second key points include a hair root point and a hair tip point in each hair;
the third key point includes two end points of each part of hair.
The local deformation for each hair includes:
performing the following respectively for each hair in the error region of the 3D hair template: dividing each hair into at least two parts, and respectively executing the following steps for each part of hair: taking at least 3 three-dimensional coordinate points in each part of hair as fourth key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point in each part of hair as the matching point of each fourth key point, and forming a segmented hair matching point pair by each fourth key point and the corresponding matching point; and deforming each part of hair by using the segmented hair matching point pairs as input parameters of a radial basis function.
Optionally, the fourth key points include the two end points of each part of hair.
As shown in Fig. 11, in this application the rigid matching between the pre-constructed 3D head model and the pre-stored 3D hair template is performed first. After the rigid matching, the 3D hair template is aligned with the scalp layer of the 3D head model, but some regions may still have an incorrect occlusion relationship, such as the region marked by the circle in Fig. 11. The scalp layer key points of the 3D head model are then used to guide the whole-to-local deformation of the 3D hair template. This deformation eliminates the incorrect occlusion relationships and yields a 3D hair template that fits the 3D head model well.
According to this application, a 3D hair model with a good fitting effect can be quickly constructed for a 3D head model on a terminal with relatively low storage and computing capabilities, the realism of the constructed 3D hair model is comparable to that of a hair model generated by a hair modeling method, and a large amount of manual interaction and operation time is saved. Compared with existing hair model reuse techniques, this application is not limited by the detection result of 2D key points, the matching precision between the 2D key points and the common and individual parts of the front-view hair image, or the modeling of the individual hair parts; therefore, the reusability of the 3D hair template can be effectively improved, and the constructed hair model can more accurately preserve the appearance of the prototype hair.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A method of three-dimensional hair modeling, comprising:
determining facial key points and scalp layer key points of a 3D head model of hair to be created;
determining face key points of a preset reference head model;
matching the facial key points of the 3D head model with the facial key points of the reference head model to obtain facial matching point pairs;
determining a first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs;
obtaining a three-dimensional coordinate of a preset 3D hair template in a target coordinate system according to the first coordinate transformation relation, wherein the preset 3D hair template is matched with the preset reference head model; the target coordinate system is a coordinate system where the 3D head model is located;
matching the hair root points of the 3D hair template after the transformation of the first coordinate transformation relation with the scalp layer key points of the 3D head model to obtain hair root matching point pairs;
determining a second coordinate transformation relation between the 3D head model and the 3D hair template according to the hair root matching point pair;
registering the 3D hair template with the 3D head model according to the second coordinate transformation relation;
when it is detected that the registered 3D hair template has an error region, deforming the hair in the error region of the 3D hair template by using a radial basis function (RBF) to correct the error region, wherein the error region comprises a region of the scalp layer of the 3D head model that is not completely covered by the 3D hair template, or a region of the non-scalp layer of the 3D head model that is covered by a hair root region formed by hair root points in the 3D hair template.
2. The method of claim 1, wherein said deforming hair in the erroneous region of the 3D hair template using a radial basis function, RBF, comprises:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point;
and taking the hair matching point pairs as input parameters of the radial basis function, and carrying out deformation on each hair.
3. The method of claim 2, wherein said first keypoints comprise a root point and a tip point of each of said hairs.
4. The method of claim 1, wherein said deforming hair in the erroneous region of the 3D hair template using a radial basis function, RBF, comprises:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as second key points, determining, through a nearest neighbor point algorithm, the nearest neighbor point of each of the at least 3 second key points on the scalp layer of the 3D head model as the matching point of each second key point, and forming a hair matching point pair by each second key point and the corresponding matching point;
using the hair matching point pairs as input parameters of radial basis functions to deform each hair;
dividing each hair after deformation into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of hair as third key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point in each part of hair as the matching point of each third key point, and forming a segmented hair matching point pair by each third key point and the corresponding matching point;
and deforming each part of hair by taking the segmented hair matching point pairs as input parameters of a radial basis function.
5. The method of claim 4, wherein said second key points comprise a root point and a tip point in each of said hairs;
the third key point comprises two end points of each part of hair.
6. The method of claim 1, wherein said deforming hair in the erroneous region of the 3D hair template using a radial basis function, RBF, comprises:
performing the following respectively for each hair in the error region of the 3D hair template:
dividing each hair into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of hair as fourth key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point in each part of hair as the matching point of each fourth key point, and forming a segmented hair matching point pair by each fourth key point and the corresponding matching point;
and deforming each part of hair by taking the segmented hair matching point pairs as input parameters of a radial basis function.
7. The method of claim 6, wherein said fourth keypoints comprise two end points of each of said portions of hair.
8. A three-dimensional hair modeling apparatus, comprising:
a first determining unit for determining facial key points and scalp layer key points of a 3D head model of hair to be created; determining face key points of a preset reference head model; matching the facial key points of the 3D head model with the facial key points of the reference head model to obtain facial matching point pairs; determining a first coordinate transformation relationship between the 3D head model and the reference head model according to the face matching point pairs;
the registration unit is used for obtaining the three-dimensional coordinates of the preset 3D hair template in a target coordinate system according to the first coordinate transformation relation, wherein the preset 3D hair template is matched with the preset reference head model; the target coordinate system is a coordinate system where the 3D head model is located;
the second determining unit is used for matching the hair root points of the 3D hair template after the transformation of the first coordinate transformation relation with the scalp layer key points of the 3D head model to obtain hair root matching point pairs; determining a second coordinate transformation relation between the 3D head model and the 3D hair template according to the hair root matching point pair;
the registration unit is further configured to register the 3D hair template with the 3D head model according to the second coordinate transformation relationship;
the detection unit is used for detecting whether the 3D hair template after registration has an error area;
and the deformation unit is used for deforming, by using a radial basis function (RBF), the hair in the error region of the 3D hair template detected by the detection unit, so as to correct the error region, wherein the error region comprises a region of the scalp layer of the 3D head model that is not completely covered by the 3D hair template, or a region of the non-scalp layer of the 3D head model that is covered by a hair root region formed by hair root points in the 3D hair template.
9. The apparatus according to claim 8, wherein the shape-changing unit is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as first key points, determining the nearest neighbor point of each first key point on the scalp layer of the 3D head model as a matching point of each first key point through a nearest neighbor point algorithm, and forming a hair matching point pair by each first key point and the corresponding matching point;
and taking the hair matching point pairs as input parameters of the radial basis function, and carrying out deformation on each hair.
10. The device of claim 9, wherein said first key points comprise a root point and a tip point of each of said hairs.
11. The apparatus according to claim 8, wherein the shape-changing unit is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template:
selecting at least 3 three-dimensional coordinate points in each hair as second key points, determining, through a nearest neighbor point algorithm, the nearest neighbor point of each of the at least 3 second key points on the scalp layer of the 3D head model as the matching point of each second key point, and forming a hair matching point pair by each second key point and the corresponding matching point;
using the hair matching point pairs as input parameters of radial basis functions to deform each hair;
dividing each hair after deformation into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of hair as third key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each third key point in each part of hair as the matching point of each third key point, and forming a segmented hair matching point pair by each third key point and the corresponding matching point;
and deforming each part of hair by taking the segmented hair matching point pairs as input parameters of a radial basis function.
12. The device of claim 11, wherein said second key points comprise a root point and a tip point in each of said hairs;
the third key point comprises two end points of each part of hair.
13. The apparatus according to claim 8, wherein the shape-changing unit is specifically configured to:
performing the following respectively for each hair in the error region of the 3D hair template:
dividing each hair into at least two parts, and respectively executing the following steps for each part of hair:
taking at least 3 three-dimensional coordinate points in each part of hair as fourth key points, taking, from the three-dimensional coordinate points of the scalp layer of the 3D head model, the nearest neighbor point of each fourth key point in each part of hair as the matching point of each fourth key point, and forming a segmented hair matching point pair by each fourth key point and the corresponding matching point;
and deforming each part of hair by taking the segmented hair matching point pairs as input parameters of a radial basis function.
14. The apparatus of claim 13, wherein said fourth keypoints comprise two end points of each said portion of hair.
15. A three-dimensional hair modeling apparatus, comprising:
a communication interface, a memory, and a processor;
the communication interface is used for configuring a 3D head model of the hair to be created, a preset reference head model and a preset 3D hair template, and the preset reference head model is matched with the preset 3D hair template;
the memory is used for storing program codes executed by the processor;
the processor is configured to execute the program code stored in the memory, and in particular to perform the method of any one of claims 1 to 7.
CN201680025609.1A 2016-04-28 2016-04-28 Three-dimensional hair modeling method and device Active CN107615337B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/080511 WO2017185301A1 (en) 2016-04-28 2016-04-28 Three-dimensional hair modelling method and device

Publications (2)

Publication Number Publication Date
CN107615337A CN107615337A (en) 2018-01-19
CN107615337B true CN107615337B (en) 2020-08-25

Family

ID=60161663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680025609.1A Active CN107615337B (en) 2016-04-28 2016-04-28 Three-dimensional hair modeling method and device

Country Status (2)

Country Link
CN (1) CN107615337B (en)
WO (1) WO2017185301A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202597A (en) * 2021-12-07 2022-03-18 北京百度网讯科技有限公司 Image processing method and apparatus, device, medium, and product

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002553B (en) * 2018-08-08 2021-10-01 北京旷视科技有限公司 Method and device for constructing hair model, electronic equipment and computer readable medium
CN109408653B (en) * 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation
CN112426716B (en) * 2020-11-26 2024-08-13 网易(杭州)网络有限公司 Three-dimensional hair model processing method, device, equipment and storage medium
CN112419487B (en) * 2020-12-02 2023-08-22 网易(杭州)网络有限公司 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
CN113713387B (en) * 2021-08-27 2024-08-09 网易(杭州)网络有限公司 Virtual hair model rendering method, device, equipment and storage medium
CN114373057B (en) * 2021-12-22 2024-08-06 聚好看科技股份有限公司 Method and equipment for matching hair with head model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236911A (en) * 2010-03-17 2011-11-09 卡西欧计算机株式会社 3d modeling apparatus and 3d modeling method
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344373A (en) * 2008-08-14 2009-01-14 中国人民解放军总后勤部军需装备研究所 Standardization processing method based on three-dimensional head and face curved surface modeling
US9317970B2 (en) * 2010-01-18 2016-04-19 Disney Enterprises, Inc. Coupled reconstruction of hair and skin
CN102800129B (en) * 2012-06-20 2015-09-30 浙江大学 A kind of scalp electroacupuncture based on single image and portrait edit methods
CN103035030B (en) * 2012-12-10 2015-06-17 西北大学 Hair model modeling method
CN103606186B (en) * 2013-02-02 2016-03-30 浙江大学 The virtual hair style modeling method of a kind of image and video
CN103366400B (en) * 2013-07-24 2017-09-12 深圳市华创振新科技发展有限公司 A kind of three-dimensional head portrait automatic generation method
CN103955962B (en) * 2014-04-21 2018-03-09 华为软件技术有限公司 A kind of device and method of virtual human hair's generation
CN105405163B (en) * 2015-12-28 2017-12-15 北京航空航天大学 A kind of static scalp electroacupuncture method true to nature based on multi-direction field

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236911A (en) * 2010-03-17 2011-11-09 卡西欧计算机株式会社 3d modeling apparatus and 3d modeling method
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202597A (en) * 2021-12-07 2022-03-18 北京百度网讯科技有限公司 Image processing method and apparatus, device, medium, and product
CN114202597B (en) * 2021-12-07 2023-02-03 北京百度网讯科技有限公司 Image processing method and apparatus, device, medium and product

Also Published As

Publication number Publication date
CN107615337A (en) 2018-01-19
WO2017185301A1 (en) 2017-11-02

Similar Documents

Publication Publication Date Title
CN107615337B (en) Three-dimensional hair modeling method and device
US11055906B2 (en) Method, device and computing device of face image fusion
CN112419487B (en) Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
US9202312B1 (en) Hair simulation method
EP3912138A1 (en) Systems and methods for photorealistic real-time portrait animation
CN107392984A (en) A kind of method and computing device based on Face image synthesis animation
CN111833236B (en) Method and device for generating three-dimensional face model for simulating user
KR101430122B1 (en) System, method and computer readable recording medium for simulating hair style
KR102689515B1 (en) Methods and apparatus, electronic devices and storage media for processing facial information
CN112581518B (en) Eyeball registration method, device, server and medium based on three-dimensional cartoon model
CN112862807B (en) Hair image-based data processing method and device
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
TWI763205B (en) Method and apparatus for key point detection, electronic device, and storage medium
WO2017054652A1 (en) Method and apparatus for positioning key point of image
CN113870420A (en) Three-dimensional face model reconstruction method and device, storage medium and computer equipment
CN112184852A (en) Auxiliary drawing method and device based on virtual imaging, storage medium and electronic device
CN106203304B (en) Image generation method and mobile terminal thereof
WO2024140081A1 (en) Method and apparatus for processing facial image, and computer device and storage medium
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
KR101508161B1 (en) Virtual fitting apparatus and method using digital surrogate
CN109754467A (en) Three-dimensional face construction method, computer storage medium and computer equipment
CN114926324A (en) Virtual fitting model training method based on real character image, virtual fitting method, device and equipment
TWI728037B (en) Method and device for positioning key points of image
Vanakittistien et al. 3D hair model from small set of images
Marinescu et al. A versatile 3d face reconstruction from multiple images for face shape classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant