CN113269822B - Person hair style portrait reconstruction method and system for 3D printing - Google Patents


Info

Publication number
CN113269822B
Authority
CN
China
Prior art keywords
model
texture
hairstyle
hair
template
Prior art date
Legal status
Active
Application number
CN202110558375.XA
Other languages
Chinese (zh)
Other versions
CN113269822A (en)
Inventor
吕琳
陈明海
陈瀚
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202110558375.XA priority Critical patent/CN113269822B/en
Publication of CN113269822A publication Critical patent/CN113269822A/en
Application granted granted Critical
Publication of CN113269822B publication Critical patent/CN113269822B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The invention provides a person hairstyle portrait reconstruction method and system for 3D printing. A set of target portrait photos is acquired, comprising at least two half-body portraits and one hair close-up; a two-dimensional hairstyle contour is extracted from the acquired photos; a rough hairstyle template is matched according to the extracted contour and a preset database of three-dimensional rough hairstyle template models; a two-dimensional hair texture is synthesized from the hair close-up using a texture synthesis method; the two-dimensional hair texture is bound to the matched hairstyle template model, a geometric texture is generated from the texture color information at each vertex, and the resulting geometric texture is further processed; finally, the hairstyle template model carrying the generated geometric texture is registered to the repaired initial reconstruction model to obtain a person portrait model for 3D printing. The method yields a 3D-printable hairstyle model that restores the overall shape and carries fine geometric details, accepts hairstyle inputs of various styles and hair colors, and is highly robust.

Description

Person hair style portrait reconstruction method and system for 3D printing
Technical Field
The disclosure relates to the technical field of 3D printing, in particular to a person hair style portrait reconstruction method and system for 3D printing.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Figure sculpture is a classic and lofty art form with strong decorative and practical value: it takes the human figure as its subject and uses various materials to create visual, tangible artistic images, and it has been popular with the general public since ancient times. Because of the extremely high craftsmanship it demands, personalized custom sculpture has long been out of reach for ordinary families. In recent years, with the rapid development of three-dimensional human body scanning and reconstruction and the popularization of 3D printing technology, more and more individual users have encountered and are willing to try 3D-printed custom figurines. Such 3D-printed figures have an artistic expressiveness similar to sculpture, yet are faster, simpler, and more personalized to produce, making them suitable for any ordinary user.
Although scanning and reconstruction technology is by now quite mature, direct scanning reconstruction still cannot produce ideal results for person portraits, especially in the hair region, and further algorithmic processing is needed. Both academia and industry have produced much excellent work on portrait printing, but much of it reconstructs the hairstyle with only a rough model or a generalized template and cannot recover the real detailed features of the target hairstyle. In expressing the features of a portrait, the hairstyle plays an indispensable role, so hairstyle reconstruction is essential for personalized customization. There is indeed significant work on printable hairstyle models that achieves highly precise geometric detail; however, it all requires high-end hardware and substantial computing power, which limits its use by ordinary users.
The inventor finds that many mature rendering-oriented hairstyle modeling methods already exist, but they model the hairstyle as ribbon-like or linear hair strands and cannot satisfy the constraints of 3D printing. Moreover, existing manufacturing-oriented hairstyle modeling work often fails to produce acceptable results for arbitrary hairstyle colors and has many limitations. The reason is that, owing to the nature of black hair, stylization cannot achieve the desired result: black hair, especially short black hair, is highly complex, dense, and random, and photographs with little contrast, which presents great difficulties for reconstruction.
Disclosure of Invention
To overcome the defects of the prior art, the present disclosure provides a person hairstyle portrait reconstruction method and system for 3D printing, which can obtain a hairstyle model that is 3D-printable, restores the overall shape, and carries fine geometric details; it requires no strict and expensive hardware environment, complex inputs, or large computational cost; and it accepts hairstyle inputs of any style and any hair color, with strong robustness.
To achieve this purpose, the present disclosure adopts the following technical scheme:
the first aspect of the disclosure provides a person hair style portrait reconstruction method for 3D printing.
A person's hair style portrait reconstruction method for 3D printing, comprising the processes of:
acquiring a target portrait photo, wherein the target portrait photo at least comprises two half-body portraits and a hair close-up;
extracting a two-dimensional hairstyle outline from the obtained target portrait photo;
matching a rough hairstyle template according to the obtained two-dimensional hairstyle outline and a preset three-dimensional rough hairstyle template model database;
synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair;
binding the two-dimensional hair texture to the matched hairstyle template model, generating a geometric texture according to the texture color information of each vertex, and processing the obtained geometric texture;
and registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model to obtain a portrait model for 3D printing.
Further, the construction of the three-dimensional rough hairstyle template model database comprises the following steps:
summarizing various rough template hairstyles and establishing a corresponding three-dimensional template model database;
acquiring a front view and a side view of each hairstyle template, and converting them into binary images;
carrying out hairstyle-space parameterization on each hairstyle template, mapping all vertices of the hairstyle template onto a virtual unit sphere;
determining the correct texture coordinate of each hair style template by utilizing the hair style space;
each hairstyle template is divided into a plurality of parts according to the growth tendency and the structural characteristics of the hair of each part by utilizing the hairstyle space.
Further, extracting a two-dimensional hair style contour from the obtained target portrait photo, including:
sparsely sampling the portrait photo;
enhancing the contrast of the portrait photo to increase the saliency of the hairstyle region in the photo;
converting the portrait photograph to a Lab color space;
performing k-means clustering on the portrait photos, and extracting a hair region;
performing one opening operation on the clustering result, and taking the largest connected component at the center of the clustering-result image as the two-dimensional hairstyle contour.
Further, performing rough hairstyle template matching, comprising:
for each hairstyle template model in the database, computing the pixel difference and the Hu invariant-moment difference between the two-dimensional hairstyle contour and the template model's contour, summing the two differences with weights to obtain a total score, and taking the hairstyle template with the smallest total score as the best matching template.
Further, synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair, wherein the method comprises the following steps:
manually intercepting a hair sample from the input hair close-up;
calculating hairline streamlines: scanning line by line from the first row of pixels and greedily searching, for each pixel, a path to the lower boundary of the image according to the minimum gray-value difference between adjacent pixels; truncating the paths with a common threshold, so that a streamline of the image is obtained once the gray-value differences between a pixel and all of its adjacent pixels exceed the maximum difference;
synthesizing the two-dimensional hair texture with an improved Image Quilting synthesis method, limiting the block size by the maximum hairline streamline length and limiting the overlap width between blocks by the block size.
Further, binding the two-dimensional hair texture to the matched hair style template model, and generating a geometric texture according to the texture color information of each vertex, wherein the method comprises the following steps:
binding the two-dimensional hair texture to the matched template model, and enabling each vertex to have corresponding color information;
converting the two-dimensional hair texture into an HSV color space;
generating a geometric texture on the template model according to the texture color information: normalizing the lightness channel V(x) of the two-dimensional hairstyle texture and converting it into a displacement map, which defines the offset of each vertex; the displacement direction of each vertex is determined by its normal, and the geometric texture detail extending the two-dimensional texture into three dimensions is generated from the product of the displacement map and the normal direction.
Further, the processing of the acquired geometric texture includes:
the displacement graph is adjusted by using the length field, and the normal direction of each vertex is adjusted by using the direction field;
performing uniform rectangular sampling on the hairline streamline graph of the two-dimensional hair texture: for each streamline in the initial streamline graph, a rectangular bounding box of a given width is drawn, and each newly drawn rectangle undergoes collision detection against the existing rectangles; a rectangle is kept if no collision occurs at all, while when the distance between two adjacent streamlines is smaller than a preset value their rectangles collide and the later-drawn rectangle is discarded;
abstracting the hair line graph after uniform sampling, eliminating sawtooth burrs of the graph to a certain degree, and simultaneously thickening each line with a certain width;
calculating a corresponding special displacement diagram according to a gray channel of the abstracted hairline sampling diagram;
given a weight, a weighted sum of the displacement map and the special displacement map is calculated, and the calculation result is used as a new displacement map.
Further, repairing the obtained initial reconstruction model includes:
delimiting the hairstyle region of the initial reconstruction model: the hair region is delimited automatically from the color information of each vertex by clustering the texture file of the initial reconstruction model and delimiting the hairstyle region in the two-dimensional texture, which yields the corresponding 3D hairstyle region;
providing a bald-head template portrait model, specifying the spatial positions of the eyes, mouth and nose on the bald-head template, and unifying the coordinate systems of the bald-head template and the target initial model;
locating the eyes, mouth and nose on the two-dimensional texture of the initial model so as to find the spatial positions of the corresponding facial features on the three-dimensional model;
registering the bald-head template to the target initial model according to the positional relation between the facial features of the bald-head template and those of the target initial model;
removing the overlap between the bald-head template and the target initial model.
Further, registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model, wherein the registering comprises:
aligning the obtained hairstyle template carrying the geometric texture details with the target model using the ICP (Iterative Closest Point) algorithm;
after rigid registration, independently scaling the X coordinate, the Y coordinate and the Z coordinate of the hairstyle template model respectively, and adjusting the size of the model to enable the shape of the hairstyle template to preliminarily adapt to the head of the target model;
the vertex on the boundary of the hair style template model is moved to the position of its nearest neighbor point on the target model head.
A second aspect of the present disclosure provides a person's hair style portrait reconstruction system for 3D printing.
A person's hair style portrait reconstruction system for 3D printing, comprising:
a data acquisition module configured to: acquiring a target portrait photo, wherein the target portrait photo at least comprises two half-body portraits and a hair close-up;
a two-dimensional hairstyle contour extraction module configured to: extracting a two-dimensional hairstyle outline from the obtained target portrait photo;
a hair style template matching module configured to: matching a rough hairstyle template according to the obtained two-dimensional hairstyle outline and a preset three-dimensional rough hairstyle template model database;
a two-dimensional hair texture generation module configured to: synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair;
a geometric texture generation module configured to: binding the two-dimensional hair texture to the matched hairstyle template model, generating a geometric texture according to the texture color information of each vertex, and processing the obtained geometric texture;
a portrait model generation module configured to: and registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model to obtain a portrait model for 3D printing.
A third aspect of the present disclosure provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps in the person's hair style portrait reconstruction method for 3D printing as described in the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the steps in the method for reconstructing a person's hair style portrait for 3D printing according to the first aspect of the present disclosure when executing the program.
Compared with the prior art, the beneficial effect of this disclosure is:
1. The method, system, medium, or electronic device described in this disclosure can obtain a hairstyle model that is 3D-printable, restores the overall shape, and carries fine geometric details, without requiring a strict and expensive hardware environment, complex inputs, or large computational costs.
2. The method, system, medium, or electronic device described in this disclosure accepts hairstyle inputs of any style and any hair color and can output ideal results with strong robustness, and it offers a new direction for building databases of 3D-printable hairstyle models.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
Fig. 1 is a flowchart of a person hair style portrait reconstruction method for 3D printing according to embodiment 1 of the present disclosure.
Fig. 2 is a schematic diagram illustrating the division of a hair style template provided in embodiment 1 of the present disclosure.
Fig. 3 is a schematic diagram of a two-dimensional hair style contour extraction provided in embodiment 1 of the present disclosure.
Fig. 4 is a schematic diagram of a hairline flow provided in example 1 of the present disclosure.
Fig. 5 is a schematic diagram of a length field and a height field provided in embodiment 1 of the present disclosure.
Fig. 6 is a detailed control schematic provided in embodiment 1 of the present disclosure.
Fig. 7 is a schematic view of head repair provided in embodiment 1 of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
Example 1:
as shown in fig. 1 to 7, embodiment 1 of the present disclosure provides a person hair style portrait reconstruction method for 3D printing, including the following processes:
S1: constructing a three-dimensional rough hairstyle template model database;
S2: inputting the target portrait photos, comprising two half-body portraits and one hair close-up;
S3: extracting a two-dimensional hairstyle contour from the input photos;
S4: performing rough hairstyle template matching;
S5: synthesizing a two-dimensional hair texture from the input hair close-up using an improved texture synthesis method;
S6: binding the two-dimensional texture to the matched hairstyle template model and generating a geometric texture from the texture color information at each vertex;
S7: further adjusting the geometric texture by computing a length field and a direction field and controlling the level of detail;
S8: inputting the target's initial reconstruction model; this embodiment places no strict requirements on the initial model, and consumer-level scanning equipment is used;
S9: performing a preliminary repair of the hair region of the target's initial reconstruction model;
S10: registering the hairstyle template model carrying the generated geometric texture to the repaired reconstruction model to obtain a printable person portrait model.
Next, each step in the present embodiment is specifically described:
s1 mainly includes the steps of:
S1-1: summarizing 162 rough template hairstyles and manually building the corresponding three-dimensional template model database. Observation shows that most common hairstyles can be derived from a limited number of template hairstyles;
S1-2: acquiring a front view and a side view of each hairstyle template and converting them into binary images for the matching computation in a later step;
S1-3: parameterizing the hairstyle space of each hairstyle template. The scalp on which hair grows is taken to resemble a hemisphere, i.e., a virtual unit sphere can represent the entire head model; mapping all vertices of the hairstyle template onto this virtual unit sphere parameterizes the hairstyle space;
S1-4: determining the correct texture coordinates of each hairstyle template using the hairstyle space;
S1-5: dividing each hairstyle template into four parts using the hairstyle space, according to the growth tendency and structural characteristics of the hair in each part, as shown in fig. 2.
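The hairstyle-space parameterization of S1-3/S1-4 can be sketched in a few lines of numpy. In the sketch below, the centroid-based head-center estimate and the spherical-angle texture mapping are illustrative assumptions, not the patent's exact formulation.

    import numpy as np

    def parameterize_hairstyle_space(vertices, head_center=None):
        """Map hairstyle-template vertices onto a virtual unit sphere (S1-3)
        and derive texture coordinates from the spherical angles (S1-4)."""
        v = np.asarray(vertices, dtype=float)
        if head_center is None:
            head_center = v.mean(axis=0)               # crude head-center estimate
        d = v - head_center
        on_sphere = d / np.linalg.norm(d, axis=1, keepdims=True)

        # Spherical angles -> [0,1]^2 texture coordinates (one plausible convention).
        theta = np.arctan2(on_sphere[:, 0], on_sphere[:, 2])   # azimuth in (-pi, pi]
        phi = np.arccos(np.clip(on_sphere[:, 1], -1.0, 1.0))   # polar angle in [0, pi]
        uv = np.stack([(theta + np.pi) / (2 * np.pi), phi / np.pi], axis=1)
        return on_sphere, uv

The same spherical coordinates also give a natural basis for the four-part division of S1-5: for example, thresholding the polar angle separates the crown from the sides and the back of the head.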
S2 mainly includes the steps of:
Photo capture is performed with an ordinary SLR camera. The two portrait photos are a front view and a rear-side view respectively; both are half-length images and fully contain the hairstyle.
S3 mainly includes the steps of:
S3-1: sparsely sampling the portrait photo, reducing its pixel count by orders of magnitude while ensuring that the main information is not lost, as shown in fig. 3(a);
S3-2: enhancing the contrast of the portrait photo and increasing the saliency of the hairstyle region, as shown in fig. 3(b);
S3-3: converting the portrait photo to the Lab color space. The Lab gamut is larger than those of computer displays and human vision, which suits clustering that relies on color as the main information;
S3-4: performing k-means clustering on the portrait photo and extracting the hair region;
S3-5: performing one opening operation on the clustering result. The opening operation cuts the fine adhesions between regions in the clustering result, yielding a clean and complete hair region. The largest connected component at the center of the clustering-result image is taken as the hair region, as shown in fig. 3(d).
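The S3 pipeline maps directly onto standard OpenCV calls. The sketch below assumes the darkest Lab cluster is the hair and keeps only the largest connected component; the cluster count, the CLAHE contrast boost, and the downsampling factor are illustrative choices, not the patent's parameters.

    import cv2
    import numpy as np

    def extract_hair_mask(photo_bgr, k=4):
        """S3 sketch: downsample, boost contrast, cluster in Lab space,
        open the hair mask, keep the largest connected component."""
        img = cv2.resize(photo_bgr, None, fx=0.25, fy=0.25)        # S3-1 sparse sampling
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)                 # S3-3 Lab space
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0).apply(l)                # S3-2 contrast boost
        data = cv2.merge([l, a, b]).reshape(-1, 3).astype(np.float32)

        # S3-4: k-means clustering on Lab pixels; assume the darkest cluster is hair.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
        mask = (labels.reshape(l.shape) == int(np.argmin(centers[:, 0])))
        mask = mask.astype(np.uint8) * 255

        # S3-5: one opening, then the largest connected component
        # (the patent additionally restricts it to the image centre).
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        n, cc, stats, _ = cv2.connectedComponentsWithStats(mask)
        best = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))     # label 0 is background
        return (cc == best).astype(np.uint8) * 255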
S4 mainly includes the steps of:
For each hairstyle template model in the database, the pixel difference and the Hu invariant-moment difference between the target hairstyle contour and the template model's contour are computed, and the two differences are summed with weights to obtain a total score. This process is applied to both the front and the rear-side contours. The hairstyle template with the smallest total score is taken as the best matching template.
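A hedged sketch of this scoring loop, using OpenCV's Hu-moment shape matcher; the weights and the silhouette resizing are illustrative assumptions, and in the full method the front and rear-side scores would be accumulated per template before choosing the minimum.

    import cv2
    import numpy as np

    def best_matching_template(target_mask, template_masks, w_pix=1.0, w_hu=100.0):
        """S4 sketch: weighted pixel difference + Hu-moment difference;
        the smallest total score wins."""
        best_idx, best_score = -1, np.inf
        for i, tmpl in enumerate(template_masks):
            tmpl = cv2.resize(tmpl, target_mask.shape[::-1])       # align resolutions
            pix = np.count_nonzero(cv2.bitwise_xor(target_mask, tmpl)) / target_mask.size
            hu = cv2.matchShapes(target_mask, tmpl, cv2.CONTOURS_MATCH_I1, 0.0)
            score = w_pix * pix + w_hu * hu                        # weighted sum
            if score < best_score:
                best_idx, best_score = i, score
        return best_idx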
S5 mainly includes the steps of:
S5-1: manually cropping a hair sample from the input hair close-up;
S5-2: calculating the hairline streamlines, as shown in fig. 4.
In this embodiment, the hairline streamlines guide the two-dimensional hair texture synthesis. The hair close-up sample is scanned line by line from the first row of pixels, and for each pixel a path toward the lower boundary of the image is searched greedily according to the minimum gray-value difference between adjacent pixels.
The paths are then truncated with a common threshold: a streamline of the image is obtained, approximately, once the gray-value differences between a pixel and all of its adjacent pixels exceed the maximum difference γ. To avoid crossings between hairlines, the method stipulates that each pixel may appear in only one streamline.
The method presupposes that the overall hair direction in every input target hair close-up is from top to bottom. A threshold τ is then defined to filter out streamlines whose length does not meet the minimum requirement. In this embodiment, γ = 10 and τ = 10 are chosen from general experience;
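The greedy tracing of S5-2 is straightforward to express directly. The sketch below assumes a grayscale close-up whose hair runs top to bottom, steps to one of the three neighbours in the next row, and applies the γ and τ thresholds as described, with each pixel joining at most one streamline.

    import numpy as np

    def trace_streamlines(gray, gamma=10, tau=10):
        """S5-2 sketch: per-pixel greedy paths toward the lower image
        boundary, truncated when no neighbour is similar enough."""
        h, w = gray.shape
        used = np.zeros((h, w), dtype=bool)
        streamlines = []
        for y0 in range(h):
            for x0 in range(w):
                if used[y0, x0]:
                    continue
                path, x = [(y0, x0)], x0
                used[y0, x0] = True
                for y in range(y0 + 1, h):
                    cand = [(abs(int(gray[y, nx]) - int(gray[y - 1, x])), nx)
                            for nx in (x - 1, x, x + 1)
                            if 0 <= nx < w and not used[y, nx]]
                    if not cand or min(cand)[0] > gamma:   # truncate the path
                        break
                    _, x = min(cand)
                    used[y, x] = True
                    path.append((y, x))
                if len(path) >= tau:                        # drop too-short streamlines
                    streamlines.append(path)
        return streamlines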
S5-3: synthesizing the two-dimensional hair texture.
In this embodiment, the Image Quilting synthesis method is improved to make it more suitable for hair texture synthesis.
The basic steps of Image Quilting are as follows: a small block is cut from the existing texture sample and placed so that it overlaps the previously placed block, with the overlapping neighbors required to satisfy certain overlap constraints; an error surface between the new block and the old block is then computed in the overlap region, and a minimum-cost path through the error surface is found and taken as the boundary between the two blocks. These synthesis steps are repeated until the whole texture is synthesized. The method has two parameters to control: the block size S and the overlap width W between blocks. Textures with different properties need different parameter settings to obtain a proper synthesis result. This embodiment improves the method using the hairline streamlines, so that it sets the parameters automatically for different hairstyle samples and synthesizes a two-dimensional texture that matches the characteristics of the hairstyle.
The maximum hairline streamline length β is used as the adaptive parameter of the improved Image Quilting and helps the method find an appropriate block size. When the block size is kept below the hairline length, the synthesized hairstyle texture preserves the length characteristic of the real target hairstyle; for short hair in particular, a generic texture synthesis algorithm easily destroys the short-hair length characteristic. In addition, the overlap width W between blocks is set empirically in proportion to the block size S.
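Two pieces of S5-3 lend themselves to short sketches: the adaptive parameter choice and the minimum-cost seam at the heart of Image Quilting. The 1/6 overlap ratio below follows the original Image Quilting paper and stands in for the patent's own (unrendered) formula; the default block size is likewise an assumption.

    import numpy as np

    def quilting_params(streamlines, default_block=48):
        """S5-3 sketch: cap the block size S at the longest streamline
        length beta; tie the overlap width W to S."""
        beta = max(len(s) for s in streamlines)
        S = min(default_block, beta)          # preserve the hair-length characteristic
        W = max(1, S // 6)                    # assumed ratio (Efros-Freeman default)
        return S, W

    def min_cost_seam(err):
        """Minimum-cost vertical path through an overlap-error surface
        (h x w), used as the stitching boundary between blocks."""
        h, w = err.shape
        cost = err.astype(float).copy()
        for y in range(1, h):                 # dynamic programming, top to bottom
            for x in range(w):
                lo, hi = max(0, x - 1), min(w, x + 2)
                cost[y, x] += cost[y - 1, lo:hi].min()
        seam = [int(np.argmin(cost[-1]))]
        for y in range(h - 2, -1, -1):        # backtrack the cheapest path
            x = seam[-1]
            lo, hi = max(0, x - 1), min(w, x + 2)
            seam.append(lo + int(np.argmin(cost[y, lo:hi])))
        return seam[::-1]                     # one column index per row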
S6 mainly includes the steps of:
S6-1: binding the two-dimensional hair texture to the matched template model so that each vertex carries corresponding color information;
S6-2: converting the two-dimensional hair texture into the HSV color space;
S6-3: generating a geometric texture on the template model from the texture color information. The lightness channel V(x) of the two-dimensional hairstyle texture is normalized to [0,1] and converted into a displacement map D(x) with D(x) = φ·V(TC(x)),
where φ is an empirical parameter chosen on artistic-aesthetic grounds to adjust the strength of the detail, typically in [0.25, 0.75], and TC(x) is the texture coordinate of each vertex x on the hairstyle model.
The displacement map D(x) defines the offset of each vertex, and the displacement direction is determined by the vertex normal N(x). The geometric texture detail extending the two-dimensional texture into three dimensions is generated according to D(x)·N(x).
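A minimal sketch of S6-3, assuming per-vertex texture coordinates in [0,1]² and a nearest-neighbour texture lookup (the patent does not specify the sampling scheme):

    import numpy as np

    def displace_vertices(vertices, normals, uv, v_channel, phi=0.5):
        """S6-3 sketch: D(x) = phi * V(TC(x)); move each vertex along its
        normal by the displacement sampled from the normalised V channel."""
        V = v_channel.astype(float)
        V = (V - V.min()) / max(V.max() - V.min(), 1e-8)   # normalise V to [0,1]
        h, w = V.shape
        px = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
        py = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
        D = phi * V[py, px]                                # per-vertex offset D(x)
        return vertices + D[:, None] * normals             # offset along N(x)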
S7 mainly includes the steps of:
S7-1: enhancing the growth trend and the saliency of the geometric detail expression through a length field and a direction field: the length field adjusts D(x), while the direction field acts on N(x).
This embodiment sets the two fields according to principles such as the shape features of the hairstyle template model, the overall trend of the two-dimensional hairstyle texture, and experience. Taking the hairstyle shown in fig. 5 as an example, the hairstyle space divides it into four parts: the top of the head, the back of the head, and the left and right sides. Life experience tells us that in this hairstyle the hair on the sides and on the back of the head near the neck is significantly shorter than elsewhere, while the hair on the top of the head is almost always the longest. For such a hairstyle, the length field is therefore constructed to stay at 1 on the top of the head and decrease gradually toward the two sides and the back of the head and neck.
The construction of the direction field, on the other hand, differs between hairstyles. By common knowledge, in almost every hairstyle the hair on the two sides and the back of the head droops markedly under gravity, so the direction field gives those parts a downward growth direction. The crown, however, varies with the style: for a flat-top hairstyle, the top of the head generally grows upward and slightly forward; for a slicked-back hairstyle, the top is combed flat toward the back, so its direction points backward; and for a three-seven (3:7) side-parted hairstyle, the hair on the top of the head is combed to one side;
S7-2: uniformly sampling the hairline streamline graph of the two-dimensional hair texture. To adapt to different 3D-printing precisions, this embodiment adds the hairline streamline graph to the displacement-map computation, thereby controlling the level of detail.
A special rectangular sampling method is proposed here. For each streamline in the initial streamline graph, the method draws a rectangular bounding box of a given width w, and each newly drawn rectangle undergoes collision detection against the existing rectangles. The rectangle is kept if no collision occurs at all. When two adjacent streamlines are too close, their rectangles inevitably collide and the later-drawn rectangle is discarded. The kept rectangles thus sample the hairlines uniformly;
S7-3: abstracting the uniformly sampled streamline graph. This removes the jagged burrs of the graph to a certain degree and thickens each streamline by a certain width, which eases its addition into the displacement-map computation;
S7-4: computing the corresponding special displacement map D0(x) from the gray channel of the abstracted streamline sampling graph;
S7-5: given a weight α, computing the weighted sum of D(x) and D0(x); the result Dnew(x) = αD(x) + (1 − α)D0(x) serves as the new displacement map.
The smaller the rectangle width w, the finer the geometric detail, and vice versa; the closer the weight α is to 1, the smaller the influence of the streamline graph, and vice versa, as shown in fig. 6.
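The level-of-detail machinery of S7-2 and S7-5 reduces to a collision-filtered sampling pass and a per-pixel blend. The sketch below simplifies the patent's oriented bounding rectangles to axis-aligned boxes padded by w/2; with Dnew = αD + (1 − α)D0, a smaller α strengthens the streamline-derived map D0.

    import numpy as np

    def sample_streamlines(streamlines, width):
        """S7-2 sketch (axis-aligned simplification): keep a streamline only
        if its padded bounding box collides with no previously kept box."""
        kept, boxes = [], []
        for s in streamlines:
            ys, xs = zip(*s)
            box = (min(ys) - width / 2, min(xs) - width / 2,
                   max(ys) + width / 2, max(xs) + width / 2)
            disjoint = all(box[2] < b[0] or b[2] < box[0] or   # no y-overlap
                           box[3] < b[1] or b[3] < box[1]      # no x-overlap
                           for b in boxes)
            if disjoint:
                kept.append(s)
                boxes.append(box)
        return kept

    def blend_displacement(D, D0, alpha=0.6):
        """S7-5: Dnew(x) = alpha * D(x) + (1 - alpha) * D0(x)."""
        return alpha * D + (1.0 - alpha) * D0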
S8 mainly includes the steps of:
The target's initial reconstruction model is obtained by scanning; only a consumer-level scanning device is required, and the reconstructed model need only be a closed manifold.
S9 mainly includes the steps of:
S9-1: delimiting the hairstyle region of the initial reconstruction model. As shown in fig. 7(a), the scanning reconstruction device stores the color information of each vertex when acquiring point-cloud data, so the hair region can be delimited automatically from this color information. As when extracting the hairstyle region from the portrait photo, the texture file of the initial reconstruction model is clustered and the hairstyle region is delimited in the two-dimensional texture; the corresponding 3D hairstyle region follows naturally;
S9-2: providing a bald-head template portrait model, specifying the spatial positions of the eyes, mouth and nose on the bald-head template, and unifying the coordinate systems of the bald-head template and the target initial model;
S9-3: locating the eyes, mouth and nose on the two-dimensional texture of the initial model using a general two-dimensional face detection technique, so as to find the spatial positions of the corresponding facial features on the three-dimensional model;
S9-4: registering the bald-head template to the target initial model according to the positional relation between the facial features of the two models;
S9-5: removing the overlap between the bald-head template and the target initial model. After the bald-head template is aligned with the target model, its facial surface may overlap the target model and should be removed. Since the previous steps delimited the hairstyle region of the target model, the set of hairstyle boundary points on the target model is known. Because the bald-head template is aligned with the target model, the corresponding vertices on the template can be computed by minimizing vertex distances, which yields the hairstyle boundary point set of the bald-head template. The boundary formed by these vertices divides the bald-head template into a front part and a rear part: the rear part forms a complete head together with the target model, while the front part overlaps the target model's face. The front part is deleted, the rear part is kept and merged with the target model, and the result is the target model re-completed after its initial hairstyle region has been removed, as shown in fig. 7(c).
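Once the three facial-feature positions are known on both models, the registration of S9-4 is a similarity fit (scale, rotation, translation). A minimal Umeyama-style sketch, assuming corresponding N×3 point arrays (eyes, nose, mouth) for the bald-head template and the target:

    import numpy as np

    def similarity_register(src_pts, dst_pts):
        """S9-4 sketch: least-squares scale/rotation/translation mapping the
        bald-head template's feature points onto the target's."""
        src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        Sc, Dc = src - mu_s, dst - mu_d
        U, sig, Vt = np.linalg.svd(Dc.T @ Sc)          # cross-covariance SVD
        sgn = np.sign(np.linalg.det(U @ Vt))           # guard against reflection
        R = U @ np.diag([1.0, 1.0, sgn]) @ Vt
        scale = (sig * [1.0, 1.0, sgn]).sum() / (Sc ** 2).sum()
        t = mu_d - scale * (R @ mu_s)
        return scale, R, t                             # maps x to scale*R@x + t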
S10 mainly includes the steps of:
The hairstyle template carrying the geometric texture details obtained in the previous steps is aligned with the target model using the ICP (Iterative Closest Point) algorithm. After rigid registration, the X, Y and Z coordinates of the hairstyle template model are scaled independently to adjust the model's size, so that the shape of the hairstyle template roughly fits the head of the target model. Finally, each vertex on the boundary of the hairstyle template model is moved to its nearest neighbor on the head of the target model, so that the hairstyle template couples tightly to the target model, as shown in fig. 7(d).
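With rigid ICP assumed already applied (any standard implementation will do), the remaining two steps of S10 can be sketched as follows; the bounding-box-based per-axis scale is an assumption about how the independent X/Y/Z scaling is chosen.

    import numpy as np
    from scipy.spatial import cKDTree

    def fit_hair_to_head(hair_verts, boundary_idx, head_verts):
        """S10 sketch: per-axis scaling about the centroid, then snap the
        hairstyle boundary vertices onto the nearest head vertices."""
        hv = np.asarray(hair_verts, float).copy()
        head = np.asarray(head_verts, float)
        c = hv.mean(0)
        scale = (head.max(0) - head.min(0)) / (hv.max(0) - hv.min(0))
        hv = (hv - c) * scale + c                      # independent X/Y/Z scaling
        tree = cKDTree(head)                           # nearest-neighbour lookup
        _, nn = tree.query(hv[boundary_idx])
        hv[boundary_idx] = head[nn]                    # tight coupling at the boundary
        return hv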
Example 2:
the embodiment 2 of the present disclosure provides a person hairstyle portrait reconstruction system for 3D printing, including:
a data acquisition module configured to: acquiring a target portrait photo, wherein the target portrait photo at least comprises two half-body portraits and a hair close-up;
a two-dimensional hairstyle contour extraction module configured to: extracting a two-dimensional hairstyle outline from the obtained target portrait photo;
a hair style template matching module configured to: matching a rough hairstyle template according to the obtained two-dimensional hairstyle outline and a preset three-dimensional rough hairstyle template model database;
a two-dimensional hair texture generation module configured to: synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair;
a geometric texture generation module configured to: binding the two-dimensional hair texture to the matched hairstyle template model, generating a geometric texture according to the texture color information of each vertex, and processing the obtained geometric texture;
a portrait model generation module configured to: and registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model to obtain a portrait model for 3D printing.
The working method of the system is the same as the person hair style portrait reconstruction method for 3D printing provided in embodiment 1, and details are not repeated here.
Example 3:
the embodiment 3 of the present disclosure provides a computer-readable storage medium, on which a program is stored, which when executed by a processor, implements the steps in the person hair style portrait reconstruction method for 3D printing according to the embodiment 1 of the present disclosure.
Example 4:
the embodiment 4 of the present disclosure provides an electronic device, which includes a memory, a processor, and a program stored in the memory and executable on the processor, and the processor executes the program to implement the steps in the person hair style portrait reconstruction method for 3D printing according to embodiment 1 of the present disclosure.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. A person hair style portrait reconstruction method for 3D printing is characterized by comprising the following steps: the method comprises the following steps:
acquiring a target portrait photo, wherein the target portrait photo at least comprises two half-body portraits and a hair close-up;
extracting a two-dimensional hairstyle outline from the obtained target portrait photo;
matching a rough hairstyle template according to the obtained two-dimensional hairstyle outline and a preset three-dimensional rough hairstyle template model database;
synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair;
binding the two-dimensional hair texture to the matched hairstyle template model, generating a geometric texture according to the texture color information of each vertex, and processing the obtained geometric texture;
registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model to obtain a figure portrait model for 3D printing;
processing the acquired geometric texture, comprising:
the displacement graph is adjusted by using the length field, and the normal direction of each vertex is adjusted by using the direction field;
performing uniform rectangular sampling on the hairline streamline graph of the two-dimensional hair texture: for each streamline in the initial streamline graph, a rectangular bounding box of a given width is drawn, and each newly drawn rectangle undergoes collision detection against the existing rectangles; a rectangle is kept if no collision occurs at all, while when the distance between two adjacent streamlines is smaller than a preset value their rectangles collide and the later-drawn rectangle is discarded;
abstracting the hair line graph after uniform sampling, eliminating sawtooth burrs of the graph to a certain degree, and simultaneously thickening each line with a certain width;
calculating a corresponding special displacement diagram according to a gray channel of the abstracted hairline sampling diagram;
given a weight, calculating the weighted sum of the displacement graph and the special displacement graph, wherein the calculation result is used as a new displacement graph;
scanning to obtain an initial reconstruction model of a target, wherein only a consumer-level scanning device is needed, and the reconstruction model meets the requirement of closed manifold;
repairing the obtained initial reconstruction model, including:
delimiting the hairstyle region of the initial reconstruction model: the hair region is delimited automatically from the color information of each vertex by clustering the texture file of the initial reconstruction model and delimiting the hairstyle region in the two-dimensional texture, which yields the corresponding 3D hairstyle region;
providing a bald-head template portrait model, specifying the spatial positions of the eyes, mouth and nose on the bald-head template, and unifying the coordinate systems of the bald-head template and the target initial model;
locating the eyes, mouth and nose on the two-dimensional texture of the initial model so as to find the spatial positions of the corresponding facial features on the three-dimensional model;
registering the bald-head template to the target initial model according to the positional relation between the facial features of the bald-head template and those of the target initial model;
removing the overlap between the bald-head template and the target initial model.
2. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
the construction of the three-dimensional rough hairstyle template model database comprises the following steps:
summarizing various rough template hairstyles and establishing a corresponding three-dimensional template model database;
acquiring a front view and a side view of each hairstyle template, and converting them into binary images;
carrying out hairstyle-space parameterization on each hairstyle template, mapping all vertices of the hairstyle template onto a virtual unit sphere;
determining the correct texture coordinate of each hair style template by utilizing the hair style space;
each hairstyle template is divided into a plurality of parts according to the growth tendency and the structural characteristics of the hair of each part by utilizing the hairstyle space.
3. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
extracting a two-dimensional hair style outline from the obtained target portrait photo, wherein the two-dimensional hair style outline comprises the following steps:
sparsely sampling the portrait photo;
enhancing the contrast of the portrait photo to increase the saliency of the hairstyle region in the photo;
converting the portrait photograph to a Lab color space;
performing k-means clustering on the portrait photos, and extracting a hair region;
performing one opening operation on the clustering result, and taking the largest connected component at the center of the clustering-result image as the two-dimensional hairstyle contour.
4. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
performing rough hairstyle template matching, comprising:
for each hairstyle template model in the database, computing the pixel difference and the Hu invariant-moment difference between the two-dimensional hairstyle contour and the template model's contour, summing the two differences with weights to obtain a total score, and taking the hairstyle template with the smallest total score as the best matching template.
5. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair, wherein the method comprises the following steps:
manually intercepting a hair sample from the input hair close-up;
calculating hairline streamlines: scanning line by line from the first row of pixels and greedily searching, for each pixel, a path to the lower boundary of the image according to the minimum gray-value difference between adjacent pixels; truncating the paths with a common threshold, so that a streamline of the image is obtained once the gray-value differences between a pixel and all of its adjacent pixels exceed the maximum difference;
synthesizing the two-dimensional hair texture with an improved Image Quilting synthesis method, limiting the block size by the maximum hairline streamline length and limiting the overlap width between blocks by the block size.
6. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
binding the two-dimensional hair texture to the matched hair style template model, and generating a geometric texture according to the texture color information of each vertex, wherein the geometric texture comprises the following steps:
binding the two-dimensional hair texture to the matched template model, and enabling each vertex to have corresponding color information;
converting the two-dimensional hair texture into an HSV color space;
generating a geometric texture on the template model according to the texture color information: normalizing the lightness channel V(x) of the two-dimensional hairstyle texture and converting it into a displacement map, which defines the offset of each vertex; the displacement direction of each vertex is determined by its normal, and the geometric texture detail extending the two-dimensional texture into three dimensions is generated from the product of the displacement map and the normal direction.
7. The method for 3D printed person's hair style portrait reconstruction of claim 1, wherein:
registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model, wherein the registration comprises the following steps:
aligning the obtained hairstyle template carrying the geometric texture details with the target model using the ICP (Iterative Closest Point) algorithm;
after rigid registration, independently scaling the X coordinate, the Y coordinate and the Z coordinate of the hairstyle template model respectively, and adjusting the size of the model to enable the shape of the hairstyle template to preliminarily adapt to the head of the target model;
the vertex on the boundary of the hair style template model is moved to the position of its nearest neighbor point on the target model head.
8. A person's hair style portrait reconstruction system for 3D printing, characterized by: the method comprises the following steps:
a data acquisition module configured to: acquiring a target portrait photo, wherein the target portrait photo at least comprises two half-body portraits and a hair close-up;
a two-dimensional hairstyle contour extraction module configured to: extracting a two-dimensional hairstyle outline from the obtained target portrait photo;
a hair style template matching module configured to: matching a rough hairstyle template according to the obtained two-dimensional hairstyle outline and a preset three-dimensional rough hairstyle template model database;
a two-dimensional hair texture generation module configured to: synthesizing a two-dimensional hair texture by using a texture synthesis method according to the obtained close-up of the hair;
a geometric texture generation module configured to: binding the two-dimensional hair texture to the matched hairstyle template model, generating a geometric texture according to the texture color information of each vertex, and processing the obtained geometric texture;
a portrait model generation module configured to: registering the hairstyle template model with the generated geometric texture to the repaired initial reconstruction model to obtain a figure portrait model for 3D printing;
processing the acquired geometric texture, comprising:
the displacement graph is adjusted by using the length field, and the normal direction of each vertex is adjusted by using the direction field;
performing uniform rectangular sampling on the hairline streamline graph of the two-dimensional hair texture: for each streamline in the initial streamline graph, a rectangular bounding box of a given width is drawn, and each newly drawn rectangle undergoes collision detection against the existing rectangles; a rectangle is kept if no collision occurs at all, while when the distance between two adjacent streamlines is smaller than a preset value their rectangles collide and the later-drawn rectangle is discarded;
abstracting the hair line graph after uniform sampling, eliminating sawtooth burrs of the graph to a certain degree, and simultaneously thickening each line with a certain width;
calculating a corresponding special displacement diagram according to a gray channel of the abstracted hairline sampling diagram;
given a weight, calculating the weighted sum of the displacement graph and the special displacement graph, wherein the calculation result is used as a new displacement graph;
scanning to obtain an initial reconstruction model of a target, wherein only a consumer-level scanning device is needed, and the reconstruction model meets the requirement of closed manifold;
repairing the obtained initial reconstruction model, including:
delimiting the hairstyle region of the initial reconstruction model: the hair region is delimited automatically from the color information of each vertex by clustering the texture file of the initial reconstruction model and delimiting the hairstyle region in the two-dimensional texture, which yields the corresponding 3D hairstyle region;
providing a bald-head template portrait model, specifying the spatial positions of the eyes, mouth and nose on the bald-head template, and unifying the coordinate systems of the bald-head template and the target initial model;
locating the eyes, mouth and nose on the two-dimensional texture of the initial model so as to find the spatial positions of the corresponding facial features on the three-dimensional model;
registering the bald-head template to the target initial model according to the positional relation between the facial features of the bald-head template and those of the target initial model;
removing the overlap between the bald-head template and the target initial model.
9. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the steps of the method for 3D printed person hair style portrait reconstruction according to any one of claims 1-7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps in the method for 3D printed person hair style portrait reconstruction as claimed in any one of claims 1-7.
CN202110558375.XA 2021-05-21 2021-05-21 Person hair style portrait reconstruction method and system for 3D printing Active CN113269822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110558375.XA CN113269822B (en) 2021-05-21 2021-05-21 Person hair style portrait reconstruction method and system for 3D printing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110558375.XA CN113269822B (en) 2021-05-21 2021-05-21 Person hair style portrait reconstruction method and system for 3D printing

Publications (2)

Publication Number Publication Date
CN113269822A CN113269822A (en) 2021-08-17
CN113269822B true CN113269822B (en) 2022-04-01

Family

ID=77232437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110558375.XA Active CN113269822B (en) 2021-05-21 2021-05-21 Person hair style portrait reconstruction method and system for 3D printing

Country Status (1)

Country Link
CN (1) CN113269822B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113713387A (en) * 2021-08-27 2021-11-30 网易(杭州)网络有限公司 Virtual curling model rendering method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN106952336A (en) * 2017-03-13 2017-07-14 武汉山骁科技有限公司 A kind of mankind's three-dimensional head portrait production method for protecting feature
CN107924579A (en) * 2015-08-14 2018-04-17 麦特尔有限公司 The method for generating personalization 3D head models or 3D body models
WO2020063527A1 (en) * 2018-09-30 2020-04-02 叠境数字科技(上海)有限公司 Human hairstyle generation method based on multi-feature retrieval and deformation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9679192B2 (en) * 2015-04-24 2017-06-13 Adobe Systems Incorporated 3-dimensional portrait reconstruction from a single photo
CN108463823B (en) * 2016-11-24 2021-06-01 荣耀终端有限公司 Reconstruction method and device of user hair model and terminal
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
CN108629834B (en) * 2018-05-09 2020-04-28 华南理工大学 Three-dimensional hair reconstruction method based on single picture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN107924579A (en) * 2015-08-14 2018-04-17 麦特尔有限公司 The method for generating personalization 3D head models or 3D body models
CN106952336A (en) * 2017-03-13 2017-07-14 武汉山骁科技有限公司 A kind of mankind's three-dimensional head portrait production method for protecting feature
WO2020063527A1 (en) * 2018-09-30 2020-04-02 叠境数字科技(上海)有限公司 Human hairstyle generation method based on multi-feature retrieval and deformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A virtual 3D hair reconstruction method from a 2D picture;Weng Zufeng 等;《Journal of Computers》;20160430;第27卷(第1期);第1-7页 *
A survey of image-based hair modeling; Bao Yongtang et al.; Journal of Computer Research and Development; 2018-12-31; vol. 55, no. 11; pp. 2543-2556 *

Also Published As

Publication number Publication date
CN113269822A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN108765550B (en) Three-dimensional face reconstruction method based on single picture
KR101635730B1 (en) Apparatus and method for generating montage, recording medium for performing the method
CN103208133B (en) The method of adjustment that in a kind of image, face is fat or thin
DiPaola Extending the range of facial types
Shen et al. Deepsketchhair: Deep sketch-based 3d hair modeling
CN106652015B (en) Virtual character head portrait generation method and device
US11587288B2 (en) Methods and systems for constructing facial position map
US11562536B2 (en) Methods and systems for personalized 3D head model deformation
CN106652037B (en) Face mapping processing method and device
JP7462120B2 (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
WO2021140510A2 (en) Large-scale generation of photorealistic 3d models
de Juan et al. Re-using traditional animation: methods for semi-automatic segmentation and inbetweening
CN113269822B (en) Person hair style portrait reconstruction method and system for 3D printing
CN117157673A (en) Method and system for forming personalized 3D head and face models
US11769309B2 (en) Method and system of rendering a 3D image for automated facial morphing with a learned generic head model
CN116433812B (en) Method and device for generating virtual character by using 2D face picture
CN116740281A (en) Three-dimensional head model generation method, three-dimensional head model generation device, electronic equipment and storage medium
Li et al. Computer-aided 3D human modeling for portrait-based product development using point-and curve-based deformation
He et al. Data-driven 3D human head reconstruction
CN115936796A (en) Virtual makeup changing method, system, equipment and storage medium
Zhang et al. Neural modeling of portrait bas-relief from a single photograph
Yanghua Crypko-a new workflow for anime character creation
TWI525585B (en) An image processing system and method
CN117688235A (en) Hairstyle recommendation method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant