CN112419487B - Three-dimensional hair reconstruction method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112419487B
CN112419487B (application number CN202011413467.0A)
Authority
CN
China
Prior art keywords
hair
dimensional
image
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011413467.0A
Other languages
Chinese (zh)
Other versions
CN112419487A (en)
Inventor
郑彦波
宋新慧
袁燚
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202011413467.0A
Publication of CN112419487A
Application granted
Publication of CN112419487B
Legal status: Active


Classifications

    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N 3/045 Combinations of networks
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/13 Edge detection
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y02T 10/40 Engine management systems


Abstract

The application provides a three-dimensional hair reconstruction method, a device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction map; constructing initial three-dimensional hair data according to the hair direction map and a preset three-dimensional target model; optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data; and rendering the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model. The target three-dimensional hair data obtained by the method is more accurate and better conforms to the growth pattern of real hair.

Description

Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for three-dimensional reconstruction of hair, an electronic device, and a storage medium.
Background
Modeling of virtual three-dimensional characters and animals is a very important part of games, science-fiction movies, VR, and the like, and high-precision three-dimensional modeling of human and animal hair is indispensable to their realism. Constructing a realistic three-dimensional hair model requires precise identification of the hair in an image.
In the related art, three-dimensional hair reconstruction is performed by training a deep neural network model on paired data of images and three-dimensional hair. After the model is trained, a two-dimensional image is input, and the model outputs three-dimensional hair data.
Because hair differs greatly between individuals, training samples cannot cover all subjects. Directly outputting a three-dimensional hair reconstruction result through the deep neural network model in this way therefore yields low accuracy.
Disclosure of Invention
The embodiment of the application provides a three-dimensional hair reconstruction method which is used for improving the accuracy of three-dimensional hair reconstruction.
The embodiment of the application provides a three-dimensional hair reconstruction method, which comprises the following steps:
acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction map;
constructing initial three-dimensional hair data according to the hair direction map and a preset three-dimensional target model;
optimizing the hair shape of the initial three-dimensional hair data by a hair generation model to obtain target three-dimensional hair data;
wherein the target three-dimensional hair data can be rendered into a two-dimensional hair image by differentiable rendering, and the difference between the two-dimensional hair image and the original hair image is minimized by optimizing the hair generation model.
In an embodiment, the acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction map includes:
performing hair edge detection on a target image to obtain the original hair image;
and filtering the original hair image by using a linear filter to obtain the hair directions at different positions of the original hair image, so as to form the hair direction map.
In an embodiment, the initial three-dimensional hair data comprises: three-dimensional position coordinates of a plurality of points corresponding to each virtual hair; and the constructing initial three-dimensional hair data according to the hair direction map and a preset three-dimensional target model comprises:
extending from preset hair-root positions of the three-dimensional target model according to the hair directions at different positions indicated by the hair direction map, to obtain the three-dimensional position coordinates of the plurality of points corresponding to each virtual hair.
In an embodiment, said optimizing the hair shape of said initial three-dimensional hair data by means of a hair generation model, obtaining target three-dimensional hair data, comprises:
for each virtual hair, taking three-dimensional position coordinates of a plurality of points corresponding to the virtual hair as input of the hair generation model, and obtaining the optimized three-dimensional position coordinates of a plurality of points output by the hair generation model;
and obtaining the target three-dimensional hair data according to the three-dimensional position coordinates of the plurality of points after each virtual hair is optimized.
In an embodiment, taking the three-dimensional position coordinates of the plurality of points as input to the hair-generation model, obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair-generation model comprises:
inputting the three-dimensional position coordinates of the points into an encoding module of the hair generation model, and outputting a hair characteristic vector;
inputting the hair characteristic vector into a decoding module of the hair generation model, and outputting three-dimensional position coordinates of the optimized multiple points.
In an embodiment, before obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair-generation model using the three-dimensional position coordinates of the plurality of points as input to the hair-generation model, the method further comprises:
acquiring position coordinates of a plurality of points belonging to the same real hair;
and performing machine learning by utilizing the position coordinates of the points belonging to the same real hair, and training to obtain the hair generation model.
In an embodiment, the rendering of the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering comprises:
constructing a virtual camera facing the three-dimensional target model;
and projecting the target three-dimensional hair data to a two-dimensional plane from the viewpoint of the virtual camera to form the two-dimensional hair image.
In an embodiment, minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model comprises:
calculating a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image;
calculating a second difference between a hair direction map of the two-dimensional hair image and the hair direction map of the original hair image;
and iteratively optimizing the hair generation model to minimize the sum of the first difference and the second difference.
The embodiment of the application also provides a three-dimensional hair reconstruction device, which comprises:
the direction detection module is used for acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction map;
the model construction module is used for constructing initial three-dimensional hair data according to the hair direction map and a preset three-dimensional target model;
the hair optimization module is used for optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
and the back-propagation module is used for rendering the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering, and optimizing the hair generation model to minimize the difference between the two-dimensional hair image and the original hair image.
The embodiment of the application also provides electronic equipment, which comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the hair three-dimensional reconstruction method described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program executable by a processor to perform the hair three-dimensional reconstruction method described above.
According to the technical scheme provided by the embodiments of the application, the hair direction of the original hair image is detected and initial three-dimensional hair data is constructed based on that direction; the hair shape of the initial three-dimensional hair data is then optimized through the hair generation model to obtain target three-dimensional hair data; the target three-dimensional hair data can be rendered into a two-dimensional hair image by differentiable rendering, and the difference between the two-dimensional hair image and the original hair image is minimized by optimizing the hair generation model. The resulting target three-dimensional hair data is more accurate and conforms to the growth pattern of real hair.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a flow chart of a three-dimensional hair reconstruction method according to an embodiment of the present application;
FIG. 3 is a comparison of effects before and after processing by the hair generation model according to an embodiment of the present application;
FIG. 4 is a detailed flowchart of step S340 in the embodiment corresponding to FIG. 2;
FIG. 5 is a schematic illustration of hair direction optimization provided by an embodiment of the present application;
FIG. 6 is a schematic illustration of a three-dimensional hair reconstruction process provided by an embodiment of the present application;
fig. 7 is a block diagram of a three-dimensional hair reconstruction device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
Like reference numerals and letters denote like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. In the description of the present application, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 100 may be used to perform the hair three-dimensional reconstruction method provided by embodiments of the present application. As shown in fig. 1, the electronic device 100 includes: one or more processors 102, one or more memories 104 storing processor-executable instructions. Wherein the processor 102 is configured to perform a hair three-dimensional reconstruction method provided by the following embodiments of the application.
The processor 102 may be a gateway, an intelligent terminal, or a device comprising a central processing unit (CPU), a graphics processing unit (GPU), or another form of processing unit having data processing and/or instruction execution capabilities; it may process data from other components in the electronic device 100 and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the hair three-dimensional reconstruction method described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In one embodiment, the electronic device 100 shown in FIG. 1 may also include an input device 106, an output device 108, and a data acquisition device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device 100 may have other components and structures as desired.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like. The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like. The data acquisition device 110 may acquire images of the subject and store the acquired images in the memory 104 for use by other components. The data acquisition device 110 may be a camera, for example.
In an embodiment, the components in the electronic device 100 for implementing the hair three-dimensional reconstruction method according to the embodiment of the present application may be integrally disposed, or may be separately disposed, such as integrally disposing the processor 102, the memory 104, the input device 106, and the output device 108, and separately disposing the data acquisition device 110.
In an embodiment, the example electronic device 100 for implementing the hair three-dimensional reconstruction method of the embodiment of the present application may be implemented as a smart terminal such as a smart phone, a tablet computer, a desktop computer, a smart watch, a vehicle-mounted device, etc.
Fig. 2 is a flow chart of a three-dimensional hair reconstruction method according to an embodiment of the present application. As shown in fig. 2, the method includes the following steps S310 to S340.
Step S310, acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction map.
The hair may be human hair or animal fur. The following embodiments mainly take the three-dimensional reconstruction of human hair as an example; the three-dimensional reconstruction of animal fur can be performed by analogy.
The original hair image may be extracted from a two-dimensional person image or animal image. It may be captured by a camera, or acquired locally or from another device. To distinguish it from the two-dimensional hair image re-rendered below, the hair image acquired here is referred to as the original hair image. In an embodiment, hair edge detection may be performed on a target image to obtain the original hair image. The target image may be a person image or an animal image. The original hair image may be a hair region segmented from a person image, or an image of the area where the animal is located cropped out of an animal image (i.e., with the interference of the surrounding environment removed).
The hair direction map indicates the hair direction at different positions in the original hair image. The original hair image is filtered by a linear filter to obtain the hair directions at different positions of the original hair image, which together form the hair direction map.
In an embodiment, the linear filter may be a Gabor filter. Filtering the original hair image with a Gabor filter yields, at each pixel, a direction value representing the direction of the local texture. This direction value can be regarded as the angle of the texture direction in the local area around that pixel, i.e., the hair direction. The angle values of all pixels constitute the hair direction map.
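The orientation detection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the kernel size, wavelength, number of sampled angles, and the synthetic striped test image are all assumptions. The response is strongest when the kernel's oscillation axis aligns with the direction of intensity variation; the strand direction itself runs perpendicular to that axis.

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, lam=4.0):
    """Real part of a Gabor kernel whose wave axis is at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate along the wave axis
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def orientation_at(image, cy, cx, n_angles=18):
    """Angle (degrees) of the strongest Gabor response at pixel (cy, cx)."""
    half = 7  # matches ksize=15 above
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    responses = [abs(np.sum(patch * gabor_kernel(t))) for t in angles]
    return np.degrees(angles[int(np.argmax(responses))])

# Synthetic texture: intensity varies along x (the "strands" run vertically).
yy, xx = np.mgrid[0:64, 0:64]
stripes = np.cos(2 * np.pi * xx / 4.0)
print(orientation_at(stripes, 32, 32))  # wave axis at 0 degrees; strands are perpendicular
```

In practice a full filter bank would be convolved over the whole image so that every pixel receives an angle value, producing the hair direction map.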
Step S320, constructing initial three-dimensional hair data according to the hair direction map and a preset three-dimensional target model.
In an embodiment, when the hair is human hair, the three-dimensional target model may be a three-dimensional head model; when the hair is animal fur, it may be a three-dimensional animal model. The initial three-dimensional hair data is defined relative to the target three-dimensional hair data of step S330: the initial three-dimensional hair data is the initially reconstructed hair spatial position data, while the target three-dimensional hair data is the hair spatial position data after the initial three-dimensional hair data has been optimized.
The initial three-dimensional hair data may include the three-dimensional position coordinates of a plurality of points corresponding to each virtual hair. For example, the three-dimensional root positions of 1024 hair roots may be set in advance. According to the hair directions at different positions indicated by the hair direction map, each virtual hair is extended from its preset root position on the three-dimensional target model, yielding the three-dimensional position coordinates of a plurality of points for each virtual hair.
Extending from the root position may be done by increasing or decreasing the x and y coordinates according to the hair direction, starting from the root position, to obtain new x and y coordinates. The z coordinate is calculated by projecting the new x, y coordinates onto the three-dimensional target model and taking the z coordinate of the projected point as the coordinate of the hair at that location. For example, with bangs, a hair on the forehead grows downward against the scalp.
A virtual hair is computer-drawn hair, as opposed to real hair. For example, a virtual hair may be constructed by extending one segment from the root position to obtain a three-dimensional point coordinate, extending the next segment to obtain the next three-dimensional point coordinate, and so on. In one embodiment, 100 point coordinates may be obtained for each virtual hair; these 100 point coordinates are the three-dimensional position coordinates of the plurality of points corresponding to that virtual hair.
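The strand-growing step above can be sketched as follows. The spherical head model, its radius, the step length, and the constant downward direction field are all illustrative assumptions; the patent itself does not specify the head geometry.

```python
import numpy as np

HEAD_RADIUS = 10.0  # assumed spherical head model, centred at the origin

def project_z(x, y, radius=HEAD_RADIUS):
    """z coordinate of the point (x, y) projected onto the head sphere."""
    d2 = radius**2 - x**2 - y**2
    return np.sqrt(max(d2, 0.0))

def grow_strand(root_xy, direction_fn, n_points=100, step=0.1):
    """Extend a virtual hair from a root position, following the direction map.

    direction_fn(x, y) plays the role of the hair direction map: it returns
    the local hair angle in radians. Each new (x, y) is pushed onto the head
    surface to obtain its z coordinate.
    """
    x, y = root_xy
    strand = [(x, y, project_z(x, y))]
    for _ in range(n_points - 1):
        theta = direction_fn(x, y)
        x, y = x + step * np.cos(theta), y + step * np.sin(theta)
        strand.append((x, y, project_z(x, y)))
    return strand

# Constant downward direction, e.g. bangs growing down the forehead
strand = grow_strand((0.0, 5.0), lambda x, y: -np.pi / 2)
print(len(strand))  # 100 points per virtual hair
```

Repeating this for each of the 1024 preset roots yields the initial three-dimensional hair data.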
Step S330, optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data.
In an embodiment, the hair generation model may be trained as a VAE (variational autoencoder). Specifically, the position coordinates of a plurality of points belonging to the same real hair are obtained, machine learning is performed using these coordinates, and the hair generation model is obtained by training. For example, the known position coordinates (x, y, z) of 100 points on one hair can be taken as input (i.e., 100 × 3 numbers); an 8-dimensional feature vector is obtained through the encoding module (encoder) of the VAE, and the decoding module (decoder) maps the 8-dimensional feature vector back to the position coordinates of 100 points. The parameters of the encoding and decoding modules are adjusted so that the output position coordinates match the input as closely as possible.
The VAE thus learns the distribution of the coordinates of the points along a single hair. Then, for each virtual hair, the three-dimensional position coordinates of its points are input into the hair generation model, and the optimized three-dimensional position coordinates of those points are obtained as output. In an embodiment, the three-dimensional position coordinates of the points of one virtual hair are input into the encoding module of the hair generation model, which outputs a hair feature vector characterizing the shape of the individual hair. The hair feature vector is then input into the decoding module, which outputs the optimized three-dimensional position coordinates of the points. The encoding and decoding modules of the hair generation model are trained by machine learning as described above.
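The encode/decode data flow can be sketched as below. This is only a shape-level sketch: a real VAE has nonlinear layers plus a mean/log-variance head and a sampling step, whereas here random linear maps stand in for the trained encoder and decoder to show the 100 × 3 → 8 → 100 × 3 pipeline over 1024 strands.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POINTS, LATENT = 100, 8          # 100 points per strand, 8-dim feature vector

# Stand-in weights; a trained VAE would learn these
W_enc = rng.normal(size=(N_POINTS * 3, LATENT)) * 0.01
W_dec = rng.normal(size=(LATENT, N_POINTS * 3)) * 0.01

def encode(strand):                 # strand: (100, 3) -> feature: (8,)
    return strand.reshape(-1) @ W_enc

def decode(feature):                # feature: (8,) -> strand: (100, 3)
    return (feature @ W_dec).reshape(N_POINTS, 3)

strands = rng.normal(size=(1024, N_POINTS, 3))        # 1024 virtual hairs
features = np.stack([encode(s) for s in strands])     # (1024, 8)
optimized = np.stack([decode(f) for f in features])   # (1024, 100, 3)
print(features.shape, optimized.shape)
```

The 1024 × 8 feature array is exactly the quantity that step S340 later adjusts through the differentiable-rendering loss.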
For example, with 1024 virtual hairs and the three-dimensional coordinates of 100 points per hair, the encoding module produces 1024 (hairs) × 8 (feature-vector dimensions per hair) values. After these data pass through the decoding module, the optimized three-dimensional position coordinates of 1024 × 100 points are obtained. After optimization by the hair generation model, the three-dimensional position coordinates of the virtual hairs conform to the distribution pattern of real hair, without distortion. Assuming 100 points per virtual hair and 1024 virtual hairs in total, the target three-dimensional hair data comprises the optimized three-dimensional position coordinates (x, y, z) of 1024 × 100 points. The line through the 100 points belonging to one virtual hair is a hair strand.
As shown in fig. 3, (a) is the virtual hair constructed from the hair direction map, and (b) is the optimized virtual hair. From (a), it can be seen that the virtual hair constructed from the hair direction map differs considerably from real hair: it does not conform to the hair distribution pattern and exhibits strange twists. The optimized virtual hair is more natural and closer to real hair.
Step S340, rendering the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
The two-dimensional hair image may be regarded as a 2D image obtained by photographing the 3D hair model corresponding to the target three-dimensional hair data. Differentiable rendering is a technique for rendering a 3D scene into a 2D image in such a way that gradients can flow back through the renderer. Because the target three-dimensional hair data is reconstructed from the original hair image, and the two-dimensional hair image is obtained by photographing the corresponding 3D hair model, the two images would ideally be identical. The parameters of the hair generation model can therefore be readjusted based on the two-dimensional hair image to minimize its difference from the original hair image. The target three-dimensional hair data produced by the hair generation model when this difference is minimal is taken as the hair three-dimensional reconstruction result.
According to the technical scheme provided by the embodiments of the application, the hair direction of the original hair image is detected and initial three-dimensional hair data is constructed based on that direction; the hair shape of the initial three-dimensional hair data is then optimized through the hair generation model to obtain target three-dimensional hair data; the target three-dimensional hair data can be rendered into a two-dimensional hair image by differentiable rendering, and the difference between the two-dimensional hair image and the original hair image is minimized by optimizing the hair generation model. The resulting target three-dimensional hair data is more accurate and conforms to the growth pattern of real hair.
In an embodiment, as shown in fig. 4, step S340 may include the following steps S341 to S345.
Step S341: and constructing a virtual camera facing the three-dimensional target model.
For example, assuming that the three-dimensional target model is a head model, the virtual camera may face the human face to acquire the two-dimensional image. The virtual camera is defined relative to a real camera: it is constructed by setting camera parameters such as position, focal length, and field of view to simulate the effect of real shooting.
Step S342: and projecting the target three-dimensional hair data to a two-dimensional plane with the viewpoint of the virtual camera to form a two-dimensional hair image.
For example, assuming that the target three-dimensional hair data contains the three-dimensional position coordinates of 1024 × 100 points (100 points per virtual hair), the coordinates of each point relative to the virtual camera can be calculated. A coordinate system is established with the virtual camera as the origin; the world coordinates of the target three-dimensional hair data are converted into the camera coordinate system, and a perspective projection transformation then converts them from the camera coordinate system into the image coordinate system, yielding the coordinates of each three-dimensional point in the two-dimensional hair image. Since hair consists of strands rather than ordinary mesh faces, strands overlap one another, so only the hair nearest to the camera (i.e., the outermost) is rendered. If needed, the virtual hair corresponding to each pixel in the two-dimensional hair image can be recorded.
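The projection and nearest-strand visibility test can be sketched as follows. An axis-aligned camera looking down the negative z axis, the focal length, and the image resolution are all assumptions for illustration; a general camera would add a rotation to the world-to-camera step.

```python
import numpy as np

def project_points(points_world, cam_pos, focal=50.0, res=64):
    """Project 3D points into a virtual camera looking down -z from cam_pos.

    Returns the image-plane pixel coordinates of each point and an owner map
    recording, per pixel, the index of the point nearest to the camera.
    """
    pts = points_world - cam_pos                 # world -> camera (axis-aligned camera assumed)
    z = -pts[:, 2]                               # depth in front of the camera
    u = focal * pts[:, 0] / z + res / 2          # perspective projection
    v = focal * pts[:, 1] / z + res / 2
    zbuf = np.full((res, res), np.inf)
    owner = np.full((res, res), -1)              # which point is visible per pixel
    for i, (ui, vi, zi) in enumerate(zip(u, v, z)):
        r, c = int(round(vi)), int(round(ui))
        if 0 <= r < res and 0 <= c < res and zi < zbuf[r, c]:
            zbuf[r, c], owner[r, c] = zi, i      # nearest (outermost) hair wins
    return u, v, owner

# Two points on the same camera ray: only the nearer one should be rendered
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -5.0]])
cam = np.array([0.0, 0.0, 20.0])
u, v, owner = project_points(pts, cam)
print(owner[32, 32])
```

The owner map is what lets each rendered pixel be traced back to a specific virtual hair, as the paragraph above notes.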
Step S343: a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image is calculated.
Step S344: a second difference between the hair pattern of the two-dimensional hair image and the hair pattern of the original hair image is calculated.
The order of steps S343 and S344 is not limited. The hair contour can be extracted by an edge detection algorithm; it corresponds to the hair boundary. The first difference characterizes the difference between the hair contour of the original hair image and that of the two-dimensional hair image. In an embodiment, the magnitude of the contour difference may be characterized by the Euclidean distance between corresponding pixels. The hair direction map of the two-dimensional hair image can likewise be detected with a Gabor filter. The second difference characterizes the difference between the hair direction map of the two-dimensional hair image and that of the original hair image. In an embodiment, it may be characterized by the differences between the direction values of corresponding pixels.
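The two differences can be computed as sketched below. The exact loss forms are assumptions consistent with the description: mean Euclidean distance between corresponding contour points for the first difference, and mean angular difference (wrapped, since a hair orientation of 170° is only 20° from 10°) for the second.

```python
import numpy as np

def contour_loss(contour_rendered, contour_original):
    """First difference: mean Euclidean distance between corresponding contour points."""
    return np.mean(np.linalg.norm(contour_rendered - contour_original, axis=1))

def direction_loss(dirmap_rendered, dirmap_original):
    """Second difference: mean absolute angular difference, wrapped to [0, 90] degrees."""
    d = np.abs(dirmap_rendered - dirmap_original) % 180.0
    return np.mean(np.minimum(d, 180.0 - d))

# Toy corresponding contour points and per-pixel direction values (degrees)
contour_a = np.array([[0.0, 0.0], [3.0, 4.0]])
contour_b = np.array([[0.0, 0.0], [0.0, 0.0]])
dir_a = np.array([[10.0, 170.0]])
dir_b = np.array([[20.0, 10.0]])
loss = contour_loss(contour_a, contour_b) + direction_loss(dir_a, dir_b)
print(loss)
```

The sum of the two terms is the scalar that step S345 drives down by back-propagation.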
Step S345: iteratively optimizing the hair generation model to minimize a sum of the first and second differences.
In one embodiment, the sum of the first difference and the second difference may be used as the loss: the gradient is computed by a back-propagation algorithm and the parameters of the hair generation model are adjusted, changing the 1024×8 feature vectors (one 8-dimensional feature vector per strand). The 1024×8 feature vectors are then passed through the decoder of the VAE, yielding target three-dimensional hair data that matches the contour and texture of the photographed hair while conforming to the statistics of real hair. The sum of the first difference and the second difference is thereby minimized, and the target three-dimensional hair data at which this sum is minimal may be taken as the three-dimensional hair reconstruction result of the original hair image.
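A toy illustration of this iterative optimization, with the per-strand feature vectors as the free parameters. Everything here is a stand-in: the linear `decode` replaces the frozen VAE decoder, the squared error replaces the rendered contour-plus-pattern loss, and the sizes and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strands, feat_dim, out_dim = 4, 8, 6          # tiny sizes for illustration
W = rng.normal(size=(feat_dim, out_dim))        # stand-in for the frozen VAE decoder
target = rng.normal(size=(n_strands, out_dim))  # stand-in for the photo-derived target

def decode(z):
    return z @ W                                 # strand geometry from feature vectors

def loss(z):
    # Stand-in for (first difference + second difference) on the rendered image.
    return float(np.sum((decode(z) - target) ** 2))

z0 = rng.normal(size=(n_strands, feat_dim))      # initial per-strand feature vectors
z = z0.copy()
lr = 0.01
for _ in range(500):
    grad = 2.0 * (decode(z) - target) @ W.T      # analytic gradient (back-propagation)
    z -= lr * grad                               # adjust the feature vectors
```

The key point the sketch preserves is that the decoder stays fixed while only the latent feature vectors move, so every iterate still decodes to strands that follow the learned real-hair statistics.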
As shown in fig. 5, (a) is the hair contour segmented from the real photograph, (b) is the currently grown hair (i.e., the initial three-dimensional hair data), and (c) is the hair pattern, with different colors representing different directions.
See (d): comparing a with b, the contours differ, so the excess portion must be compressed inward and the insufficient portion stretched outward. Comparing c with b, for each strand in the 3D data of b, the strand direction can be calculated from two adjacent points, and the projection result (the b diagram) is then used to look up the direction at the corresponding position in c. See (e): line 52 is the strand direction of b, and line 51 is the direction detected from the real hair. Line 52 needs to become line 51; arrow 53 indicates the direction of change. Through the hair generation model, the strand directions of b can be optimized to be closer to the directions of the real hair.
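The direction comparison of fig. 5(e) can be sketched as below: the local direction at a point of a strand is estimated from two adjacent points, then compared against the direction detected in the photograph. The helper names and the unsigned-angle convention are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def strand_directions(points):
    """points: (n_points, 3) ordered along one strand.
    Returns (n_points - 1, 3) unit direction vectors between adjacent points."""
    seg = np.diff(points, axis=0)
    return seg / np.linalg.norm(seg, axis=1, keepdims=True)

def direction_error(d_strand, d_photo):
    """Angle between the strand direction and the detected photo direction
    (both unit vectors); the optimization drives this angle toward zero."""
    cos = np.clip(np.abs(d_strand @ d_photo), 0.0, 1.0)  # unsigned: hair has no arrow
    return float(np.arccos(cos))
```

This error per sampled point is what arrow 53 visualizes: the rotation needed to bring line 52 onto line 51.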
As shown in fig. 6, a head image containing a face is input, and the hair region of this image can be used as the original hair image.
With the technical scheme provided by the embodiments of the present application, the projection of the generated target three-dimensional hair data (i.e., the two-dimensional hair image) is closer in contour and texture to the real photograph (i.e., the original hair image), the shape of each strand better conforms to real-world statistics, and the three-dimensional hair reconstruction result is more accurate.
The following are device embodiments of the present application, which may be used to perform the above-described embodiments of the three-dimensional hair reconstruction method. For details not disclosed in the device embodiments, please refer to the method embodiments of the present application.
Fig. 7 is a block diagram of a three-dimensional hair reconstruction device according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: a direction detection module 810, a model construction module 820, a hair optimization module 830, and a reverse pass module 840.
A direction detection module 810, configured to acquire an original hair image, and detect a hair direction of the original hair image, to generate a hair pattern;
the model construction module 820 is configured to construct initial three-dimensional hair data according to the hair pattern and a preset three-dimensional target model;
a hair optimization module 830 for optimizing the hair shape of the initial three-dimensional hair data by a hair generation model to obtain target three-dimensional hair data;
a backward pass module 840, configured to render the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering, and to minimize the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
The implementation of the functions and actions of each module in the above device is described in detail in the corresponding steps of the three-dimensional hair reconstruction method, and is not repeated here.
In the several embodiments provided in the present application, the disclosed apparatus and method may be implemented in other manners, and the apparatus embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

Claims (10)

1. A method of three-dimensional reconstruction of hair, comprising:
acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction diagram;
constructing initial three-dimensional hair data according to the hair pattern and a preset three-dimensional target model;
optimizing the hair shape of the initial three-dimensional hair data by a hair generation model to obtain target three-dimensional hair data;
rendering the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering;
calculating a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image;
calculating a second difference between a hair pattern of the two-dimensional hair image and a hair pattern of the original hair image;
iteratively optimizing the hair generation model to minimize a sum of the first and second differences.
2. The method of claim 1, wherein the acquiring an original hair image and detecting a hair direction of the original hair image, generating a hair pattern, comprises:
performing hair edge detection on the target image to obtain the original hair image;
and filtering the original hair image by using a linear filter to obtain hair directions of different positions of the original hair image, so as to form the hair pattern.
3. The method of claim 1, wherein the initial three-dimensional hair data comprises: three-dimensional position coordinates of a plurality of points corresponding to each virtual hair; constructing initial three-dimensional hair data according to the hair pattern and a preset three-dimensional target model, wherein the initial three-dimensional hair data comprises:
and according to the hair directions of different positions indicated by the hair pattern, starting to extend from the preset root position of the three-dimensional target model, and obtaining three-dimensional position coordinates of a plurality of points corresponding to each virtual hair.
4. A method according to claim 3, wherein said optimizing the hair shape of said initial three-dimensional hair data by means of a hair generation model to obtain target three-dimensional hair data comprises:
for each virtual hair, taking three-dimensional position coordinates of a plurality of points corresponding to the virtual hair as input of the hair generation model, and obtaining the optimized three-dimensional position coordinates of a plurality of points output by the hair generation model;
and obtaining the target three-dimensional hair data according to the three-dimensional position coordinates of the plurality of points after each virtual hair is optimized.
5. The method according to claim 4, wherein obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair-generation model using the three-dimensional position coordinates of the plurality of points as input to the hair-generation model comprises:
inputting the three-dimensional position coordinates of the points into an encoding module of the hair generation model, and outputting a hair characteristic vector;
inputting the hair characteristic vector into a decoding module of the hair generation model, and outputting three-dimensional position coordinates of the optimized multiple points.
6. The method according to claim 4, wherein prior to taking the three-dimensional position coordinates of the plurality of points as input to the hair-generation model, obtaining the three-dimensional position coordinates of the optimized plurality of points output by the hair-generation model, the method further comprises:
acquiring position coordinates of a plurality of points belonging to the same real hair;
and performing machine learning by utilizing the position coordinates of the points belonging to the same real hair, and training to obtain the hair generation model.
7. The method of claim 1, wherein said rendering the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering comprises:
constructing a virtual camera facing the three-dimensional target model;
and projecting the target three-dimensional hair data to a two-dimensional plane with the viewpoint of the virtual camera to form a two-dimensional hair image.
8. A three-dimensional hair reconstruction device, comprising:
the direction detection module is used for acquiring an original hair image, detecting the hair direction of the original hair image and generating a hair direction diagram;
the model construction module is used for constructing initial three-dimensional hair data according to the hair pattern and a preset three-dimensional target model;
the hair optimization module is used for optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
a backward pass module, configured to render the target three-dimensional hair data into a two-dimensional hair image by differentiable rendering; calculate a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image; calculate a second difference between a hair pattern of the two-dimensional hair image and a hair pattern of the original hair image; and iteratively optimize the hair generation model to minimize a sum of the first and second differences.
9. An electronic device, the electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the hair three-dimensional reconstruction method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the hair three-dimensional reconstruction method according to any one of claims 1-7.
CN202011413467.0A 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium Active CN112419487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011413467.0A CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011413467.0A CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112419487A CN112419487A (en) 2021-02-26
CN112419487B true CN112419487B (en) 2023-08-22

Family

ID=74776306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011413467.0A Active CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112419487B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907715B (en) * 2021-03-19 2024-04-12 网易(杭州)网络有限公司 Hair model making method, device, storage medium and computer equipment
CN113129347B (en) * 2021-04-26 2023-12-12 南京大学 Self-supervision single-view three-dimensional hairline model reconstruction method and system
CN113313802B (en) * 2021-05-25 2022-03-11 完美世界(北京)软件科技发展有限公司 Image rendering method, device and equipment and storage medium
CN113658326A (en) * 2021-08-05 2021-11-16 北京奇艺世纪科技有限公司 Three-dimensional hair reconstruction method and device
CN114187633B (en) * 2021-12-07 2023-06-16 北京百度网讯科技有限公司 Image processing method and device, and training method and device for image generation model
CN114723888B (en) * 2022-04-08 2023-04-07 北京百度网讯科技有限公司 Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114758391B (en) * 2022-04-08 2023-09-12 北京百度网讯科技有限公司 Hair style image determining method, device, electronic equipment, storage medium and product
CN114693856B (en) * 2022-05-30 2022-09-09 腾讯科技(深圳)有限公司 Object generation method and device, computer equipment and storage medium
CN116051729B (en) * 2022-12-15 2024-02-13 北京百度网讯科技有限公司 Three-dimensional content generation method and device and electronic equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
WO2017185301A1 (en) * 2016-04-28 2017-11-02 华为技术有限公司 Three-dimensional hair modelling method and device
CN109064547A (en) * 2018-06-28 2018-12-21 北京航空航天大学 A kind of single image hair method for reconstructing based on data-driven
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
CN110060323A (en) * 2019-03-18 2019-07-26 叠境数字科技(上海)有限公司 The rendering method of three-dimensional hair model opacity
CN110766799A (en) * 2018-07-27 2020-02-07 网易(杭州)网络有限公司 Method and device for processing hair of virtual object, electronic device and storage medium
CN111540021A (en) * 2020-04-29 2020-08-14 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102800129B (en) * 2012-06-20 2015-09-30 浙江大学 A kind of scalp electroacupuncture based on single image and portrait edit methods
EP3241187A4 (en) * 2014-12-23 2018-11-21 Intel Corporation Sketch selection for rendering 3d model avatar
WO2017181332A1 (en) * 2016-04-19 2017-10-26 浙江大学 Single image-based fully automatic 3d hair modeling method

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
WO2017185301A1 (en) * 2016-04-28 2017-11-02 华为技术有限公司 Three-dimensional hair modelling method and device
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
CN109064547A (en) * 2018-06-28 2018-12-21 北京航空航天大学 A kind of single image hair method for reconstructing based on data-driven
CN110766799A (en) * 2018-07-27 2020-02-07 网易(杭州)网络有限公司 Method and device for processing hair of virtual object, electronic device and storage medium
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
CN110060323A (en) * 2019-03-18 2019-07-26 叠境数字科技(上海)有限公司 The rendering method of three-dimensional hair model opacity
CN111540021A (en) * 2020-04-29 2020-08-14 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment

Non-Patent Citations (1)

Title
A fast and reusable three-dimensional hair model modeling method; Li Kang; Geng Guohua; Zhou Mingquan; Han Yi; Journal of Northwest University (Natural Science Edition) (No. 02); 209-213 *

Also Published As

Publication number Publication date
CN112419487A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112419487B (en) Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
CN109325437B (en) Image processing method, device and system
CN114694221B (en) Face reconstruction method based on learning
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
KR101635730B1 (en) Apparatus and method for generating montage, recording medium for performing the method
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
US20210012550A1 (en) Additional Developments to the Automatic Rig Creation Process
Hu et al. Capturing braided hairstyles
JP2024501986A (en) 3D face reconstruction method, 3D face reconstruction apparatus, device, and storage medium
CN111833236B (en) Method and device for generating three-dimensional face model for simulating user
CN109002763B (en) Method and device for simulating human face aging based on homologous continuity
CN108615256B (en) Human face three-dimensional reconstruction method and device
KR20090092473A (en) 3D Face Modeling Method based on 3D Morphable Shape Model
CN113570684A (en) Image processing method, image processing device, computer equipment and storage medium
CN110930503A (en) Method and system for establishing three-dimensional model of clothing, storage medium and electronic equipment
CN113808277A (en) Image processing method and related device
CN116416376A (en) Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
CN111680573A (en) Face recognition method and device, electronic equipment and storage medium
KR20190069750A (en) Enhancement of augmented reality using posit algorithm and 2d to 3d transform technique
CN109035380B (en) Face modification method, device and equipment based on three-dimensional reconstruction and storage medium
CN112862672B (en) Liu-bang generation method, device, computer equipment and storage medium
CN112288861B (en) Single-photo-based automatic construction method and system for three-dimensional model of human face
US20230079478A1 (en) Face mesh deformation with detailed wrinkles
CN114820907A (en) Human face image cartoon processing method and device, computer equipment and storage medium
CN112184611A (en) Image generation model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant