CN113112616A - Editable three-dimensional hair style generation method based on improved Hair-GAN - Google Patents

Editable three-dimensional hair style generation method based on improved Hair-GAN

Info

Publication number
CN113112616A
CN113112616A
Authority
CN
China
Prior art keywords
hair
dimensional
hairstyle
hair style
gan
Prior art date
Legal status
Pending
Application number
CN202110365504.3A
Other languages
Chinese (zh)
Inventor
韩晓迪
张菁
张天驰
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN202110365504.3A
Publication of CN113112616A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an editable three-dimensional hair style generation method, in particular an editable three-dimensional hair style generation method based on an improved Hair-GAN. The method mainly comprises the following steps. First, an image editing interface is designed through which a user can design the hair style shape and hair direction that he or she desires. Then, a hair style picture database is searched for an image whose shape and hair direction match the edited hair style, and the retrieved picture sample is further processed for detail. The two-dimensional direction map, confidence map and other information of the processed picture sample are passed as input into the improved Hair-GAN framework, which converts the input two-dimensional information into a corresponding three-dimensional direction voxel field to guide the generation of a three-dimensional hair style. The improved Hair-GAN framework is modified according to the VAE-GAN framework, with the aim of reducing the training difficulty of generating the three-dimensional direction voxel field, improving generation efficiency and accuracy, making the designed three-dimensional hair style more realistic, and better meeting the needs of users.

Description

Editable three-dimensional hair style generation method based on improved Hair-GAN
Technical Field
The invention relates to the fields of virtual reality, computer graphics and computer vision, and in particular to an editable three-dimensional hair style generation method based on an improved Hair-GAN.
Background
With the rapid development of emerging virtual reality and augmented reality technologies, user groups pursuing personalization are enthusiastic about the interactive fusion of the real and digital worlds, so virtual character modeling applications open to end users are widely welcomed. Hair style modeling is an important component of virtual character modeling: different hair styles can reflect the identity, character and even preferences of different characters. Modeling hair styles that resemble real-world input makes virtual characters more vivid, lifelike and personalized. However, owing to the diversity of real hair styles and their intricate geometry, flexibly creating the realistic virtual three-dimensional hair styles that users need has long been a significant challenge in graphics research.
In recent years, deep learning algorithms have been widely applied across many research fields and have become a popular trend. One characteristic of deep learning is that it effectively converts database model information into high-dimensional feature representations by learning from diverse data. Inspired by this, many scholars have introduced deep learning into image-based three-dimensional hair style modeling research; however, the generation process often suffers from training difficulty and a lack of realism in the results. In current three-dimensional hair style reconstruction, the hair style is usually reconstructed from an existing hair style in an image rather than created according to the user's own needs and ideas; that is, interactivity is lacking. Therefore, using deep learning algorithms to create three-dimensional hair styles that users can freely edit and design is of important breakthrough significance.
The invention studies editable three-dimensional hair styles. Through an image editing interface, a user designs the shape and hair direction of the desired hair style. A database containing a large number of hair style pictures is then searched for images whose shape and hair direction match the edited hair style, and the retrieved images are further processed for detail. The two-dimensional direction map, confidence map and other information of the screened pictures are then passed as input into the improved Hair-GAN framework, which converts the input two-dimensional information into a corresponding three-dimensional direction voxel field; the three-dimensional hair style is generated under the guidance and constraint of this voxel field.
Disclosure of Invention
The invention aims to provide a three-dimensional hair style generation method that users can freely design and edit, reducing the labor required to build three-dimensional models. While improving the realism of the generated three-dimensional hair style, the invention improves its interactivity, flexibility and practicability, so that a user can create a three-dimensional hair style that resembles real-world input, meets the user's aesthetic requirements, and can be freely edited and designed.
The specific implementation of the invention is as follows:
(1) through an image editing interface, a user designs the shape and the hair direction of the hairstyle desired by the user;
(2) processing a database with a large number of hair style pictures, and searching a hair style picture similar to an edited hair style in the database as a candidate sample;
(3) modifying the details of the candidate sample retrieved in the step (2);
(4) improving the Hair-GAN framework according to the structure of the VAE-GAN framework, to improve its generation efficiency and accuracy;
(5) generating corresponding two-dimensional direction information, a confidence map and a depth map from the modified hairstyle picture as input information, inputting the input information into an improved Hair-GAN, and generating a three-dimensional direction voxel field corresponding to the hairstyle by the improved Hair-GAN;
(6) generating three-dimensional hair under the constraint and guidance of the generated three-dimensional direction voxel field, thereby completing the generation of the three-dimensional hair style.
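The six steps above can be sketched as a toy pipeline. Every function, field and number below is an illustrative placeholder standing in for interface editing, database retrieval, detail modification and the Hair-GAN forward pass, not the patent's implementation:

```python
# Toy skeleton of the six-step flow; all names and values are hypothetical.

def edit_hairstyle():
    # Step (1): user-edited hair style: a mask area and stroke directions (degrees).
    return {"area": 100.0, "directions": [90.0, 45.0]}

def retrieve_candidate(edit, database):
    # Step (2): pick the database sample closest in area and direction.
    def score(sample):
        area_diff = abs(sample["area"] - edit["area"])
        dir_diff = sum(abs(a - b)
                       for a, b in zip(sample["directions"], edit["directions"]))
        return area_diff + dir_diff
    return min(database, key=score)

def refine_details(sample, edit):
    # Step (3): nudge the candidate toward the edited shape (toy averaging).
    refined = dict(sample)
    refined["area"] = 0.5 * (sample["area"] + edit["area"])
    return refined

def to_input_x(sample):
    # Steps (4)-(5): the real system builds a 2D direction map, confidence map
    # and depth map; here the sample itself stands in for input X.
    return {"X": sample}

def generate_voxel_field(inputs):
    # Step (5): stand-in for the improved Hair-GAN forward pass.
    return {"voxels": inputs["X"]["area"]}

database = [
    {"area": 95.0, "directions": [85.0, 50.0]},
    {"area": 300.0, "directions": [10.0, 10.0]},
]
edit = edit_hairstyle()
candidate = retrieve_candidate(edit, database)
field = generate_voxel_field(to_input_x(refine_details(candidate, edit)))
# Step (6) would grow hair strands under the guidance of field["voxels"].
print(candidate["area"], field["voxels"])  # prints: 95.0 97.5
```

The scoring and averaging are deliberately trivial; they only show how the stages hand data to one another.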
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it:
FIG. 1 is a diagram of an editable three-dimensional hair styling technique route;
FIG. 2 is a process of hair style editing and hair style image candidate sample screening;
FIG. 3 is a diagram of a modified Hair-GAN framework;
FIG. 4 is a process of generating three-dimensional hair guided by the three-dimensional direction voxel field.
Detailed Description
The invention discloses an editable three-dimensional hair style generation method, in particular an editable three-dimensional hair style generation method based on an improved Hair-GAN. The method mainly comprises the following steps. First, an image editing interface is designed through which a user can design the desired hair style shape, hair direction and color. Then, a database containing a large number of hair style pictures is searched for images whose shape and hair direction match the edited hair style, and the retrieved images are further processed for detail. The two-dimensional direction map, confidence map and other information of the screened pictures are passed as input into the improved Hair-GAN framework. The improved Hair-GAN framework is modified according to the VAE-GAN framework with the aim of improving generation efficiency and accuracy. It converts the input two-dimensional information into a corresponding three-dimensional direction voxel field, and the three-dimensional hair style is generated under the guidance and constraint of this voxel field. The technical route of the specific implementation steps is shown in figure 1.
The method comprises the following concrete steps:
step 01, designing an interactive hair style editing interface offering user-selectable options such as curly hair, straight hair and hair style color;
step 02, selecting a large number of hair style pictures as a database, and preprocessing the hair style pictures in the database;
step 03, matching and screening the outline and the hair structure of the hair style sample edited by the user and the hair style picture in the database, selecting a candidate sample similar to the edited sample, and modifying the candidate sample;
step 04, aligning the face of the candidate sample with the three-dimensional human upper-body model, segmenting the hair style in the candidate sample, and then calculating the two-dimensional direction information map and confidence map of the hair style and the depth map of the human body model. The calculated information is fed into the improved Hair-GAN framework as input information X;
step 05, improving the Hair-GAN framework: referring to the structure of the VAE-GAN, an encoder is added on the basis of the Hair-GAN framework to improve the accuracy of the generated result. The input information X generated in step 04 is fed into the improved Hair-GAN framework to generate the corresponding three-dimensional direction voxel field, and the corresponding three-dimensional hair style is generated under the constraint and guidance of this voxel field.
Step 01 of the invention proposes to create an interactive interface that allows the user to design the desired contour, direction, color and other attributes of the hair style.
The step 01 specifically comprises the following steps:
Step 0101, design the hair style editing interface. In this interface, the invention selects a standard reference upper-body portrait of a character (the reference character has no hair style) on which the hair style is edited. The interface provides options for the selectable hair style types, including the choice between straight and curly hair as well as the choice of hair style color;
Step 0102, the user edits the designed hair style. Using an electronic brush on the reference upper-body portrait, the user can freely edit the desired hair style outline, hair direction and so on. After editing the outline, the user can select the desired hair style color through the color option, and the hair style type through the straight-hair and curly-hair options.
Step 02 of the invention processes the database containing a large number of hair style pictures; the aim is to process and classify the pictures so that subsequent work can screen them conveniently. Image retrieval for hair style editing requires a data set containing a large number of hair style images, so the focus of this step is data set processing.
The step 02 mainly comprises the following two steps:
step 0201, hair region segmentation. A binary hair region mask is obtained for each selected image using stroke-based interactive segmentation and the Paint Selection matting tool;
step 0202, direction labeling. For each selected image, the hair area Mh is divided into several sub-regions in which the hair growth direction is consistent and smooth. Taking a fixed two-dimensional coordinate space as the standard, the approximate hair direction angle of each hair region in each hair style picture is measured, and the direction of each region is labeled. After processing, each hair style image in the database has two label maps: a binary label map Mh (representing the hair mask) and a direction label map Md (representing the hair growth direction within the hair region). Meanwhile, to make later database searches more convenient, K-means is used to classify the hair style pictures, which are roughly divided into four classes by hair length: long hair, medium-long hair, short hair and very short hair.
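The coarse K-means grouping by hair length can be sketched with a minimal 1-D K-means. The length values and the choice of k = 4 classes below are hypothetical illustrations; the real system clusters picture-derived features:

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    # Minimal 1-D K-means, sketching the coarse division of hair styles into
    # long / medium-long / short / very short length classes.
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # random distinct starting centers
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical hair lengths in relative units, spanning the four classes.
lengths = [2, 3, 4, 10, 11, 12, 25, 26, 27, 45, 48, 50]
centers = kmeans_1d(lengths, k=4)
print(centers)
```

With well-separated groups the four returned centers approximate the class means, so a new hair style can be assigned to the class of its nearest center.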
Step 03 concerns the method for searching and screening the hair style database. Since step 02 classifies the hair style pictures, the hair style edited by the user can likewise be classified by the K-means method. For example, if the hair style designed by the user belongs to the medium-long class, medium-long pictures are screened first; if no matching hair style picture exists in that class, adjacent hair style classes are screened in turn, which greatly improves screening efficiency.
The step 03 specifically includes the following three steps:
step 0301. Area matching: based on face-region alignment, the input image is aligned with a two-dimensional reference coordinate system, and the hair region is estimated. The hair region |Ha| in the editing interface is then compared with the hair mask region |Hs| of each candidate sample. Candidate samples in the range (0.8|Ha|, 1.25|Ha|) are retained; this step excludes a large number of candidates. Candidate samples with area ratios between 0.8 and 1.25 proceed to the second round of comparison;
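The area screening of step 0301 reduces to a simple ratio test. The sample ids and mask areas below are hypothetical:

```python
def area_filter(edited_area, samples, low=0.8, high=1.25):
    # Keep candidates whose hair-mask area |Hs| lies in (0.8*|Ha|, 1.25*|Ha|);
    # 'samples' maps a sample id to its mask area in pixels (hypothetical data).
    return {name: area for name, area in samples.items()
            if low * edited_area < area < high * edited_area}

samples = {"s1": 90.0, "s2": 130.0, "s3": 79.0, "s4": 120.0}
kept = area_filter(100.0, samples)
print(sorted(kept))  # ['s1', 's4']: s2 and s3 fall outside (80, 125)
```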
step 0302. Hair structure matching: thumbnail matching is then performed on the candidate samples that pass the area screening; only samples with similar boundaries or similar hair directions are selected in this round. For each candidate sample, the invention compares the hair style region mask Hs* of the screened sample with the hair style region mask Ha* of the edited image, and computes the distance between the sample mask Hs* and the edited mask Ha* from the boundary of Ha*, as shown in formula (1):

D(Hs*, Ha*) = |Hs* Δ Ha*|    (formula 1)

In formula 1, Hs* Δ Ha* denotes the symmetric difference between the two masks. To measure the difference between the hair directions F drawn by the user and the hair directions G in the sample picture, both the direction information in the image and the drawn hair directions are scaled to a uniform plane. The approximate direction of each hair region of the candidate sample is then compared with the drawn direction, and the difference is calculated as shown in formula (2):

E(F, G) = Σi |P(Si) − P(Sj)|    (formula 2)

where Si denotes the different hair directions drawn by the user in F, Sj denotes the different hair directions of the sample picture in G, P(Si) is the approximate angle of a drawn hair direction, taking the angle of the line connecting its two end points as reference, and P(Sj) is the approximate angle of the sample image's hair (given in advance in the data set). If q hair directions are drawn and the sample has m: when q > m, the (q − m) missing P(Sj) default to 0; when q < m, the surplus terms are taken as P(Si) − P(Sj) = 0. Candidate samples are finally screened out accordingly;
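One hedged reading of formula (2), including the padding rule for q > m and q < m, can be sketched as follows. All angles are hypothetical and in degrees:

```python
def direction_difference(drawn, sample):
    # Hedged reading of formula (2): sum of |P(S_i) - P(S_j)| over paired
    # directions. If q > m, the (q - m) missing sample angles default to 0;
    # if q < m, the surplus sample angles contribute nothing.
    q, m = len(drawn), len(sample)
    if q > m:
        sample = sample + [0.0] * (q - m)
    else:
        sample = sample[:q]
    return sum(abs(a - b) for a, b in zip(drawn, sample))

# Hypothetical approximate angles in degrees.
print(direction_difference([90.0, 45.0, 30.0], [80.0, 40.0]))  # 45.0 (10 + 5 + 30)
print(direction_difference([90.0], [80.0, 40.0]))              # 10.0
```

A lower score means the sample's hair flow better matches the user's strokes, so candidates would be ranked by this value.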
step 0303, modify the candidate sample so that it conforms to the hair style edited by the user. The whole flow is shown in fig. 2.
Step 04 requires a unified model space in which a three-dimensional upper-body character model is defined. A face alignment algorithm is then used to align the edited candidate sample with the three-dimensional upper-body model, the hair style in the candidate sample is segmented, and the two-dimensional direction information map and confidence map of the hair style and the depth map of the human body model are calculated.
The step 04 specifically comprises the following three steps:
step 0401, the invention uses a bounding box as the boundary of the model space, in which a reference-standard 3D hair direction voxel field is generated and the two-dimensional hair direction and confidence maps are captured;
the model space is defined by a bounding box defined according to the bust model and all database hair models, except for some very long hairs (manual selection). Then subdividing a 3D volume with the resolution of 128 x 96 in a bounding box;
step 0402, align the person in the edited candidate sample with the three-dimensional upper-body model, then segment the hair style in the candidate sample;
step 0403, the invention computes the two-dimensional direction and confidence maps of the captured image using an iterative method. To obtain the two-dimensional information maps in the defined model space, the invention aims a virtual camera directly at the upper-body model, with the center of the image plane coinciding with the center of the bounding box, and captures a two-dimensional image under orthogonal projection at a scale of 1024/H; the captured image size is therefore 1024 × 1024. Considering the varying quality of real images, the number of iterations over the database is chosen randomly between 3 and 5. Furthermore, as mentioned above, since hair grows from the scalp and is distributed around the upper body, the upper-body model should also be treated as a condition of the generation network. The invention computes, by ray tracing, the depth corresponding to each pixel of the upper-body model in the image, obtains the distance from the upper-body model to the camera, and divides it by D so that the range lies within [0, 1]. Finally, the network input consists of all input information X, composed of the two-dimensional direction map, the confidence map and the upper-body-model depth map, with the three-dimensional and two-dimensional direction vectors encoded in a color space.
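The depth normalization at the end of step 0403 (dividing each camera distance by D so values land in [0, 1]) can be sketched as follows. The name d_max stands in for D, and the background handling and all values are assumptions for illustration:

```python
def normalize_depth(depth, d_max):
    # Divide each ray-traced camera distance by D (named d_max here, an
    # assumed name) so all values land in [0, 1]; pixels where the ray
    # misses the bust model (None) are mapped to 0.
    return [[(d / d_max) if d is not None else 0.0 for d in row]
            for row in depth]

# A tiny hypothetical 2 x 2 depth map in the same units as d_max.
raw = [[0.5, 1.0],
       [None, 2.0]]
norm = normalize_depth(raw, d_max=2.0)
print(norm)  # [[0.25, 0.5], [0.0, 1.0]]
```

In the real system this map is one channel of the network input X, alongside the direction and confidence maps.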
In step 05, by studying and improving the Hair-GAN according to the VAE-GAN, the application of deep learning to three-dimensional hair style generation is deepened, and the accuracy and efficiency of generating the three-dimensional direction voxel field are improved. The three-dimensional hair style is generated under the constraint and guidance of the generated voxel field.
The step 05 specifically comprises the following two steps:
step 0501. The invention adds an Encoder on the basis of Hair-GAN. In this structure, the VAE and the GAN share one Decoder/Generator: the Encoder and the Decoder/Generator form an autoencoder, while the Decoder/Generator and the Discriminator form a GAN. The input information X is fed into the Encoder, which encodes it into a feature distribution Z. The three-dimensional direction voxel field Y′ generated by the Decoder/Generator from Z and the real training sample Y are both fed into the Discriminator for analysis and discrimination. The discrimination results drive the Encoder to adjust its parameters so that the generated feature distribution Z comes closer to the distribution of real samples, and drive the Decoder/Generator to generate Y′ closer to the real samples. The structure of the modified framework is shown in figure 3;
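The data flow of step 0501 (X → Encoder → Z → Decoder/Generator → Y′, with the Discriminator scoring both Y′ and a real sample Y) can be illustrated with toy stand-in functions. None of this is the network itself; it only shows the wiring, and every value is hypothetical:

```python
import math

def encoder(x):
    # Stand-in Encoder: compress input X into a latent code Z (mean, variance).
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return (mu, var)

def decoder_generator(z):
    # Shared Decoder/Generator: expand Z into a stand-in "voxel field" Y'.
    mu, var = z
    return [mu + math.sqrt(var) * t for t in (-1.0, 0.0, 1.0)]

def discriminator(y):
    # Stand-in Discriminator: a bounded "realness" score of a voxel field.
    return 1.0 / (1.0 + sum(abs(v) for v in y))

x = [0.2, 0.4, 0.6]           # stand-in for input X (direction/confidence/depth)
z = encoder(x)                # X -> Z
y_fake = decoder_generator(z) # Z -> Y'
y_real = [0.0, 0.5, 1.0]      # stand-in for a real training sample Y
score_fake = discriminator(y_fake)
score_real = discriminator(y_real)
print(score_fake, score_real)
```

In the real framework these scores would drive gradient updates pulling Z toward the real-sample distribution and Y′ toward Y; here they only trace the arrows of figure 3.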
step 0502, the input information X generated in step 04 is fed into the improved Hair-GAN framework of step 0501 to generate a more accurate three-dimensional direction voxel field; after further optimization, three-dimensional hair is generated under the constraint and guidance of the voxel field, completing the generation of the three-dimensional hair style. The hair style generation process is shown in figure 4.

Claims (6)

1. An editable three-dimensional hair style generation method, characterized by comprising the following steps: first, an image editing interface is designed through which a user can design the desired hair style shape and hair direction; then, a hair style image database is searched for an image whose shape and hair direction match the edited hair style, and the retrieved image sample is further processed for detail; the two-dimensional direction map, confidence map and other information of the processed picture sample are passed as input into the improved Hair-GAN framework, which converts the input two-dimensional information into a corresponding three-dimensional direction voxel field to guide the generation of a three-dimensional hair style.
2. The specific implementation of the invention is as follows:
(1) designing an image editing interface through which a user can design the desired hair style shape, hair direction and so on;
(2) processing a database with a large number of hair style pictures, and searching a hair style picture similar to a hair style edited by a user in the database as a candidate sample;
(3) modifying the details of the candidate sample retrieved in step (2) so that it comes closer to the shape of the hair style designed by the user;
(4) improving the Hair-GAN framework according to the structure of the VAE-GAN framework, to improve its generation efficiency and accuracy;
(5) generating corresponding two-dimensional direction information, a confidence map and a depth map from the modified Hair style picture sample as input information, inputting the input information into an improved Hair-GAN, and generating a three-dimensional direction voxel field corresponding to the Hair style by the improved Hair-GAN;
(6) generating three-dimensional hair under the constraint and guidance of the generated three-dimensional direction voxel field, thereby completing the generation of the three-dimensional hair style.
3. The invention has the beneficial effects that:
(1) with the editable three-dimensional hair style generation method, a user can flexibly edit a hair style of any desired shape and generate the three-dimensional hair style, improving the interactivity and flexibility of three-dimensional hair style generation and reducing the labor consumed in manually modeling three-dimensional characters;
(2) improving the Hair-GAN framework according to the VAE-GAN framework can improve the efficiency of generating the three-dimensional direction voxel field and the realism of the generated three-dimensional hair style.
4. The image editing interface design according to claim 1, characterized in that: in step 1), an upper-body portrait of a character is selected in the interface as a reference, and the hair style is edited on this basis; the interface provides options for the selectable hair style types, including the choice between straight and curly hair as well as the choice of hair style color, so that the user can edit the hair style.
5. The hair style database processing and retrieval method according to claim 1, characterized in that: in step 2), a binary hair region mask is obtained for each selected image using stroke-based interactive segmentation and the Paint Selection matting tool; for each selected image, the approximate hair direction angle of each hair region in each hair style picture is measured and the region's direction is labeled; after processing, each hair style picture in the database has two label maps: a binary label map and a direction label map; meanwhile, the hair style pictures in the database are classified to make retrieval more convenient and faster.
6. The improvement to the Hair-GAN framework according to claim 1, characterized in that: in step 4), by improving the Hair-GAN according to the VAE-GAN, the application of deep learning to three-dimensional hair style generation is deepened, the accuracy and efficiency of generating the three-dimensional direction voxel field are improved, and the three-dimensional hair style is generated under the constraint and guidance of the generated voxel field.
CN202110365504.3A 2021-04-06 2021-04-06 Three-dimensional generation method based on improved editable Pending CN113112616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110365504.3A CN113112616A (en) 2021-04-06 2021-04-06 Three-dimensional generation method based on improved editable


Publications (1)

Publication Number Publication Date
CN113112616A true CN113112616A (en) 2021-07-13

Family

ID=76713947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110365504.3A Pending CN113112616A (en) 2021-04-06 2021-04-06 Three-dimensional generation method based on improved editable

Country Status (1)

Country Link
CN (1) CN113112616A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305146A (en) * 2018-01-30 2018-07-20 杨太立 A kind of hair style recommendation method and system based on image recognition
US20190035163A1 (en) * 2016-01-21 2019-01-31 Alison M. Skwarek Virtual hair consultation
CN109408653A (en) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Human body hair style generation method based on multiple features retrieval and deformation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Meng Zhang et al., "Hair-GAN: Recovering 3D hair structure from a single image using generative adversarial networks", Visual Informatics *
Menglei Chai et al., "AutoHair: Fully Automatic Hair Modeling from A Single Image", ACM Transactions on Graphics *
Weidong Yin et al., "Learning to Generate and Edit Hairstyles", MM '17: Proceedings of the 25th ACM International Conference on Multimedia *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210713)