CN115409922B - Three-dimensional hairstyle generation method, device, electronic equipment and storage medium


Info

Publication number
CN115409922B
CN115409922B (granted from application CN202211047136.9A; earlier publication CN115409922A)
Authority
CN
China
Prior art keywords
hair
hairline
vector
data
hidden vector
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN202211047136.9A
Other languages
Chinese (zh)
Other versions
CN115409922A (en)
Inventor
彭昊天 (Peng Haotian)
陈睿智 (Chen Ruizhi)
赵晨 (Zhao Chen)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211047136.9A
Publication of CN115409922A
Application granted
Publication of CN115409922B
Status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The disclosure provides a three-dimensional hairstyle generation method, a device, electronic equipment and a storage medium, relating to the technical field of artificial intelligence, in particular to the technical fields of augmented reality, virtual reality, computer vision, deep learning and the like, and applicable to scenes such as virtual digital humans and the metaverse. The implementation scheme is as follows: obtaining first hairstyle data corresponding to a preset first head model, wherein the first hairstyle data comprises hairline data of each hairline in a first hairline set, and the hairline data comprises coordinates of each of a plurality of nodes on the hairline; encoding based on a plurality of hairline data corresponding to the first hairline set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and obtaining a three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, the three-dimensional hairstyle including hairline data of each hairline in a second hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline.

Description

Three-dimensional hairstyle generation method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of augmented reality, virtual reality, computer vision, deep learning and the like, is applicable to scenes such as virtual digital humans and the metaverse, and specifically relates to a three-dimensional hairstyle generation method, a three-dimensional hairstyle generation device, electronic equipment, a computer readable storage medium and a computer program product.
Background
Artificial intelligence is the discipline that studies how to make a computer mimic certain human thinking processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning), covering both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Three-dimensional avatars have wide application value in social, live-streaming, gaming, and other user scenarios. Artificial-intelligence-based avatar generation creates an avatar from a single face image; a personalized avatar customized for the user can effectively meet the user's individual needs and has broad application prospects.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a three-dimensional hair style generation method, apparatus, electronic device, computer readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided a three-dimensional hairstyle generation method, including: obtaining first hairstyle data corresponding to a preset first head model, wherein the first hairstyle data comprises hairline data of each hairline in a first hairline set, and the hairline data comprises coordinates of each node in a plurality of nodes on the hairline; encoding based on a plurality of hairline data corresponding to the first hairline set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and obtaining a three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, the three-dimensional hairstyle including hairline data of each hairline in a second hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline, wherein the first hairline set is different from the second hairline set.
According to another aspect of the present disclosure, there is provided a three-dimensional hairstyle generating device comprising: a first hairstyle data obtaining unit configured to obtain first hairstyle data corresponding to a preset first head model, the first hairstyle data including hairline data of each hairline in a first hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline; a first encoding unit configured to encode based on a plurality of hairline data corresponding to the first hairline set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and a three-dimensional hairstyle acquisition unit configured to acquire a three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, the three-dimensional hairstyle including hairline data of each hairline in a second hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline, wherein the first hairline set is different from the second hairline set.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method according to an embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements a method according to embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, a data-driven three-dimensional hairstyle generation technique may be implemented that enables reconstruction of three-dimensional hairlines, generation of different types of three-dimensional hairstyles, and so on, from a small amount of hairline data.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a three-dimensional hairstyle generation method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flowchart of a process of encoding based on a plurality of hair data corresponding to a first hair set in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a hairline self-encoder network in a three-dimensional hairstyle generation method in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart of a process by which encoding based on hair data corresponding to each hair in a first set of hair in a three-dimensional hair style generation method according to an embodiment of the present disclosure may be implemented;
FIG. 6 is a flow chart illustrating a process of encoding based on a plurality of hair hidden vectors corresponding to a first hair set in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of a hairstyle self-encoder network in a three-dimensional hairstyle generation method according to an embodiment of the present disclosure;
fig. 8A and 8B are diagrams illustrating an original hairstyle corresponding to a training data set of a hairstyle self-encoder network and a three-dimensional hairstyle generated from the hairstyle self-encoder network, respectively, in a three-dimensional hairstyle generation method according to an embodiment of the present disclosure;
fig. 9 is a flowchart illustrating a process of obtaining a three-dimensional hairstyle corresponding to a first head model based on a first hairstyle hidden vector in a three-dimensional hairstyle generation method according to an embodiment of the present disclosure;
FIG. 10 illustrates a flow chart of a three-dimensional hairstyle generation method in accordance with an embodiment of the present disclosure;
FIG. 11 is a flowchart illustrating a process of obtaining a target hair style hidden vector based on a first hair style hidden vector and a second hair style hidden vector in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 12 is a block diagram illustrating a structure of a three-dimensional hairstyle generating apparatus according to an embodiment of the present disclosure;
fig. 13 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the three-dimensional hairstyle generation method.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may receive the generated three-dimensional hairstyle using client devices 101, 102, 103, 104, 105 and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve the databases and data from the databases in response to the commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the related art, a three-dimensional hairstyle of an avatar is often generated from a single face image, which must be parsed to achieve this capability. However, a single face image lacks a great deal of information: it typically does not show the side or the back of the hairstyle, and parts of the hairstyle are occluded from view, so that a complete, accurate three-dimensional hairstyle cannot be created.
According to one aspect of the present disclosure, a three-dimensional hairstyle generation method is provided. As shown in fig. 2, a three-dimensional hairstyle generation method 200 according to some embodiments of the present disclosure includes:
step S210: obtaining first hairstyle data corresponding to a preset first head model, wherein the first hairstyle data comprises hairline data of each hairline in a first hairline set, and the hairline data comprises coordinates of each node in a plurality of nodes on the hairline;
step S220: encoding based on a plurality of hairline data corresponding to the first hairline set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and
step S230: obtaining, based on the first hairstyle hidden vector, a three-dimensional hairstyle corresponding to the first head model, wherein the three-dimensional hairstyle comprises hairline data of each hairline in a second hairline set, the hairline data comprises coordinates of each node in a plurality of nodes on the hairline, and the first hairline set is different from the second hairline set.
By encoding based on the hairline data in the first hairstyle data to obtain a first hairstyle hidden vector, and obtaining a three-dimensional hairstyle based on that hidden vector, where the three-dimensional hairstyle comprises hairline data of each hairline in a second hairline set, reconstruction of the three-dimensional hairstyle is achieved; and since the reconstructed three-dimensional hairstyle is characterized by hairlines, reconstruction of three-dimensional hairlines is achieved as well.
Meanwhile, because the second hairline set in the generated three-dimensional hairstyle is different from the first hairline set, a three-dimensional hairstyle containing more hairlines can be generated from a small amount of hairline data, enabling inpainting-style reconstruction, super-resolution reconstruction, and editing of three-dimensional hairstyles.
In some embodiments, the first hairstyle data is obtained from an open-source database, and the preset first head model is a three-dimensional model of any head, e.g., a head model specified by a user, comprising the coordinates of various points on the head surface. The first head model may have a plurality of hairstyles, such as long straight hair, long curly hair, short hair, medium-length hair, and parted hair, with each hairstyle having corresponding hairstyle data.
According to embodiments of the disclosure, the fact that the hairline data of a specific hairstyle on the first head model lie close together in the hidden vector space is exploited to extract features of that space: an expression of the hairstyle in the hidden vector space is obtained and a hairstyle hidden vector is generated, from which the three-dimensional hairstyle is generated. This makes data-driven three-dimensional hairline generation possible and is effective in compensating for the hairline information missing from a single face image.
In some embodiments, the first hairline set may include the hairlines on the front of the first head model, hairlines on other portions of the first head model, or all hairlines on the first head model.
In some embodiments, each hair strand may be a sequence of nodes formed by a plurality of nodes on the hair strand arranged in a sequential order, and the hair strand data for each hair strand includes a sequence of coordinates formed by the coordinates of the respective nodes in the sequence of nodes. For example, each hair includes a node sequence of m nodes arranged in sequence, where m is a positive integer; the hairline data of the hairline includes a coordinate sequence composed of coordinates of the m nodes, that is, a coordinate sequence composed of m coordinates arranged in order.
In some embodiments, the plurality of nodes includes a root node located at the scalp of the head model and a tip node located at the end of the hairline.
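For concreteness, the data layout described above can be sketched as follows. This is an illustrative reading of the disclosure, not part of the patent text; the variable names are hypothetical, and the sizes n = 1200 and m = 20 are borrowed from the example given later in this description.

```python
import numpy as np

# Assumed sizes: n hairlines per hairstyle, m nodes per hairline, 3D coordinates.
n_strands, m_nodes = 1200, 20

# First hairstyle data: one (m, 3) coordinate sequence per hairline, ordered
# from the root node (on the scalp) to the tip node (end of the hairline).
first_hairstyle = np.zeros((n_strands, m_nodes, 3), dtype=np.float32)

strand_0 = first_hairstyle[0]   # hairline data: an ordered (m, 3) coordinate sequence
root_node = strand_0[0]         # coordinates of the root node at the scalp
tip_node = strand_0[-1]         # coordinates of the tip node at the end of the hairline
```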
In some embodiments, the hair style hidden vector is obtained by directly encoding a plurality of hair data corresponding to the first hair set.
The above procedure can be used for hairstyle data with a small amount of data. However, the amount of hairline data in hairstyle data is often large. For example, a first hairline set of 1200 hairlines with 20 nodes per hairline yields a data size of 24000; in a neural network, a fully connected layer over this input would form an oversized 24000x24000 matrix, far too large to process. Meanwhile, because the spatial relationship between hairlines lacks consistency, a convolutional neural network layer cannot be used to simplify the network for such large data; the hairstyle data therefore cannot be processed directly, and the hairstyle hidden vector cannot be obtained.
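The sizes quoted above can be checked with a line of arithmetic (an illustrative aside, not patent text):

```python
n_strands, m_nodes = 1200, 20
flat_size = n_strands * m_nodes   # 24000 values when the hairstyle is flattened
fc_weights = flat_size ** 2       # one 24000 x 24000 fully connected layer
print(flat_size, fc_weights)      # 24000 576000000 -> 576 million weights
```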
In some embodiments, as shown in fig. 3, step S220, encoding based on the plurality of hairline data corresponding to the first hairline set, includes:
step S310: encoding based on the hairline data corresponding to each hairline in the first hairline set to obtain a hairline hidden vector corresponding to the hairline; and
step S320: encoding based on the plurality of hairline hidden vectors corresponding to the first hairline set to obtain the first hairstyle hidden vector.
By encoding the hairline data of each individual hairline to obtain hairline hidden vectors, and then obtaining the hairstyle hidden vector based on the plurality of hairline hidden vectors, the hairstyle hidden vector becomes obtainable: the data size of a single hairline is small enough to encode.
In some embodiments, encoding based on the hairline data corresponding to each hairline in the first hairline set is achieved by a variational self-encoder technique.
Referring to fig. 4, a block diagram of a hairline self-encoder network according to some embodiments of the present disclosure is shown. The hairline self-encoder network 400 includes a hairline encoder 410 and a hairline decoder 420; input data passes through the encoder 410 to generate a hidden vector (latent code), and the decoder 420 regenerates the original data from that vector, so training can be performed unsupervised, without data labels.
In an embodiment according to the present disclosure, the input data of the hairline encoder 410 is the hairline data HairStrand_{x,i} of the i-th hairline in hairstyle data x, and the output data is the hairline hidden vector Latent_{x,i} corresponding to the i-th hairline. HairStrand_{x,i} includes the coordinates Node_{x,i,j} of each node on the i-th hairline in hairstyle data x, where i is the index of the hairline, with 0 ≤ i ≤ n; j is the index of the node on the i-th hairline, with 0 ≤ j ≤ m; n is the number of hairlines in the first hairline set, n being a positive integer; and m is the number of nodes on the i-th hairline, m being a positive integer.
By separately inputting the hairline data of the n hairlines of the hairstyle data into the hairline self-encoder network 400, the network predicts n times for the n hairlines, realizing training of the hairline self-encoder network 400. The hairline encoder 410 and the hairline decoder 420 both start as randomly initialized networks, and Latent_{x,i}, the feature expression of the i-th hairline under hairstyle x, initially carries no information; with the joint training of the hairline encoder 410 and the hairline decoder 420, the hidden vector space gradually acquires the ability to extract hairline features, and Latent_{x,i} gradually becomes expressive.
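A minimal sketch of such a hairline self-encoder is given below. The layer widths, the 32-dimensional hidden vector, and the plain reconstruction loss are assumptions made for illustration; the disclosure mentions a variational self-encoder, whose sampling and KL term are omitted here for brevity.

```python
import torch
from torch import nn

M_NODES, LATENT_DIM = 20, 32     # assumed nodes per hairline and hidden vector size

class StrandEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(M_NODES * 3, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, strand):              # strand: (batch, m, 3) node coordinates
        return self.net(strand.flatten(1))  # Latent_{x,i}: (batch, LATENT_DIM)

class StrandDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, M_NODES * 3),
        )

    def forward(self, latent):               # latent: (batch, LATENT_DIM)
        return self.net(latent).view(-1, M_NODES, 3)

encoder, decoder = StrandEncoder(), StrandDecoder()
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
strands = torch.randn(1200, M_NODES, 3)      # stand-in for real hairline data
for _ in range(10):                          # unsupervised reconstruction training
    recon = decoder(encoder(strands))
    loss = nn.functional.mse_loss(recon, strands)
    opt.zero_grad(); loss.backward(); opt.step()
```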
In some embodiments, for each hair in the first hair set, each node coordinate in the hair data corresponding to the hair is encoded, and a hair hidden vector corresponding to the hair is obtained.
In some embodiments, the plurality of nodes on each hairline in the first hairline set includes a root node, and as shown in fig. 5, step S310 of encoding based on the hairline data corresponding to each hairline in the first hairline set includes:
step S510: for each hairline of the plurality of hairlines, obtaining a position enhancement vector corresponding to the hairline based on the coordinates of the root node of the hairline, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
step S520: for each hairline of the plurality of hairlines, encoding the position enhancement vector corresponding to the hairline together with the hairline data of the hairline to obtain the hairline hidden vector corresponding to the hairline.
Because the dimension of the position enhancement vector is larger than that of the coordinates, it contains more position information; this increased amount of information makes the hairline hidden vector obtained by encoding the position enhancement vector together with the hairline data more accurate.
With continued reference to fig. 4, a position encoder (Positional Encoding) 430 is further included before the hairline encoder 410; the input data of the position encoder 430 is the coordinates Root_i of the root node of the i-th hairline, and the output data is the position enhancement vector.
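The disclosure does not spell out the internals of the position encoder; the sketch below assumes a sinusoidal (NeRF-style) encoding, which maps the 3-dimensional root coordinates to a higher-dimensional position enhancement vector as the text requires. The number of frequencies is an arbitrary choice.

```python
import torch

def position_enhance(root, num_freqs=4):
    """Sinusoidal position enhancement of 3D root-node coordinates.

    Maps (batch, 3) coordinates to (batch, 3 * 2 * num_freqs) vectors, so the
    output dimension exceeds the coordinate dimension as the text requires.
    """
    freqs = 2.0 ** torch.arange(num_freqs)    # 1, 2, 4, 8
    scaled = root.unsqueeze(-1) * freqs       # (batch, 3, num_freqs)
    enhanced = torch.cat([scaled.sin(), scaled.cos()], dim=-1)
    return enhanced.flatten(1)                # (batch, 24)

roots = torch.randn(1200, 3)
print(position_enhance(roots).shape)          # torch.Size([1200, 24])
```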
In some embodiments, after the plurality of hairline hidden vectors corresponding to the plurality of hairlines in the first hairline set are obtained, they are encoded to obtain the hairstyle hidden vector.
In some embodiments, the plurality of nodes on each hairline in the first hairline set includes a root node, and as shown in fig. 6, step S320 of encoding based on the plurality of hairline hidden vectors corresponding to the first hairline set includes:
step S610: obtaining, based on the coordinates of the root node of each hairline in the first hairline set, a position enhancement vector corresponding to the hairline, wherein the dimension of the position enhancement vector is larger than that of the coordinates of the root node; and
step S620: encoding the plurality of position enhancement vectors and the plurality of hairline hidden vectors corresponding to the first hairline set to obtain the first hairstyle hidden vector.
Because the dimension of the position enhancement vector is larger than that of the coordinates, it contains more position information; this increased amount of information makes the hairstyle hidden vector obtained by encoding the position enhancement vectors together with the hairline hidden vectors more accurate.
In some embodiments, encoding the plurality of position enhancement vectors and the plurality of hairline hidden vectors corresponding to the first hairline set is achieved by a variational self-encoder technique.
Referring to fig. 7, a diagram of a hairstyle self-encoder network according to some embodiments of the present disclosure is shown, wherein the hairstyle self-encoder network 700 includes a plurality of hairline encoders 710, a plurality of position encoders 720 corresponding to the plurality of hairline encoders 710, and a hairstyle encoder 730.
Each hairline encoder 710 may employ the hairline encoder 410 of the hairline self-encoder network 400 shown in fig. 4. The input data of each position encoder 720 is the coordinates of the root node of the corresponding hairline, and the output data is the position enhancement vector. The input data of the hairstyle encoder 730 is the fusion vector of each hairline's hidden vector and the corresponding position enhancement vector, and the output data is the hairstyle hidden vector Latent_x.
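A sketch of the fusion feeding the hairstyle encoder 730 follows; concatenation is assumed as the fusion operation, and all dimensions are illustrative rather than taken from the patent.

```python
import torch
from torch import nn

N_STRANDS, LATENT_DIM, POS_DIM, STYLE_DIM = 1200, 32, 24, 256  # assumed sizes

class HairstyleEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STRANDS * (LATENT_DIM + POS_DIM), 512), nn.ReLU(),
            nn.Linear(512, STYLE_DIM),
        )

    def forward(self, strand_latents, pos_vectors):
        # strand_latents: (n, LATENT_DIM); pos_vectors: (n, POS_DIM)
        fused = torch.cat([strand_latents, pos_vectors], dim=-1)  # fusion vectors
        return self.net(fused.flatten())      # Latent_x: (STYLE_DIM,)

encoder = HairstyleEncoder()
latent_x = encoder(torch.randn(N_STRANDS, LATENT_DIM),
                   torch.randn(N_STRANDS, POS_DIM))
print(latent_x.shape)                         # torch.Size([256])
```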
In some embodiments, step S230, obtaining the three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, includes: decoding the first hairstyle hidden vector to obtain the three-dimensional hairstyle.
In some embodiments, decoding the first hairstyle hidden vector includes: first, performing a first decoding on the first hairstyle hidden vector to obtain a hairline hidden vector corresponding to each hairline in the second hairline set; and then performing a second decoding on each of the plurality of hairline hidden vectors corresponding to the second hairline set to obtain the coordinates of each of the plurality of nodes on each hairline.
With continued reference to fig. 7, the hairstyle self-encoder network 700 further includes a hairstyle decoder 740 and a plurality of hairline decoders 750, where each hairline decoder may employ the hairline decoder 420 shown in fig. 4.
After the first hairstyle hidden vector is obtained, it is input to the hairstyle decoder 740 to obtain a plurality of hairline hidden vectors for the plurality of hairlines; these hairline hidden vectors are then correspondingly input to the plurality of hairline decoders 750, so that each hairline decoder decodes the corresponding hairline hidden vector to obtain the hairline data HairStrand_{x,i} of the corresponding hairline, the hairline data including the coordinates of each of a plurality of nodes on the hairline.
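The two-stage decoding can be sketched as follows; the layer sizes are illustrative assumptions, and the number of output hairlines is kept small here (one embodiment below uses 10000).

```python
import torch
from torch import nn

N_OUT, M_NODES, LATENT_DIM, STYLE_DIM = 1000, 20, 32, 256

hairstyle_decoder = nn.Sequential(           # stand-in for decoder 740
    nn.Linear(STYLE_DIM, 512), nn.ReLU(),
    nn.Linear(512, N_OUT * LATENT_DIM),
)
strand_decoder = nn.Sequential(              # stand-in for each decoder 750
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, M_NODES * 3),
)

latent_x = torch.randn(STYLE_DIM)            # first hairstyle hidden vector
strand_latents = hairstyle_decoder(latent_x).view(N_OUT, LATENT_DIM)
strands = strand_decoder(strand_latents).view(N_OUT, M_NODES, 3)
print(strands.shape)                         # torch.Size([1000, 20, 3])
```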
In one embodiment according to the present disclosure, after the hairstyle self-encoder network is trained, the training data set generates a hairstyle hidden vector through the hairline encoders and the hairstyle encoder of the network, and the three-dimensional hairstyle is then reconstructed through the hairstyle decoder and the hairline decoders. As shown in fig. 8A and 8B, the original hairstyle (fig. 8A) corresponding to each training data set is substantially identical to the three-dimensional hairstyle (fig. 8B) reconstructed from it by the hairstyle self-encoder network, demonstrating good self-encoder performance.
In some embodiments, as shown in fig. 9, step S230, obtaining the three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, includes:
step S910: obtaining a target hair style hidden vector based on the first hair style hidden vector; and
step S920: and decoding the target hairstyle hidden vector to obtain the three-dimensional hairstyle.
By obtaining a target hairstyle hidden vector based on the first hairstyle hidden vector and then decoding it, three-dimensional hairstyles different from the first hairstyle data can be obtained, realizing an expansion of hairstyle types.
In some embodiments, the target hair style hidden vector may be a hair style hidden vector obtained after modifying the first hair style hidden vector. For example, the first hair style hidden vector is modified manually to obtain the target hair style hidden vector.
In some embodiments, as shown in fig. 10, the three-dimensional hairstyle generation method according to the present disclosure further includes:
step S1010: obtaining second hairstyle data corresponding to the first head model, the second hairstyle data comprising hairline data of each hairline in a third hairline set on the first head model, the hairline data comprising coordinates of each of a plurality of nodes on the hairline; and
step S1020: encoding a plurality of hairline data corresponding to the third hairline set to obtain a second hairstyle hidden vector corresponding to the second hairstyle data; and wherein step S910, obtaining a target hairstyle hidden vector based on the first hairstyle hidden vector, includes:
obtaining the target hairstyle hidden vector based on the first hairstyle hidden vector and the second hairstyle hidden vector.
The hairstyle data of different hairstyles under the same head model have similar characteristics, so the resulting hairstyle hidden vectors are close together in the hidden vector space, and the data are continuous there; the hairstyle hidden vectors therefore support editing and interpolation. By obtaining hairstyle data of different hairstyles under the same head model (the first hairstyle data and the second hairstyle data) and the hairstyle hidden vectors corresponding to them (the first and second hairstyle hidden vectors), and obtaining the target hairstyle hidden vector from these different hairstyle hidden vectors, interpolation of hidden vectors is achieved, yielding further hairstyle types different from those corresponding to the first hairstyle data and the second hairstyle data.
In some embodiments, the first hair style hidden vector and the second hair style hidden vector are fused to obtain the target hair style hidden vector.
In some embodiments, as shown in fig. 11, obtaining the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector comprises:
step S1110: obtaining weighting coefficients corresponding to the first hairstyle hidden vector and the second hairstyle hidden vector respectively; and
step S1120: determining, as the target hairstyle hidden vector, the sum of the products of the first and second hairstyle hidden vectors with their respective weighting coefficients.
By obtaining the weighting coefficients corresponding to the first and second hairstyle hidden vectors and computing the target hairstyle hidden vector from them, interpolation of hairstyle hidden vectors is realized while the amount of data processing is kept small.
In some embodiments, the sum of the weighting coefficient corresponding to the first hair-style hidden vector and the weighting coefficient corresponding to the second hair-style hidden vector is 1.
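Steps S1110 and S1120 amount to a convex blend of the two hidden vectors when the coefficients sum to 1, as sketched below; the vector size and the value of alpha are arbitrary examples.

```python
import torch

def interpolate_hairstyles(latent_a, latent_b, alpha=0.5):
    # Coefficients alpha and (1 - alpha) sum to 1, as in the embodiment above.
    return alpha * latent_a + (1.0 - alpha) * latent_b

z1 = torch.randn(256)      # first hairstyle hidden vector (illustrative size)
z2 = torch.randn(256)      # second hairstyle hidden vector
target = interpolate_hairstyles(z1, z2, alpha=0.3)  # target hairstyle hidden vector
```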
In some embodiments, decoding the target hair style hidden vector to obtain the three-dimensional hair style comprises:
based on the target hair style hidden vector, obtaining a hair hidden vector corresponding to each hair in the second hair set; and
And for each hair in the second hair set, obtaining the coordinates of each node in the plurality of nodes on the hair based on the hair hidden vector corresponding to the hair.
Decoding the target hairstyle hidden vector yields a hairline hidden vector corresponding to each hairline in the second hairline set; since the second hairline set is different from the first hairline set, an expansion of the hairline data can be realized.
In some embodiments, the number of hairlines in the first hairline set is 1000 and the number of hairlines in the second hairline set is 10000.
In some embodiments, the first hairline set consists of the hairlines on the front of the first head model, and the second hairline set consists of all hairlines on the first head model.
In some embodiments, the target hairstyle hidden vector is input to the hairstyle decoder 740 shown in fig. 7, obtaining a hairline hidden vector for each hairline in the second hairline set.
In some embodiments, the plurality of nodes on each hairline in the second hairline set includes a root node, and obtaining the coordinates of each of the plurality of nodes on the hairline based on the hairline hidden vector corresponding to the hairline includes:
obtaining the coordinates of the root node on the hairline;
obtaining, based on the coordinates of the root node on the hairline, a position enhancement vector corresponding to the hairline, wherein the dimension of the position enhancement vector is larger than that of the coordinates of the root node; and
obtaining the coordinates of each of the plurality of nodes on the hairline based on the position enhancement vector and the hairline hidden vector corresponding to the hairline.
Because the dimension of the position enhancement vector is larger than that of the coordinates, it contains more position information; this increased amount of information makes the coordinates of each node on the corresponding hairline, obtained by decoding based on the position enhancement vector and the corresponding hairline hidden vector, more accurate.
In some embodiments, the coordinates of the root node of each hairline in the second hairline set are obtained based on the coordinates of points on the scalp portion of the first head model.
In some embodiments, the position enhancement vector and the hairline hidden vector corresponding to each hairline are fused to obtain a fusion vector, which is input to a hairline decoder 750 as shown in fig. 7 to obtain the hairline data of that hairline, the hairline data including the coordinates of the nodes on the hairline.
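Pulling the last few paragraphs together, decoding the second hairline set can be sketched as follows; concatenation is again assumed as the fusion, the position encoder repeats the earlier sinusoidal assumption, and all sizes are illustrative.

```python
import torch
from torch import nn

M_NODES, LATENT_DIM, NUM_FREQS = 20, 32, 4
POS_DIM = 3 * 2 * NUM_FREQS                  # 24, larger than the 3D coordinates

def position_enhance(root):                  # same sinusoidal assumption as before
    freqs = 2.0 ** torch.arange(NUM_FREQS)
    scaled = root.unsqueeze(-1) * freqs
    return torch.cat([scaled.sin(), scaled.cos()], dim=-1).flatten(1)

strand_decoder = nn.Sequential(              # stand-in for a hairline decoder 750
    nn.Linear(LATENT_DIM + POS_DIM, 128), nn.ReLU(),
    nn.Linear(128, M_NODES * 3),
)

roots = torch.rand(10000, 3)                 # roots sampled from the scalp surface
latents = torch.randn(10000, LATENT_DIM)     # hairline hidden vectors from decoder 740
fused = torch.cat([latents, position_enhance(roots)], dim=-1)  # fusion vectors
nodes = strand_decoder(fused).view(-1, M_NODES, 3)  # (10000, 20, 3) node coordinates
```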
The three-dimensional hairstyle generation method according to the present disclosure effectively builds a deep-learning capability for understanding three-dimensional hairline data, makes data-driven three-dimensional hairline generation possible, and compensates well for the hairline information missing from a three-dimensional hairstyle generated from a single face image. For developers working with three-dimensional hairlines and deep learning, the technique is simple and effective, connecting three-dimensional hairline data with existing deep-learning capabilities and greatly lowering the development threshold for three-dimensional hairlines.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of user personal information comply with the relevant laws and regulations and do not violate public order and good morals.
According to another aspect of the present disclosure, there is also provided a three-dimensional hairstyle generating device. As shown in fig. 12, the apparatus 1200 includes: a first hairstyle data obtaining unit 1210 configured to obtain first hairstyle data corresponding to a preset first head model, the first hairstyle data including hairline data of each hairline in a first hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline; a first encoding unit 1220 configured to encode based on a plurality of hairline data corresponding to the first hairline set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and a three-dimensional hairstyle acquisition unit 1230 configured to acquire a three-dimensional hairstyle corresponding to the first head model based on the first hairstyle hidden vector, the three-dimensional hairstyle including hairline data of each hairline in a second hairline set, the hairline data including coordinates of each of a plurality of nodes on the hairline, wherein the first hairline set is different from the second hairline set.
In some embodiments, the first encoding unit comprises: a hairline encoding unit configured to encode based on the hairline data corresponding to each hairline in the first hairline set to obtain a hairline hidden vector corresponding to the hairline; and a hairstyle encoding unit configured to encode based on the plurality of hairline hidden vectors corresponding to the first hairline set to obtain the first hairstyle hidden vector.
In some embodiments, the plurality of nodes on each hair strand in the first set of hair strands comprises a root node, the hair strand encoding unit comprising: a first enhancing unit configured to obtain, for each hair of the plurality of hair, a position enhancing vector corresponding to the hair based on coordinates of a root node of the hair, the dimension of the position enhancing vector being greater than the dimension of the coordinates of the root node; and an encoding subunit configured to encode, for each hair of the plurality of hair, a position enhancement vector corresponding to the hair and hair data of the hair to obtain a hair hidden vector corresponding to the hair.
In some embodiments, the plurality of nodes on each hairline in the first hairline set comprises a root node, and the hairstyle encoding unit comprises: a second enhancing unit configured to obtain, based on the coordinates of the root node of each hairline in the first hairline set, a position enhancement vector corresponding to the hairline, wherein the dimension of the position enhancement vector is larger than the dimension of the coordinates of the root node; and an encoding subunit configured to encode the plurality of position enhancement vectors and the plurality of hairline hidden vectors corresponding to the first hairline set to obtain the first hairstyle hidden vector.
In some embodiments, the three-dimensional hair style acquisition unit includes: a target hair style hidden vector acquisition unit configured to acquire a target hair style hidden vector based on the first hair style hidden vector; and a decoding unit configured to decode the target hair style hidden vector to obtain the three-dimensional hair style.
In some embodiments, the apparatus further comprises: a second hairstyle data acquisition unit configured to acquire second hairstyle data corresponding to the first head model, the second hairstyle data including hairline data of each hairline in a third hairline set on the first head model, the hairline data including coordinates of each of a plurality of nodes on the hairline; and a second encoding unit configured to encode a plurality of hairline data corresponding to the third hairline set to obtain a second hairstyle hidden vector corresponding to the second hairstyle data; and wherein the target hairstyle hidden vector acquisition unit includes: a target hairstyle hidden vector acquisition subunit configured to obtain the target hairstyle hidden vector based on the first hairstyle hidden vector and the second hairstyle hidden vector.
In some embodiments, the target hair style implicit vector acquisition subunit comprises: a weighting coefficient obtaining unit configured to obtain weighting coefficients corresponding to the first hair style hidden vector and the second hair style hidden vector, respectively; and an acquisition subunit configured to determine, as the target hair style hidden vector, a sum of products of the first and second hair style hidden vectors and the respective corresponding weighting coefficients.
In some embodiments, the decoding unit comprises: a hairstyle decoding unit configured to obtain a hair hidden vector corresponding to each hair in the second hair set based on the target hairstyle hidden vector; and a hair decoding unit configured to obtain, for each hair in the second hair set, coordinates of each of a plurality of nodes on the hair based on a hair hidden vector corresponding to the hair.
In some embodiments, the plurality of nodes on each hair strand in the second set of hair strands includes a root node, the hair strand decoding unit comprising: a third enhancement unit configured to obtain coordinates of a root node on the hairline, and obtain a position enhancement vector corresponding to the hairline based on the coordinates of the root node on the hairline, where the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and a decoding subunit configured to obtain coordinates of each node of the plurality of nodes on the hair based on the location enhancement vector and the hair hidden vector corresponding to the hair.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to fig. 13, a block diagram of an electronic device 1300, which may be a server or a client of the present disclosure and is an example of a hardware device applicable to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the electronic device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the electronic device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to bus 1304.
Various components in electronic device 1300 are connected to I/O interface 1305, including: an input unit 1306, an output unit 1307, a storage unit 1308, and a communication unit 1309. The input unit 1306 may be any type of device capable of inputting information to the electronic device 1300, the input unit 1306 may receive input numeric or character information, and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 1307 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 1308 may include, but is not limited to, magnetic disks, optical disks. The communication unit 1309 allows the electronic device 1300 to exchange information/data with other devices through computer networks such as the internet and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 1301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1301 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into RAM 1303 and executed by computing unit 1301, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, computing unit 1301 may be configured to perform method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the results desired by the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but is defined only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (20)

1. A three-dimensional hairstyle generation method, comprising:
obtaining first hairstyle data corresponding to a preset first head model, wherein the first hairstyle data comprises hair strand data of each hair strand in a first hair strand set, and the hair strand data comprises coordinates of each of a plurality of nodes on the hair strand;
encoding based on a plurality of pieces of hair strand data corresponding to the first hair strand set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and
obtaining, based on the first hairstyle hidden vector, a three-dimensional hairstyle corresponding to the first head model, wherein the three-dimensional hairstyle comprises hair strand data of each hair strand in a second hair strand set, the hair strand data comprises coordinates of each of a plurality of nodes on the hair strand, and the first hair strand set is different from the second hair strand set.
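As an illustration of the pipeline recited in claim 1 (illustrative only, not part of the claims), the following is a minimal Python/PyTorch sketch under assumed design choices: each hair strand is a fixed-length polyline of 3D node coordinates, the per-strand vectors are pooled into one hairstyle hidden vector, and a decoder emits a different, fixed-size second strand set. All module names and sizes (StrandEncoder, STYLE_DIM, etc.) are hypothetical, not the patent's actual network.

import torch
import torch.nn as nn

NUM_NODES = 32    # assumed number of nodes per hair strand
STRAND_DIM = 64   # assumed hair strand hidden vector size
STYLE_DIM = 256   # assumed hairstyle hidden vector size

class StrandEncoder(nn.Module):
    # Encodes each strand's node coordinates into a strand hidden vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_NODES * 3, 128), nn.ReLU(), nn.Linear(128, STRAND_DIM))

    def forward(self, strands):  # strands: (batch, num_strands, NUM_NODES, 3)
        b, n = strands.shape[:2]
        return self.net(strands.reshape(b, n, -1))  # (batch, num_strands, STRAND_DIM)

class StyleEncoder(nn.Module):
    # Pools the per-strand hidden vectors into one hairstyle hidden vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STRAND_DIM, STYLE_DIM), nn.ReLU(), nn.Linear(STYLE_DIM, STYLE_DIM))

    def forward(self, strand_vecs):  # (batch, num_strands, STRAND_DIM)
        return self.net(strand_vecs).mean(dim=1)  # order-invariant pooling -> (batch, STYLE_DIM)

class StyleDecoder(nn.Module):
    # Decodes the hairstyle hidden vector into a second, different strand set.
    def __init__(self, num_out_strands=100):
        super().__init__()
        self.num_out = num_out_strands
        self.net = nn.Linear(STYLE_DIM, num_out_strands * NUM_NODES * 3)

    def forward(self, style_vec):  # (batch, STYLE_DIM)
        return self.net(style_vec).reshape(-1, self.num_out, NUM_NODES, 3)

first_set = torch.randn(1, 500, NUM_NODES, 3)           # first hair strand set
style_vec = StyleEncoder()(StrandEncoder()(first_set))  # first hairstyle hidden vector
second_set = StyleDecoder()(style_vec)                  # (1, 100, NUM_NODES, 3): new strand set

Because the hairstyle hidden vector is decoded rather than copied, the output strand set need not match the input strand set, which is exactly the freedom claim 1 requires.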
2. The method of claim 1, wherein the encoding based on the plurality of pieces of hair strand data corresponding to the first hair strand set comprises:
encoding based on the hair strand data corresponding to each hair strand in the first hair strand set to obtain a hair strand hidden vector corresponding to the hair strand; and
encoding based on a plurality of hair strand hidden vectors corresponding to the first hair strand set to obtain the first hairstyle hidden vector.
3. The method of claim 2, wherein the plurality of nodes on each hair strand in the first hair strand set comprise a root node, and the encoding based on the hair strand data corresponding to each hair strand in the first hair strand set comprises:
for each hair strand of the plurality of hair strands:
obtaining a position enhancement vector corresponding to the hair strand based on the coordinates of the root node of the hair strand, wherein the position enhancement vector is obtained by encoding the coordinates of the root node of the hair strand, and a dimension of the position enhancement vector is greater than a dimension of the coordinates of the root node; and
encoding the position enhancement vector corresponding to the hair strand and the hair strand data of the hair strand to obtain the hair strand hidden vector corresponding to the hair strand.
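The position enhancement in claims 3-4 resembles the sinusoidal positional encodings used in implicit-representation networks; the claims only require that the root coordinates be encoded into a higher-dimensional vector. A minimal Python sketch under that assumption (the frequency choice is illustrative):

import numpy as np

def position_enhancement(root_xyz: np.ndarray, num_freqs: int = 4) -> np.ndarray:
    # Lifts a (3,) root coordinate to a (3 * 2 * num_freqs,) vector, here 24 > 3,
    # by evaluating sinusoids at several frequencies.
    freqs = 2.0 ** np.arange(num_freqs)   # 1, 2, 4, 8
    scaled = np.outer(freqs, root_xyz)    # (num_freqs, 3)
    return np.concatenate([np.sin(scaled), np.cos(scaled)], axis=0).ravel()

root = np.array([0.10, 1.62, 0.05])  # a strand's root node coordinates
enh = position_enhancement(root)     # dimension 24, greater than 3 as claimed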
4. The method of claim 3, wherein the plurality of nodes on each hair strand in the first hair strand set comprise a root node, and the encoding based on the plurality of hair strand hidden vectors corresponding to the first hair strand set comprises:
obtaining, based on the coordinates of the root node of each hair strand in the first hair strand set, a position enhancement vector corresponding to the hair strand, wherein a dimension of the position enhancement vector is greater than a dimension of the coordinates of the root node; and
encoding a plurality of position enhancement vectors and the plurality of hair strand hidden vectors corresponding to the first hair strand set to obtain the first hairstyle hidden vector.
5. The method of any one of claims 1-4, wherein the obtaining, based on the first hairstyle hidden vector, a three-dimensional hairstyle corresponding to the first head model comprises:
obtaining a target hairstyle hidden vector based on the first hairstyle hidden vector; and
decoding the target hairstyle hidden vector to obtain the three-dimensional hairstyle.
6. The method of claim 5, further comprising:
obtaining second hairstyle data corresponding to the first head model, wherein the second hairstyle data comprises hair strand data of each hair strand in a third hair strand set, and the hair strand data comprises coordinates of each of a plurality of nodes on the hair strand; and
encoding based on a plurality of pieces of hair strand data corresponding to the third hair strand set to obtain a second hairstyle hidden vector corresponding to the second hairstyle data,
wherein the obtaining a target hairstyle hidden vector based on the first hairstyle hidden vector comprises:
obtaining the target hairstyle hidden vector based on the first hairstyle hidden vector and the second hairstyle hidden vector.
7. The method of claim 6, wherein the obtaining the target hairstyle hidden vector based on the first hairstyle hidden vector and the second hairstyle hidden vector comprises:
obtaining weighting coefficients respectively corresponding to the first hairstyle hidden vector and the second hairstyle hidden vector; and
determining, as the target hairstyle hidden vector, the sum of the products of the first hairstyle hidden vector and the second hairstyle hidden vector with their respective weighting coefficients.
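A worked illustration of claim 7 (the weights here are arbitrary assumptions; the claim does not fix how they are obtained): with weighting coefficients w1 and w2, the target hairstyle hidden vector is w1*z1 + w2*z2, which interpolates between the two encoded hairstyles.

import numpy as np

z1 = np.random.randn(256)     # first hairstyle hidden vector (assumed size 256)
z2 = np.random.randn(256)     # second hairstyle hidden vector
w1, w2 = 0.7, 0.3             # assumed weighting coefficients
z_target = w1 * z1 + w2 * z2  # sum of products -> target hairstyle hidden vector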
8. The method of claim 5, wherein the decoding the target hairstyle hidden vector to obtain the three-dimensional hairstyle comprises:
obtaining, based on the target hairstyle hidden vector, a hair strand hidden vector corresponding to each hair strand in the second hair strand set; and
for each hair strand in the second hair strand set, obtaining the coordinates of each of the plurality of nodes on the hair strand based on the hair strand hidden vector corresponding to the hair strand.
9. The method of claim 8, wherein the plurality of nodes on each hair strand in the second hair strand set comprise a root node, and the obtaining the coordinates of each of the plurality of nodes on the hair strand based on the hair strand hidden vector corresponding to the hair strand comprises:
obtaining coordinates of the root node on the hair strand;
obtaining a position enhancement vector corresponding to the hair strand based on the coordinates of the root node on the hair strand, wherein the position enhancement vector is obtained by encoding the coordinates of the root node of the hair strand, and a dimension of the position enhancement vector is greater than a dimension of the coordinates of the root node; and
obtaining the coordinates of each of the plurality of nodes on the hair strand based on the position enhancement vector and the hair strand hidden vector corresponding to the hair strand.
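Claims 8-9 mirror the encoding side. As a hypothetical Python sketch, the strand hidden vector can be concatenated with the root node's position enhancement vector and mapped back to node coordinates (layer sizes are assumptions, not values from the patent):

import torch
import torch.nn as nn

NUM_NODES, STRAND_DIM, ENH_DIM = 32, 64, 24  # assumed sizes

strand_decoder = nn.Sequential(
    nn.Linear(STRAND_DIM + ENH_DIM, 128), nn.ReLU(),
    nn.Linear(128, NUM_NODES * 3))

strand_vec = torch.randn(1, STRAND_DIM)  # from decoding the target hairstyle hidden vector
enh_vec = torch.randn(1, ENH_DIM)        # position enhancement of the strand's root node
nodes = strand_decoder(torch.cat([strand_vec, enh_vec], dim=-1)).reshape(1, NUM_NODES, 3)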
10. A three-dimensional hairstyle generation apparatus, comprising:
a first hairstyle data acquisition unit configured to obtain first hairstyle data corresponding to a preset first head model, the first hairstyle data comprising hair strand data of each hair strand in a first hair strand set, the hair strand data comprising coordinates of each of a plurality of nodes on the hair strand;
a first encoding unit configured to encode based on a plurality of pieces of hair strand data corresponding to the first hair strand set to obtain a first hairstyle hidden vector corresponding to the first hairstyle data; and
a three-dimensional hairstyle acquisition unit configured to obtain, based on the first hairstyle hidden vector, a three-dimensional hairstyle corresponding to the first head model, the three-dimensional hairstyle comprising hair strand data of each hair strand in a second hair strand set, the hair strand data comprising coordinates of each of a plurality of nodes on the hair strand, wherein the first hair strand set is different from the second hair strand set.
11. The apparatus of claim 10, wherein the first encoding unit comprises:
a hair strand encoding unit configured to encode based on the hair strand data corresponding to each hair strand in the first hair strand set to obtain a hair strand hidden vector corresponding to the hair strand; and
a hairstyle encoding unit configured to encode based on a plurality of hair strand hidden vectors corresponding to the first hair strand set to obtain the first hairstyle hidden vector.
12. The apparatus of claim 11, wherein the plurality of nodes on each hair strand in the first hair strand set comprise a root node, and the hair strand encoding unit comprises:
a first enhancement unit configured to obtain, for each hair strand of the plurality of hair strands, a position enhancement vector corresponding to the hair strand based on the coordinates of the root node of the hair strand, the position enhancement vector being obtained by encoding the coordinates of the root node of the hair strand, a dimension of the position enhancement vector being greater than a dimension of the coordinates of the root node; and
an encoding subunit configured to encode, for each hair strand of the plurality of hair strands, the position enhancement vector corresponding to the hair strand and the hair strand data of the hair strand to obtain the hair strand hidden vector corresponding to the hair strand.
13. The apparatus of claim 11, wherein the plurality of nodes on each hair strand in the first hair strand set comprise a root node, and the hairstyle encoding unit comprises:
a second enhancement unit configured to obtain a position enhancement vector corresponding to each hair strand in the first hair strand set based on the coordinates of the root node of the hair strand, the position enhancement vector being obtained by encoding the coordinates of the root node of the hair strand, a dimension of the position enhancement vector being greater than a dimension of the coordinates of the root node; and
an encoding subunit configured to encode a plurality of position enhancement vectors and the plurality of hair strand hidden vectors corresponding to the first hair strand set to obtain the first hairstyle hidden vector.
14. The apparatus of any one of claims 10-13, wherein the three-dimensional hairstyle acquisition unit comprises:
a target hairstyle hidden vector acquisition unit configured to obtain a target hairstyle hidden vector based on the first hairstyle hidden vector; and
a decoding unit configured to decode the target hairstyle hidden vector to obtain the three-dimensional hairstyle.
15. The apparatus of claim 14, further comprising:
a second hairstyle data acquisition unit configured to obtain second hairstyle data corresponding to the first head model, the second hairstyle data comprising hair strand data of each hair strand in a third hair strand set, the hair strand data comprising coordinates of each of a plurality of nodes on the hair strand; and
a second encoding unit configured to encode a plurality of pieces of hair strand data corresponding to the third hair strand set to obtain a second hairstyle hidden vector corresponding to the second hairstyle data,
wherein the target hairstyle hidden vector acquisition unit comprises:
a target hairstyle hidden vector acquisition subunit configured to obtain the target hairstyle hidden vector based on the first hairstyle hidden vector and the second hairstyle hidden vector.
16. The apparatus of claim 15, wherein the target hairstyle hidden vector acquisition subunit comprises:
a weighting coefficient acquisition unit configured to obtain weighting coefficients respectively corresponding to the first hairstyle hidden vector and the second hairstyle hidden vector; and
an acquisition subunit configured to determine, as the target hairstyle hidden vector, the sum of the products of the first hairstyle hidden vector and the second hairstyle hidden vector with their respective weighting coefficients.
17. The apparatus of claim 14, wherein the decoding unit comprises:
a hairstyle decoding unit configured to obtain, based on the target hairstyle hidden vector, a hair strand hidden vector corresponding to each hair strand in the second hair strand set; and
a hair strand decoding unit configured to obtain, for each hair strand in the second hair strand set, the coordinates of each of the plurality of nodes on the hair strand based on the hair strand hidden vector corresponding to the hair strand.
18. The apparatus of claim 17, wherein the plurality of nodes on each hair strand in the second hair strand set comprise a root node, and the hair strand decoding unit comprises:
a third enhancement unit configured to obtain coordinates of the root node on the hair strand, and to obtain a position enhancement vector corresponding to the hair strand based on the coordinates of the root node on the hair strand, the position enhancement vector being obtained by encoding the coordinates of the root node of the hair strand, a dimension of the position enhancement vector being greater than a dimension of the coordinates of the root node; and
a decoding subunit configured to obtain the coordinates of each of the plurality of nodes on the hair strand based on the position enhancement vector and the hair strand hidden vector corresponding to the hair strand.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
CN202211047136.9A 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium Active CN115409922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211047136.9A CN115409922B (en) 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115409922A CN115409922A (en) 2022-11-29
CN115409922B (en) 2023-08-29

Family ID: 84161724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211047136.9A Active CN115409922B (en) 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium

Country Status (1)

Country: CN — CN115409922B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030185A (en) * 2022-12-02 2023-04-28 北京百度网讯科技有限公司 Three-dimensional hairline generating method and model training method
CN115619981B (en) * 2022-12-20 2023-04-11 北京百度网讯科技有限公司 Three-dimensional hairstyle generation method and model training method
CN115661375B (en) * 2022-12-27 2023-04-07 北京百度网讯科技有限公司 Three-dimensional hair style generation method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019226549A1 (en) * 2018-05-22 2019-11-28 Magic Leap, Inc. Computer generated hair groom transfer tool
US11074751B2 (en) * 2018-12-04 2021-07-27 University Of Southern California 3D hair synthesis using volumetric variational autoencoders
US11443057B2 (en) * 2019-10-24 2022-09-13 At&T Intellectual Property I, L.P. Encoding and concealing information using deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002553A (en) * 2018-08-08 2018-12-14 北京旷视科技有限公司 Construction method, device, electronic equipment and the computer-readable medium of Hair model
US11030786B1 (en) * 2019-08-05 2021-06-08 Snap Inc. Hair styles system for rendering hair strands based on hair spline data
CN111161405A (en) * 2019-12-24 2020-05-15 东南大学 Three-dimensional reconstruction method for animal hair
CN111583384A (en) * 2020-04-13 2020-08-25 华南理工大学 Hair reconstruction method based on adaptive octree hair convolutional neural network
CN112862807A (en) * 2021-03-08 2021-05-28 网易(杭州)网络有限公司 Data processing method and device based on hair image
CN114723888A (en) * 2022-04-08 2022-07-08 北京百度网讯科技有限公司 Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114882173A (en) * 2022-04-26 2022-08-09 浙江大学 3D monocular hair modeling method and device based on implicit expression

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Keyu Wu et al., "NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations," arXiv, pp. 1-11. *

Also Published As

Publication number Publication date
CN115409922A (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN115409922B (en) Three-dimensional hairstyle generation method, device, electronic equipment and storage medium
CN113313650B (en) Image quality enhancement method, device, equipment and medium
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN112967356A (en) Image filling method and device, electronic device and medium
CN115661375B (en) Three-dimensional hair style generation method and device, electronic equipment and storage medium
CN115879469B (en) Text data processing method, model training method, device and medium
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN115600646B (en) Language model training method, device, medium and equipment
CN114119935B (en) Image processing method and device
CN116401462A (en) Interactive data analysis method and system applied to digital sharing
CN116245998A (en) Rendering map generation method and device, and model training method and device
CN114140547B (en) Image generation method and device
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN114119154A (en) Virtual makeup method and device
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN114880580A (en) Information recommendation method and device, electronic equipment and medium
CN114998963A (en) Image detection method and method for training image detection model
CN114201043A (en) Content interaction method, device, equipment and medium
CN115619981B (en) Three-dimensional hairstyle generation method and model training method
CN116385641B (en) Image processing method and device, electronic equipment and storage medium
CN116228897B (en) Image processing method, image processing model and training method
CN114120412B (en) Image processing method and device
CN115761855B (en) Face key point information generation, neural network training and three-dimensional face reconstruction method
CN116205819B (en) Character image generation method, training method and device of deep learning model
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant