CN115409922A - Three-dimensional hairstyle generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115409922A
Authority
CN
China
Prior art keywords
hair
style
data
hidden vector
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211047136.9A
Other languages
Chinese (zh)
Other versions
CN115409922B (en)
Inventor
彭昊天
陈睿智
赵晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211047136.9A
Publication of CN115409922A
Application granted
Publication of CN115409922B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The disclosure provides a three-dimensional hairstyle generation method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence, in particular to augmented reality, virtual reality, computer vision, deep learning, and the like, and applicable to scenarios such as virtual digital humans and the metaverse. The implementation scheme is as follows: obtaining first hair style data corresponding to a preset first head model, wherein the first hair style data comprises hair data of each hair in a first hair set, and the hair data comprises coordinates of each node in a plurality of nodes on the hair; encoding based on a plurality of hair data corresponding to the first hair set to obtain a first hair style hidden vector corresponding to the first hair style data; and obtaining a three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector, the three-dimensional hairstyle comprising hair data of each hair in a second hair set, the hair data comprising coordinates of each node in a plurality of nodes on the hair.

Description

Three-dimensional hairstyle generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to augmented reality, virtual reality, computer vision, deep learning, and the like, which may be applied to scenarios such as virtual digital humans and the metaverse, and specifically to a three-dimensional hairstyle generation method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline of making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it spans technologies at both the hardware and software levels. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technology, and the like.
Three-dimensional avatars have wide application value in user scenarios such as social networking, livestreaming, and gaming. Generating an avatar from a single face image using artificial intelligence provides each user with a personalized, customized avatar that effectively meets individual needs, and has broad application prospects.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been acknowledged in any prior art, unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a three-dimensional hair style generation method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a three-dimensional hair style generation method, including: obtaining first hair style data corresponding to a preset first head model, wherein the first hair style data comprises hair data of each hair in a first hair set, and the hair data comprises coordinates of each node in a plurality of nodes on the hair; encoding based on a plurality of hair data corresponding to the first hair set to obtain a first hair style hidden vector corresponding to the first hair style data; and obtaining a three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector, wherein the three-dimensional hairstyle comprises hair data of each hair in a second hair set, the hair data comprises coordinates of each node in a plurality of nodes on the hair, and the first hair set is different from the second hair set.
According to another aspect of the present disclosure, there is provided a three-dimensional hair style generation apparatus, comprising: a first hair style data obtaining unit configured to obtain first hair style data corresponding to a preset first head model, the first hair style data including hair data of each hair in a first hair set, the hair data including coordinates of each node in a plurality of nodes on the hair; a first encoding unit configured to encode based on a plurality of hair data corresponding to the first hair set to obtain a first hair style hidden vector corresponding to the first hair style data; and a three-dimensional hairstyle obtaining unit configured to obtain a three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector, the three-dimensional hairstyle including hair data of each hair in a second hair set, the hair data including coordinates of each node in a plurality of nodes on the hair, wherein the first hair set is different from the second hair set.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, carries out the method according to embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, a data-driven three-dimensional hairstyle generation technique can be implemented that, from a small amount of hair data, enables reconstruction of three-dimensional hair, generation of different types of three-dimensional hairstyles, and the like.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Fig. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with embodiments of the present disclosure;
fig. 2 illustrates a flow diagram of a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 3 shows a flowchart of a process of encoding based on a plurality of hair data corresponding to a first set of hair in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a hair autoencoder network in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 5 shows a flowchart of a process of encoding based on the hair data corresponding to each hair in the first hair set in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 6 shows a flowchart of a process of encoding based on a plurality of hair hidden vectors corresponding to the first hair set in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a hair style autoencoder network in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 8A and 8B are schematic diagrams respectively illustrating an original hairstyle corresponding to a training data set of the hair style autoencoder network and the three-dimensional hairstyle generated by that network, in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 9 is a flowchart illustrating a process of obtaining a three-dimensional hairstyle corresponding to a first head model based on a first hairstyle hidden vector in a three-dimensional hairstyle generating method according to an embodiment of the present disclosure;
fig. 10 illustrates a flow chart of a three-dimensional hair style generation method according to an embodiment of the present disclosure;
fig. 11 is a flowchart illustrating a process of obtaining a target hair style hidden vector based on a first hair style hidden vector and a second hair style hidden vector in a three-dimensional hair style generation method according to an embodiment of the present disclosure;
FIG. 12 illustrates a block diagram of a three-dimensional hair style generation apparatus according to an embodiment of the present disclosure;
FIG. 13 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the three-dimensional hair style generation method to be performed.
In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein, and is not intended to be limiting.
The user may receive the generated three-dimensional hairstyle using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptops), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various Mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, windows Phone, android. Portable handheld devices may include cellular telephones, smart phones, tablets, personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 can include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some embodiments, the server 120 may be a server of a distributed system or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The database 130 may be of different types. In certain embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to the commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the related art, the three-dimensional hairstyle of an avatar is often generated from a single face image, which must be parsed to achieve this capability. However, a single face image lacks a great deal of information: it usually shows neither the sides nor the back of the hairstyle, leaving hidden, invisible hair regions, so a complete and accurate three-dimensional hairstyle cannot be generated.
According to an aspect of the present disclosure, a three-dimensional hairstyle generation method is provided. As shown in fig. 2, a three-dimensional hair style generation method 200 according to some embodiments of the present disclosure includes:
step S210: obtaining first hair style data corresponding to a preset first head model, wherein the first hair style data comprises hair data of each hair in a first hair set, and the hair data comprises coordinates of each node in a plurality of nodes on the hair;
step S220: encoding based on a plurality of hair data corresponding to the first hair set to obtain a first hair style hidden vector corresponding to the first hair style data; and
step S230: obtaining a three-dimensional hair style corresponding to the first head model based on the first hair style hidden vector, wherein the three-dimensional hair style comprises hair data of each hair in a second hair set, and the hair data comprises coordinates of each node in a plurality of nodes on the hair, and the first hair set is different from the second hair set.
A first hair style hidden vector is obtained by encoding based on the hair data in the first hair style data, and a three-dimensional hairstyle is obtained based on the first hair style hidden vector. Because the three-dimensional hairstyle comprises the hair data of each hair in the second hair set, reconstruction of the three-dimensional hairstyle is realized; and because the reconstructed hairstyle is characterized by its hairs, reconstruction of three-dimensional hair is realized as well.
Meanwhile, because the second hair set in the generated three-dimensional hairstyle differs from the first hair set, a three-dimensional hairstyle containing more hairs can be generated from a small amount of hair data, enabling inpainting reconstruction of three-dimensional hair, super-resolution three-dimensional hair reconstruction, hairstyle editing, and the like.
In some embodiments, the first hair style data is obtained from an initial database. The preset first head model is an arbitrary three-dimensional head model, for example a head model chosen by a user, and includes the coordinates of various points on the head surface. The first head model can carry a variety of hairstyles, such as long straight hair, long curly hair, short hair, medium-length hair, side-parted hair, and the like, with each hairstyle having corresponding hair style data.
According to embodiments of the present disclosure, the hair data of a given hairstyle on the first head model lie at similar distances, close to one another, in the hidden vector space. Exploiting this property, features of the hairstyle hidden-vector space are extracted to obtain the hairstyle's expression in that space and generate a hair style hidden vector, from which a three-dimensional hairstyle is generated. This makes data-driven generation of three-dimensional hair possible and compensates well for the hair information missing from a single face image.
In some embodiments, the first hair set may include the hair located on the front face of the first head model, hair located elsewhere on the first head model, or all of the hair on the first head model.
In some embodiments, each hair may be a node sequence consisting of a plurality of nodes arranged in order along the hair, and the hair data of each hair may include a coordinate sequence consisting of the coordinates of the respective nodes in the node sequence. For example, if a hair comprises a node sequence of m nodes arranged in order, where m is a positive integer, then the hair data of that hair comprises the coordinate sequence of those m nodes, that is, m coordinates arranged in order.
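For illustration, such a node sequence can be stored as an m × 3 coordinate array per hair; the sketch below shows one possible in-memory layout (the counts n and m and all names are hypothetical, not fixed by the disclosure):

```python
import numpy as np

# Hypothetical sizes: n hairs in the first hair set, m nodes per hair.
n, m = 1200, 20

# One hair: m nodes ordered from the hair root to the tip, each an (x, y, z).
hair = np.zeros((m, 3), dtype=np.float32)

# First hair style data: the coordinate sequence of every hair in the set.
first_hair_style_data = np.zeros((n, m, 3), dtype=np.float32)
```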
In some embodiments, the plurality of nodes includes a root node located at the scalp of the head model and a tip node located at the end of the hair.
In some embodiments, the hair style hidden vector is obtained by directly encoding a plurality of hair data corresponding to the first set of hair.
The above process works for hair style data of small data volume. However, hair style data often has a large data volume. For example, a first hair set with 1,200 hairs and 20 nodes per hair contains 24,000 nodes; in a neural network, a fully connected layer over this input would form an oversized 24,000 × 24,000 matrix, far too much data to process. Meanwhile, because the spatial relationships between hairs are not uniform, a convolutional neural network layer cannot be used to simplify the network and handle large-scale data. As a result, the hair style data cannot be processed directly and no hair style hidden vector can be obtained.
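Quick arithmetic makes the scale problem concrete (using the example counts above):

```python
nodes = 1200 * 20          # 24,000 nodes in the first hair set
weights = nodes * nodes    # a dense 24000 x 24000 fully connected layer
print(f"{weights:,}")      # 576,000,000 weights: far too large to handle
```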
In some embodiments, as shown in fig. 3, the encoding based on the plurality of hair data corresponding to the first hair set in step S220 includes:
step S310: encoding based on the hair data corresponding to each hair in the first hair set to obtain a hair hidden vector corresponding to that hair; and
step S320: encoding based on a plurality of hair hidden vectors corresponding to the first hair set to obtain the first hair style hidden vector.
That is, the hair data of each single hair is first encoded to obtain a hair hidden vector, and the hair style hidden vector is then obtained based on those hair hidden vectors.
In some embodiments, the encoding based on the hair data corresponding to each hair in the first hair set is implemented with a variational autoencoder technique.
Referring to fig. 4, a network architecture of a hair autoencoder according to some embodiments of the present disclosure is shown. The hair autoencoder network 400 includes an encoder 410 and a decoder 420: input data passes through the encoder 410 to generate a hidden vector (latent code), and the decoder 420 restores the hidden vector back to the original data, so the network can be trained unsupervised, without data labels.
In an embodiment according to the present disclosure, the input data of the hair encoder 410 is the hair data HairStrand_{x,i} of the ith hair in the hair style data x, and the output data is the hidden vector Latent_{x,i} of the ith hair. HairStrand_{x,i} includes the coordinates Node_{x,i,j} of each node on the ith hair in the hair style data x, where i is the hair index, 0 ≤ i ≤ n; j is the index of a node on the ith hair, 0 ≤ j ≤ m; n is the number of hairs in the first hair set, a positive integer; and m is the number of nodes on the ith hair, also a positive integer.
The hair autoencoder network 400 is trained by feeding the hair data of the n hairs in the hair style data into the network one at a time, so that the network makes n predictions for the n hairs. The hair encoder 410 and hair decoder 420 both start as randomly initialized networks; Latent_{x,i}, the feature expression of the ith hair in hair style x, does not exist initially, and as the encoder 410 and decoder 420 are trained jointly, the hidden vector space acquires the ability to extract hair features and Latent_{x,i} gradually takes shape.
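As a concrete sketch of this stage, the hair autoencoder below mirrors the structure of fig. 4. The plain MLP layers, layer widths, and latent size are assumptions, and the variational reparameterization and KL term of a full variational autoencoder are omitted for brevity; the disclosure does not fix the architecture.

```python
import torch
import torch.nn as nn

class HairAutoEncoder(nn.Module):
    """Encodes one hair's (m, 3) node coordinates into a hidden vector and
    decodes it back to coordinates, trained by reconstruction alone."""
    def __init__(self, num_nodes: int = 20, latent_dim: int = 64):
        super().__init__()
        in_dim = num_nodes * 3  # flattened coordinate sequence
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, hairs: torch.Tensor):
        flat = hairs.flatten(start_dim=1)             # (n, m * 3)
        latent = self.encoder(flat)                   # Latent_{x,i} per hair
        recon = self.decoder(latent).view_as(hairs)
        return latent, recon

# Unsupervised training step: each hair is reconstructed from its own code.
model = HairAutoEncoder()
hairs = torch.randn(1200, 20, 3)                      # hypothetical hair data
latent, recon = model(hairs)
loss = nn.functional.mse_loss(recon, hairs)
loss.backward()
```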
In some embodiments, for each hair in the first hair set, the node coordinates in the hair data corresponding to the hair are encoded to obtain the hair hidden vector corresponding to that hair.
In some embodiments, the plurality of nodes on each hair in the first hair set includes a hair root node, and as shown in fig. 5, the encoding based on the hair data corresponding to each hair in the first hair set in step S310 includes:
step S510: for each hair in the plurality of hair, obtaining a position enhancement vector corresponding to the hair based on the coordinates of the hair root node of the hair, wherein the dimension of the position enhancement vector is greater than that of the coordinates of the hair root node; and
step S520: and for each hair in the plurality of hair, coding the position enhancement vector corresponding to the hair and the hair data of the hair to obtain the hair hidden vector corresponding to the hair.
Deriving the position enhancement vector from the coordinates of the hair root node implements positional encoding, which improves the learnability of the hair data in a deep learning network. Because the position enhancement vector has a higher dimension than the coordinates, it carries more position information, so the hair hidden vector obtained by encoding the position enhancement vector together with the hair data is more accurate.
With continued reference to fig. 4, a position encoder (Positional Encoding) 430 precedes the hair encoder 410; its input data is the coordinate Root_i of the hair root node of the ith hair, and its output data is the position enhancement vector.
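One common way to realize such a position encoder is sinusoidal frequency encoding; this particular form is an assumption for illustration, since the disclosure only requires that the position enhancement vector have more dimensions than the root coordinates:

```python
import torch

def position_enhance(root: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Map 3-D hair root coordinates to a higher-dimensional enhancement
    vector via sinusoids at increasing frequencies (NeRF-style)."""
    bands = 2.0 ** torch.arange(num_freqs)        # frequencies 1, 2, 4, ...
    scaled = root.unsqueeze(-1) * bands           # (..., 3, num_freqs)
    enhanced = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1)
    return enhanced.flatten(start_dim=-2)         # (..., 3 * 2 * num_freqs)

root = torch.tensor([0.1, 1.6, 0.0])              # hypothetical Root_i
print(position_enhance(root).shape)               # torch.Size([36]) > 3
```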
In some embodiments, after a plurality of hair hidden vectors corresponding to the plurality of hairs in the first hair set are obtained, these hair hidden vectors are encoded to obtain the hair style hidden vector.
In some embodiments, the plurality of nodes on each hair in the first hair set includes a hair root node, and as shown in fig. 6, the encoding based on the plurality of hair hidden vectors corresponding to the first hair set in step S320 includes:
step S610: obtaining a position enhancement vector corresponding to each hair in the first hair set based on the coordinates of the hair root node of each hair in the first hair set, wherein the dimension of the position enhancement vector is greater than that of the coordinates of the hair root node; and
step S620: encoding the corresponding plurality of position enhancement vectors and plurality of hair hidden vectors of the first hair set to obtain the first hair style hidden vector.
Deriving the position enhancement vector from the coordinates of the hair root node implements positional encoding, which improves the learnability of the hair data in a deep learning network. Because the position enhancement vector has a higher dimension than the coordinates, it carries more position information, so the hair style hidden vector obtained by encoding the position enhancement vectors together with the hair hidden vectors is more accurate.
In some embodiments, encoding the corresponding position enhancement vectors and hair hidden vectors of the first hair set is accomplished by a variational autoencoder technique.
Referring to fig. 7, a network architecture of a hair style autoencoder according to some embodiments of the present disclosure is shown. The hair style autoencoder network 700 includes a plurality of hair encoders 710, a plurality of position encoders 720 corresponding to the hair encoders 710, and a hair style encoder 730.
Each hair encoder 710 may employ the hair encoder 410 of the hair autoencoder network 400 shown in fig. 4. The input data of each position encoder 720 is the coordinates of the root node of the corresponding hair, and the output data is the position enhancement vector. The input data of the hair style encoder 730 is the fusion vector of each hair's hidden vector and its corresponding position enhancement vector, and the output data is the hair style hidden vector Latent_x.
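A sketch of the role of the hair style encoder 730 follows. Concatenation as the fusion and mean pooling over hairs are assumptions made for illustration; the disclosure does not specify how the fusion vector is formed or aggregated.

```python
import torch
import torch.nn as nn

class HairStyleEncoder(nn.Module):
    """Fuses each hair's hidden vector with its position enhancement vector
    and aggregates the set into one hair style hidden vector Latent_x."""
    def __init__(self, hair_dim: int = 64, pos_dim: int = 36, style_dim: int = 256):
        super().__init__()
        self.per_hair = nn.Sequential(nn.Linear(hair_dim + pos_dim, 256), nn.ReLU())
        self.to_style = nn.Linear(256, style_dim)

    def forward(self, hair_latents: torch.Tensor, pos_vectors: torch.Tensor):
        fused = torch.cat([hair_latents, pos_vectors], dim=-1)  # (n, 64 + 36)
        feats = self.per_hair(fused)                            # (n, 256)
        return self.to_style(feats.mean(dim=0))                 # order-invariant
```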
In some embodiments, obtaining the three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector in step S230 comprises: decoding the first hair style hidden vector to obtain the three-dimensional hairstyle.
In some embodiments, decoding the first hair style hidden vector comprises: first, performing a first decoding on the first hair style hidden vector to obtain a hair hidden vector corresponding to each hair in the second hair set; and then performing a second decoding on each of the plurality of hair hidden vectors corresponding to the second hair set to obtain the coordinates of each node in the plurality of nodes on each hair.
With continued reference to fig. 7, the hair style autoencoder network 700 further includes a hair style decoder 740 and a plurality of hair decoders 750, where each hair decoder may employ the hair decoder 420 shown in fig. 4.
After the first hair style hidden vector is obtained, it is input to the hair style decoder 740 to obtain a plurality of hair hidden vectors for a plurality of hairs; these hair hidden vectors are then input to the corresponding hair decoders 750, and each hair decoder decodes its hair hidden vector to obtain the hair data HairStrand'_{x,i} of the corresponding hair, the hair data including the coordinates of each node in the plurality of nodes on the hair.
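The decode path can be sketched symmetrically; the layer sizes and the fixed hair count of the second hair set are assumptions for illustration:

```python
import torch
import torch.nn as nn

class HairStyleDecoder(nn.Module):
    """First decoding: one hair style hidden vector to per-hair hidden
    vectors; second decoding: each hair hidden vector to node coordinates."""
    def __init__(self, style_dim: int = 256, hair_dim: int = 64,
                 num_hairs: int = 10000, num_nodes: int = 20):
        super().__init__()
        self.num_hairs, self.hair_dim = num_hairs, hair_dim
        self.style_decoder = nn.Linear(style_dim, num_hairs * hair_dim)
        self.hair_decoder = nn.Sequential(
            nn.Linear(hair_dim, 256), nn.ReLU(), nn.Linear(256, num_nodes * 3))

    def forward(self, style_latent: torch.Tensor) -> torch.Tensor:
        hair_latents = self.style_decoder(style_latent)
        hair_latents = hair_latents.view(self.num_hairs, self.hair_dim)
        coords = self.hair_decoder(hair_latents)      # (n', m * 3)
        return coords.view(self.num_hairs, -1, 3)     # HairStrand'_{x,i}
```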
In one embodiment according to the present disclosure, after the hair style autoencoder network is trained, the training data set is passed through the hair encoders and the hair style encoder of the network to generate a hair style hidden vector, and the three-dimensional hairstyle is then recovered through the hair style decoder and the hair decoders. As shown in fig. 8A and 8B, the original hairstyle corresponding to each training data set (fig. 8A) is substantially identical to the three-dimensional hairstyle reconstructed from it by the network (fig. 8B), demonstrating good autoencoder performance.
In some embodiments, as shown in fig. 9, obtaining the three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector in step S230 includes:
step S910: obtaining a target hair style hidden vector based on the first hair style hidden vector; and
step S920: decoding the target hair style hidden vector to obtain the three-dimensional hairstyle.
By obtaining a target hair style hidden vector from the first hair style hidden vector and then decoding it, different three-dimensional hairstyles can be obtained from the first hair style data, extending the range of hairstyle types.
In some embodiments, the target hair style hidden vector may be a hair style hidden vector obtained by modifying the first hair style hidden vector, for example by modifying it manually.
In some embodiments, as shown in fig. 10, the three-dimensional hair style generation method according to the present disclosure further comprises:
step S1010: obtaining second hair style data corresponding to the first head model, the second hair style data comprising hair data of each hair in a third hair set on the first head model, the hair data comprising coordinates of each node in a plurality of nodes on the hair; and
step S1020: encoding a plurality of hair data corresponding to the third hair set to obtain a second hair style hidden vector corresponding to the second hair style data; and wherein step S910, obtaining a target hair style hidden vector based on the first hair style hidden vector, comprises:
and obtaining the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector.
Because the hair style data of different hairstyles on the same head model share similar characteristics, the resulting hair style hidden vectors are close in the hidden vector space, and the data are continuous in that space, so the hair style hidden vectors support editing and interpolation. By obtaining hair style data for different hairstyles on the same head model (the first and second hair style data), obtaining the hair style hidden vectors corresponding to them (the first and second hair style hidden vectors), and deriving the target hair style hidden vector from those hidden vectors, interpolation of the hidden vectors can be realized, producing additional hairstyle types different from the hairstyles corresponding to the first and second hair style data.
In some embodiments, the first hair style hidden vector and the second hair style hidden vector are fused to obtain a target hair style hidden vector.
In some embodiments, as shown in fig. 11, obtaining the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector comprises:
step S1110: obtaining weighting coefficients corresponding to the first hair style hidden vector and the second hair style hidden vector respectively; and
step S1120: determining, as the target hair style hidden vector, the sum of the products of the first hair style hidden vector and the second hair style hidden vector with their respective weighting coefficients.
Obtaining the weighting coefficients corresponding to the first and second hair style hidden vectors and deriving the target hair style hidden vector from them realizes interpolation of the hair style hidden vector while keeping the amount of data processing small.
In some embodiments, the sum of the weighting factor corresponding to the first hair style hidden vector and the weighting factor corresponding to the second hair style hidden vector is 1.
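The weighted combination itself is a one-liner; a sketch (the latent dimension is hypothetical):

```python
import torch

def interpolate_styles(latent_a: torch.Tensor, latent_b: torch.Tensor,
                       w_a: float = 0.5) -> torch.Tensor:
    """Target hair style hidden vector as a weighted sum; the two weights
    sum to 1, matching the embodiment above."""
    return w_a * latent_a + (1.0 - w_a) * latent_b

# Hypothetical first and second hair style hidden vectors.
target = interpolate_styles(torch.randn(256), torch.randn(256), w_a=0.3)
```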
In some embodiments, decoding the target hair style hidden vector to obtain the three-dimensional hairstyle comprises:
obtaining a hair hidden vector corresponding to each hair in the second hair set based on the target hair style hidden vector; and
for each hair in the second hair set, obtaining the coordinates of each node in a plurality of nodes on the hair based on the hair hidden vector corresponding to the hair.
Decoding the target hair style hidden vector yields the hair hidden vector corresponding to each hair in the second hair set, and because the second hair set differs from the first hair set, the hair data can be expanded.
In some embodiments, the number of hairs in the first set of hairs is 1000, and the number of hairs in the second set of hairs is 10000.
In some embodiments, the first hair set is the hair on the front face of the first head model, and the second hair set is all of the hair on the first head model.
In some embodiments, the target hair style hidden vector is input to the hair style decoder 740 shown in fig. 7 to obtain the hair hidden vector of each hair in the second hair set.
In some embodiments, the plurality of nodes on each hair in the second hair set includes a hair root node, and obtaining the coordinates of each node in the plurality of nodes on the hair based on the hair hidden vector comprises:
obtaining coordinates of a hair root node on the hair;
obtaining a position enhancement vector corresponding to the hair based on the coordinates of the hair root node on the hair, wherein the dimension of the position enhancement vector is greater than that of the coordinates of the hair root node; and
obtaining the coordinates of each node in the plurality of nodes on the hair based on the position enhancement vector and the hair hidden vector corresponding to the hair.
Deriving the position enhancement vector from the coordinates of the hair root node implements positional encoding, which improves the learnability of the hair data in a deep learning network. Because the position enhancement vector has a higher dimension than the coordinates, it carries more position information, so the node coordinates of the corresponding hair, decoded from the position enhancement vector together with the corresponding hair hidden vector, are more accurate.
In some embodiments, the coordinates of the root node of each hair in the second hair set are obtained based on the coordinates of points in the scalp region of the first head model.
In some embodiments, the position enhancement vector and the hair hidden vector corresponding to each hair are fused to obtain a fusion vector, and the fusion vector is input to a hair decoder 750 shown in fig. 7 to obtain the hair data of the hair, the hair data including the coordinates of each node on the hair.
The three-dimensional hairstyle generation method of the present disclosure effectively builds a deep learning capability for understanding three-dimensional hairstyle data, makes data-driven three-dimensional hairstyle generation possible, and compensates well for the hairstyle information missing when a three-dimensional hairstyle is generated from a single face image. For developers working on three-dimensional hair and deep learning, the technique is simple and effective: it connects three-dimensional hair data with existing deep learning capabilities and greatly lowers the barrier to three-dimensional hair research and development.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to another aspect of the present disclosure, there is also provided a three-dimensional hair style generation apparatus. As shown in fig. 12, the apparatus 1200 includes: a first hair style data obtaining unit 1210 configured to obtain first hair style data corresponding to a preset first head model, the first hair style data including hair data of each hair in a first hair set, the hair data including coordinates of each node in a plurality of nodes on the hair; a first encoding unit 1220 configured to encode based on a plurality of hair data corresponding to the first hair set to obtain a first hair style hidden vector corresponding to the first hair style data; and a three-dimensional hairstyle obtaining unit 1230 configured to obtain a three-dimensional hairstyle corresponding to the first head model based on the first hair style hidden vector, the three-dimensional hairstyle including hair data of each hair in a second hair set, the hair data including coordinates of each node in a plurality of nodes on the hair, wherein the first hair set is different from the second hair set.
In some embodiments, the first encoding unit includes: a hair encoding unit configured to encode the hair data corresponding to each hair in the first hair set to obtain a hair hidden vector corresponding to that hair; and a hair style encoding unit configured to encode based on a plurality of hair hidden vectors corresponding to the first hair set to obtain the first hair style hidden vector.
In some embodiments, the plurality of nodes on each hair in the first hair set comprises a hair root node, and the hair encoding unit comprises: a first enhancement unit configured to obtain, for each of the plurality of hairs, a position enhancement vector corresponding to the hair based on the coordinates of the hair's root node, the dimension of the position enhancement vector being greater than that of the coordinates of the root node; and an encoding subunit configured to encode, for each of the plurality of hairs, the position enhancement vector corresponding to the hair and the hair data of the hair to obtain the hair hidden vector corresponding to the hair.
In some embodiments, the plurality of nodes on each hair in the first hair set comprises a hair root node, and the hair style encoding unit comprises: a second enhancement unit configured to obtain a position enhancement vector corresponding to each hair in the first hair set based on the coordinates of the hair's root node, the dimension of the position enhancement vector being greater than that of the coordinates of the root node; and an encoding subunit configured to encode the corresponding plurality of position enhancement vectors and plurality of hair hidden vectors of the first hair set to obtain the first hair style hidden vector.
In some embodiments, the three-dimensional hairstyle obtaining unit comprises: a target hair style hidden vector obtaining unit configured to obtain a target hair style hidden vector based on the first hair style hidden vector; and a decoding unit configured to decode the target hair style hidden vector to obtain the three-dimensional hairstyle.
In some embodiments, the apparatus further comprises: a second hair style data obtaining unit configured to obtain second hair style data corresponding to the first head model, the second hair style data including hair data of each hair in a third hair set on the first head model, the hair data including coordinates of each node in a plurality of nodes on the hair; and a second encoding unit configured to encode a plurality of hair data corresponding to the third hair set to obtain a second hair style hidden vector corresponding to the second hair style data; and wherein the target hair style hidden vector obtaining unit comprises: a target hair style hidden vector obtaining subunit configured to obtain the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector.
In some embodiments, the target hair style hidden vector obtaining subunit comprises: a weighting coefficient obtaining unit configured to obtain weighting coefficients corresponding to the first hair style hidden vector and the second hair style hidden vector respectively; and an obtaining subunit configured to determine, as the target hair style hidden vector, the sum of the products of the first and second hair style hidden vectors with their respective weighting coefficients.
In some embodiments, the decoding unit comprises: a hair style decoding unit configured to obtain a hair hidden vector corresponding to each hair in the second hair set based on the target hair style hidden vector; and a hair decoding unit configured to obtain, for each hair in the second hair set, the coordinates of each node in a plurality of nodes on the hair based on the hair hidden vector corresponding to the hair.
In some embodiments, the plurality of nodes on each hair in the second hair set comprises a hair root node, and the hair decoding unit comprises: a third enhancement unit configured to obtain the coordinates of the hair root node on the hair and obtain a position enhancement vector corresponding to the hair based on those coordinates, the dimension of the position enhancement vector being greater than that of the coordinates of the root node; and a decoding subunit configured to obtain the coordinates of each node in the plurality of nodes on the hair based on the position enhancement vector and the hair hidden vector corresponding to the hair.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 13, a block diagram of an electronic device 1300, which may be a server or a client of the present disclosure and is an example of a hardware device applicable to aspects of the present disclosure, will now be described. The term electronic device is intended to represent various forms of digital electronic computing devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the electronic device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. The RAM 1303 can also store various programs and data necessary for the operation of the electronic device 1300. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to one another via a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
A number of components in the electronic device 1300 are connected to the I/O interface 1305, including: input section 1306, output section 1307, storage section 1308, and communication section 1309. The input unit 1306 may be any type of device capable of inputting information to the electronic device 1300, and the input unit 1306 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. Output unit 1307 can be any type of device capable of presenting information and can include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Storage unit 1308 can include, but is not limited to, a magnetic disk, an optical disk. The communication unit 1309 allows the electronic device 1300 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a bluetooth (TM) device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
Computing unit 1301 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. Computing unit 1301 performs various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1308. In some embodiments, part or all of the computer program can be loaded and/or installed onto the electronic device 1300 via the ROM 1302 and/or the communication unit 1309. When loaded into RAM 1303 and executed by computing unit 1301, a computer program may perform one or more of the steps of method 200 described above. Alternatively, in other embodiments, the computing unit 1301 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order; no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (21)

1. A three-dimensional hair style generation method, comprising:
obtaining first hair style data corresponding to a preset first head model, wherein the first hair style data comprises strand data of each hair strand in a first strand set, and the strand data comprises coordinates of each node in a plurality of nodes on the hair strand;
encoding based on a plurality of strand data corresponding to the first strand set to obtain a first hair style hidden vector corresponding to the first hair style data; and
obtaining a three-dimensional hair style corresponding to the first head model based on the first hair style hidden vector, wherein the three-dimensional hair style comprises strand data of each hair strand in a second strand set, the strand data comprises coordinates of each node in a plurality of nodes on the hair strand, and the first strand set is different from the second strand set.
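A minimal sketch of the strand-based representation and encode-decode pipeline recited in claim 1, for illustration only. The strand count, the number of nodes per strand, and the tensor layout are assumptions; the claim does not fix them.

```python
import torch

NUM_STRANDS = 1000   # assumed size of the first strand set
NUM_NODES = 32       # assumed nodes sampled along each strand

# First hair style data: per-strand node coordinates (x, y, z).
first_style_data = torch.zeros(NUM_STRANDS, NUM_NODES, 3)

# Pipeline per claim 1 (encoder and decoder left abstract here):
#   first_style_data --encode--> first hair style hidden vector z
#   z --decode--> three-dimensional hair style on a second, different
#                 strand set (its own node coordinates per strand)
```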
2. The method of claim 1, wherein the encoding based on a plurality of strand data corresponding to the first strand set comprises:
encoding based on the strand data corresponding to each hair strand in the first strand set to obtain a strand hidden vector corresponding to the hair strand; and
encoding based on a plurality of strand hidden vectors corresponding to the first strand set to obtain the first hair style hidden vector.
3. The method of claim 2, wherein the plurality of nodes on each hair strand in the first strand set comprises a root node, and the encoding based on the strand data corresponding to each hair strand in the first strand set comprises:
for each hair strand in the first strand set:
obtaining a position enhancement vector corresponding to the hair strand based on the coordinates of the root node of the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
encoding the position enhancement vector corresponding to the hair strand and the strand data of the hair strand to obtain the strand hidden vector corresponding to the hair strand.
4. The method of claim 3, wherein the plurality of nodes on each hair strand in the first strand set comprises a root node, and the encoding based on a plurality of strand hidden vectors corresponding to the first strand set comprises:
obtaining, based on the coordinates of the root node of each hair strand in the first strand set, a position enhancement vector corresponding to the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
encoding a plurality of corresponding position enhancement vectors and the plurality of strand hidden vectors of the first strand set to obtain the first hair style hidden vector.
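One possible reading of claims 2-4, sketched below for illustration. The claims only require that the position enhancement vector derived from the root-node coordinates have a higher dimension than the coordinates themselves; the sinusoidal frequency encoding, the MLP encoders, and the mean-pooling aggregation are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

def position_enhance(root_xyz: torch.Tensor, num_freqs: int = 4) -> torch.Tensor:
    """Map (B, 3) root coordinates to a (B, 24) position enhancement vector.

    24 = 3 coords * 2 (sin, cos) * 4 frequencies > 3, satisfying the
    dimension requirement in claims 3 and 4.
    """
    feats = []
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * root_xyz))
        feats.append(torch.cos((2.0 ** k) * root_xyz))
    return torch.cat(feats, dim=-1)

class StrandEncoder(nn.Module):
    """Claim 3: strand data + position enhancement -> strand hidden vector."""
    def __init__(self, num_nodes: int = 32, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(num_nodes * 3 + 24, 128),
                                 nn.ReLU(), nn.Linear(128, hidden))

    def forward(self, strands: torch.Tensor, roots: torch.Tensor) -> torch.Tensor:
        # strands: (B, num_nodes, 3); roots: (B, 3)
        x = torch.cat([strands.flatten(1), position_enhance(roots)], dim=-1)
        return self.mlp(x)  # (B, hidden) strand hidden vectors

class StyleEncoder(nn.Module):
    """Claim 4: strand hidden vectors + position enhancements -> style vector."""
    def __init__(self, hidden: int = 64, style_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden + 24, 128),
                                 nn.ReLU(), nn.Linear(128, style_dim))

    def forward(self, strand_vecs: torch.Tensor, roots: torch.Tensor) -> torch.Tensor:
        per_strand = self.mlp(torch.cat([strand_vecs,
                                         position_enhance(roots)], dim=-1))
        return per_strand.mean(dim=0)  # first hair style hidden vector
```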
5. The method according to any one of claims 1-4, wherein the obtaining a three-dimensional hair style corresponding to the first head model based on the first hair style hidden vector comprises:
obtaining a target hair style hidden vector based on the first hair style hidden vector; and
decoding the target hair style hidden vector to obtain the three-dimensional hair style.
6. The method of claim 5, further comprising:
obtaining second hair style data corresponding to the first head model, wherein the second hair style data comprises strand data of each hair strand in a third strand set from the first head model, and the strand data comprises coordinates of each node in a plurality of nodes on the hair strand; and
encoding based on a plurality of strand data corresponding to the third strand set to obtain a second hair style hidden vector corresponding to the second hair style data; and wherein the obtaining a target hair style hidden vector based on the first hair style hidden vector comprises:
obtaining the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector.
7. The method of claim 6, wherein the obtaining the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector comprises:
obtaining weighting coefficients respectively corresponding to the first hair style hidden vector and the second hair style hidden vector; and
determining, as the target hair style hidden vector, the sum of the products of the first hair style hidden vector and the second hair style hidden vector with their respective weighting coefficients.
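A direct transcription of claim 7's weighted combination, for illustration. How the weighting coefficients are obtained (fixed, user-supplied, or learned) is left open by the claim; the values below are placeholders.

```python
import torch

def blend_styles(z1: torch.Tensor, z2: torch.Tensor,
                 w1: float = 0.7, w2: float = 0.3) -> torch.Tensor:
    """Target hair style hidden vector = w1 * z1 + w2 * z2 (claim 7)."""
    return w1 * z1 + w2 * z2
```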
8. The method of claim 5, wherein the decoding the target hair style hidden vector to obtain the three-dimensional hair style comprises:
obtaining a strand hidden vector corresponding to each hair strand in the second strand set based on the target hair style hidden vector; and
for each hair strand in the second strand set, obtaining the coordinates of each node in the plurality of nodes on the hair strand based on the strand hidden vector corresponding to the hair strand.
9. The method of claim 8, wherein the plurality of nodes on each hair strand in the second strand set comprises a root node, and the obtaining the coordinates of each node in the plurality of nodes on the hair strand based on the strand hidden vector corresponding to the hair strand comprises:
obtaining coordinates of the root node on the hair strand;
obtaining a position enhancement vector corresponding to the hair strand based on the coordinates of the root node on the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
obtaining the coordinates of each node in the plurality of nodes on the hair strand based on the position enhancement vector and the strand hidden vector corresponding to the hair strand.
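A sketch of the decoding path in claims 8-9 under the same assumptions as the encoder sketch above: the target style vector is expanded into per-strand hidden vectors, and each strand's node coordinates are regressed from its hidden vector together with the root-node position enhancement. The network sizes and the source of the second strand set's root coordinates (e.g., scalp samples on the head model) are assumptions.

```python
import torch
import torch.nn as nn

def position_enhance(root_xyz: torch.Tensor, num_freqs: int = 4) -> torch.Tensor:
    """Same sinusoidal enhancement as in the encoder sketch; (B, 3) -> (B, 24)."""
    return torch.cat([f((2.0 ** k) * root_xyz)
                      for k in range(num_freqs)
                      for f in (torch.sin, torch.cos)], dim=-1)

class HairStyleDecoder(nn.Module):
    def __init__(self, style_dim: int = 256, hidden: int = 64,
                 num_nodes: int = 32):
        super().__init__()
        # Claim 8: style vector -> per-strand hidden vectors.
        self.to_strand = nn.Linear(style_dim + 24, hidden)
        # Claim 9: strand hidden vector + enhancement -> node coordinates.
        self.to_nodes = nn.Sequential(nn.Linear(hidden + 24, 128), nn.ReLU(),
                                      nn.Linear(128, num_nodes * 3))
        self.num_nodes = num_nodes

    def forward(self, style_vec: torch.Tensor, roots: torch.Tensor) -> torch.Tensor:
        # style_vec: (style_dim,); roots: (B, 3) for the second strand set.
        pe = position_enhance(roots)                              # (B, 24)
        z = self.to_strand(torch.cat(
            [style_vec.expand(roots.shape[0], -1), pe], dim=-1))  # (B, hidden)
        nodes = self.to_nodes(torch.cat([z, pe], dim=-1))
        return nodes.view(-1, self.num_nodes, 3)  # (B, num_nodes, 3) coordinates
```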
10. A three-dimensional hair style generation apparatus, comprising:
a first hair style data obtaining unit configured to obtain first hair style data corresponding to a preset first head model, wherein the first hair style data comprises strand data of each hair strand in a first strand set, and the strand data comprises coordinates of each node in a plurality of nodes on the hair strand;
a first encoding unit configured to encode based on a plurality of strand data corresponding to the first strand set to obtain a first hair style hidden vector corresponding to the first hair style data; and
a three-dimensional hair style obtaining unit configured to obtain a three-dimensional hair style corresponding to the first head model based on the first hair style hidden vector, wherein the three-dimensional hair style comprises strand data of each hair strand in a second strand set, the strand data comprises coordinates of each node in a plurality of nodes on the hair strand, and the first strand set is different from the second strand set.
11. The apparatus of claim 10, wherein the first encoding unit comprises:
a strand encoding unit configured to encode based on the strand data corresponding to each hair strand in the first strand set to obtain a strand hidden vector corresponding to the hair strand; and
a hair style encoding unit configured to encode based on a plurality of strand hidden vectors corresponding to the first strand set to obtain the first hair style hidden vector.
12. The apparatus of claim 11, wherein the plurality of nodes on each hair strand in the first strand set comprises a root node, and the strand encoding unit comprises:
a first enhancement unit configured to obtain, for each hair strand in the first strand set, a position enhancement vector corresponding to the hair strand based on the coordinates of the root node of the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
an encoding subunit configured to encode, for each hair strand in the first strand set, the position enhancement vector corresponding to the hair strand and the strand data of the hair strand to obtain the strand hidden vector corresponding to the hair strand.
13. The apparatus of claim 11, wherein the plurality of nodes on each hair strand in the first strand set comprises a root node, and the hair style encoding unit comprises:
a second enhancement unit configured to obtain, for each hair strand in the first strand set, a position enhancement vector corresponding to the hair strand based on the coordinates of the root node of the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
an encoding subunit configured to encode a plurality of corresponding position enhancement vectors and the plurality of strand hidden vectors of the first strand set to obtain the first hair style hidden vector.
14. The apparatus according to any one of claims 10-13, wherein the three-dimensional hair style obtaining unit comprises:
a target hair style hidden vector obtaining unit configured to obtain a target hair style hidden vector based on the first hair style hidden vector; and
a decoding unit configured to decode the target hair style hidden vector to obtain the three-dimensional hair style.
15. The apparatus of claim 14, further comprising:
a second hair style data obtaining unit configured to obtain second hair style data corresponding to the first head model, wherein the second hair style data comprises strand data of each hair strand in a third strand set from the first head model, and the strand data comprises coordinates of each node in a plurality of nodes on the hair strand; and
a second encoding unit configured to encode based on a plurality of strand data corresponding to the third strand set to obtain a second hair style hidden vector corresponding to the second hair style data; and wherein the target hair style hidden vector obtaining unit comprises:
a target hair style hidden vector obtaining subunit configured to obtain the target hair style hidden vector based on the first hair style hidden vector and the second hair style hidden vector.
16. The apparatus of claim 15, wherein the target hair style hidden vector obtaining subunit comprises:
a weighting coefficient obtaining unit configured to obtain weighting coefficients respectively corresponding to the first hair style hidden vector and the second hair style hidden vector; and
an obtaining subunit configured to determine, as the target hair style hidden vector, the sum of the products of the first hair style hidden vector and the second hair style hidden vector with their respective weighting coefficients.
17. The apparatus of claim 14, wherein the decoding unit comprises:
a hair style decoding unit configured to obtain a strand hidden vector corresponding to each hair strand in the second strand set based on the target hair style hidden vector; and
a strand decoding unit configured to obtain, for each hair strand in the second strand set, the coordinates of each node in the plurality of nodes on the hair strand based on the strand hidden vector corresponding to the hair strand.
18. The apparatus of claim 17, wherein the plurality of nodes on each hair strand in the second strand set comprises a root node, and the strand decoding unit comprises:
a third enhancement unit configured to obtain coordinates of the root node on the hair strand, and to obtain a position enhancement vector corresponding to the hair strand based on the coordinates of the root node on the hair strand, wherein the dimension of the position enhancement vector is greater than the dimension of the coordinates of the root node; and
a decoding subunit configured to obtain the coordinates of each node in the plurality of nodes on the hair strand based on the position enhancement vector and the strand hidden vector corresponding to the hair strand.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-9.
CN202211047136.9A 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium Active CN115409922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211047136.9A CN115409922B (en) 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115409922A (en) 2022-11-29
CN115409922B (en) 2023-08-29

Family

ID=84161724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211047136.9A Active CN115409922B (en) 2022-08-30 2022-08-30 Three-dimensional hairstyle generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115409922B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210158591A1 (en) * 2018-05-22 2021-05-27 Magic Leap, Inc. Computer generated hair groom transfer tool
CN109002553A (en) * 2018-08-08 2018-12-14 北京旷视科技有限公司 Construction method, device, electronic equipment and the computer-readable medium of Hair model
US20200175757A1 (en) * 2018-12-04 2020-06-04 University Of Southern California 3d hair synthesis using volumetric variational autoencoders
US11030786B1 (en) * 2019-08-05 2021-06-08 Snap Inc. Hair styles system for rendering hair strands based on hair spline data
US20210124837A1 (en) * 2019-10-24 2021-04-29 At&T Intellectual Property I, L.P. Encoding and concealing information using deep learning
CN111161405A (en) * 2019-12-24 2020-05-15 东南大学 Three-dimensional reconstruction method for animal hair
CN111583384A (en) * 2020-04-13 2020-08-25 华南理工大学 Hair reconstruction method based on adaptive octree hair convolutional neural network
CN112862807A (en) * 2021-03-08 2021-05-28 网易(杭州)网络有限公司 Data processing method and device based on hair image
CN114723888A (en) * 2022-04-08 2022-07-08 北京百度网讯科技有限公司 Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114882173A (en) * 2022-04-26 2022-08-09 浙江大学 3D monocular hair modeling method and device based on implicit expression

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Keyu Wu et al., "NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations", arXiv, pages 1-11 *
Zhang Meng, "Research on Data-Driven 3D Hairstyle Modeling Technology", China Doctoral Dissertations Full-text Database (Information Science and Technology), no. 8, page 5 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030185A (en) * 2022-12-02 2023-04-28 北京百度网讯科技有限公司 Three-dimensional hairline generating method and model training method
CN115619981A (en) * 2022-12-20 2023-01-17 北京百度网讯科技有限公司 Three-dimensional hairstyle generation method and model training method
CN115661375A (en) * 2022-12-27 2023-01-31 北京百度网讯科技有限公司 Three-dimensional hairstyle generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115409922B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN115409922B (en) Three-dimensional hairstyle generation method, device, electronic equipment and storage medium
CN113807440B (en) Method, apparatus, and medium for processing multimodal data using neural networks
CN114612749B (en) Neural network model training method and device, electronic device and medium
CN113963110B (en) Texture map generation method and device, electronic equipment and storage medium
CN112532748B (en) Message pushing method, device, equipment, medium and computer program product
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN115661375B (en) Three-dimensional hair style generation method and device, electronic equipment and storage medium
CN115879469B (en) Text data processing method, model training method, device and medium
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN115600646B (en) Language model training method, device, medium and equipment
CN114119935B (en) Image processing method and device
CN114880580A (en) Information recommendation method and device, electronic equipment and medium
CN115631251A (en) Method, apparatus, electronic device, and medium for generating image based on text
CN114998963A (en) Image detection method and method for training image detection model
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN115619981B (en) Three-dimensional hairstyle generation method and model training method
CN115170887A (en) Target detection model training method, target detection method and device thereof
CN114201043A (en) Content interaction method, device, equipment and medium
CN114119154A (en) Virtual makeup method and device
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN113722594A (en) Recommendation model training method, recommendation device, electronic equipment and medium
CN114120412B (en) Image processing method and device
CN116228897B (en) Image processing method, image processing model and training method
CN115345981B (en) Image processing method, image processing device, electronic equipment and storage medium
CN116385641B (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant