CN116894917A - Method, device, equipment and medium for generating three-dimensional hairline model of virtual image


Info

Publication number
CN116894917A
Authority
CN
China
Prior art keywords
hair
node
target
hairline
current
Prior art date
Legal status
Granted
Application number
CN202310732941.3A
Other languages
Chinese (zh)
Other versions
CN116894917B (en)
Inventor
彭昊天
陈睿智
赵晨
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310732941.3A
Publication of CN116894917A
Application granted
Publication of CN116894917B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Graphics
  • Software Systems
  • Architecture
  • Computer Hardware Design
  • General Engineering & Computer Science
  • Geometry
  • Processing Or Creating Images

Abstract

The present disclosure provides a method, an apparatus, a device, and a medium for generating a three-dimensional hair model of an avatar, relating to the technical field of artificial intelligence, in particular to computer vision, augmented reality, virtual reality, deep learning, and the like, and applicable to scenes such as the metaverse and digital humans. The method for generating the three-dimensional hair model of the avatar comprises the following steps: for a current hair in an initial hair model, determining a target hair corresponding to the current hair in the initial hair model; determining a target node in the target hair; and performing grafting processing on the current hair based on the target node, so as to generate a target hair model. The present disclosure can enhance the effect of the three-dimensional hair model of the avatar.

Description

Method, device, equipment and medium for generating three-dimensional hairline model of virtual image
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, augmented reality, virtual reality, deep learning, and the like; it can be applied to scenes such as the metaverse and digital humans, and in particular provides a method, a device, equipment, and a medium for generating a three-dimensional hair model of an avatar.
Background
The three-dimensional avatar has wide application value in social, live-streaming, gaming, and other scenarios. Hair modeling in an avatar is complex: the number of hair strands is large, and the cost for a designer to construct them by hand is high. Hair reconstruction technology can reduce the cost of constructing a hair model by a designer and improve the production efficiency of the hair model.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and medium for generating a three-dimensional hair model of an avatar.
According to an aspect of the present disclosure, there is provided a method of generating a three-dimensional hair model of an avatar, including: for a current hair in an initial hair model, determining a target hair corresponding to the current hair in the initial hair model, wherein the length of the current hair is smaller than that of the target hair, and the distance between the current hair and the target hair meets a preset condition; determining a target node in the target hair, wherein the target node is a node in a target area, and the target area is the area of the target hair whose length exceeds that of the current hair; and performing grafting processing on the current hair based on the target node, so as to generate a target hair model.
According to another aspect of the present disclosure, there is provided an apparatus for generating a three-dimensional hair model of an avatar, including: a first determining module, configured to determine, for a current hair in an initial hair model, a target hair corresponding to the current hair in the initial hair model, wherein the length of the current hair is smaller than that of the target hair, and the distance between the current hair and the target hair meets a preset condition; a second determining module, configured to determine a target node in the target hair, wherein the target node is a node in a target area, and the target area is the area of the target hair whose length exceeds that of the current hair; and a generating module, configured to perform grafting processing on the current hair based on the target node, so as to generate a target hair model.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the above aspects.
According to the technical scheme, the effect of the three-dimensional hairline model of the virtual image can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic view of a three-dimensional hairline model of an avatar provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a hairline provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target node on a target hair provided in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an application scenario provided according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 7 is a comparative schematic diagram of an initial hair model and a target hair model provided in accordance with an embodiment of the present disclosure;
FIG. 8 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 9 is a schematic view of an electronic device for implementing a method of generating a three-dimensional hairline model for an avatar in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, there may be a case where the hairline length is uneven based on the three-dimensional hairline model generated by the hairline reconstruction technique, resulting in poor effect of the overall three-dimensional hairline model.
To enhance the effect of the three-dimensional hair model, the present disclosure provides the following embodiments.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. The embodiment provides a method for generating a three-dimensional hairline model of an avatar, which comprises the following steps:
101. For the current hair in an initial hair model, determine a target hair corresponding to the current hair in the initial hair model; the length of the current hair is smaller than that of the target hair, and the distance between the current hair and the target hair meets a preset condition.
102. Determine a target node in the target hair; the target node is a node in a target area, and the target area is the area of the target hair whose length exceeds that of the current hair.
103. Perform grafting processing on the current hair based on the target node, so as to generate a target hair model.
The initial hairline model can be generated by adopting a hairline reconstruction technology, and the hairlines in the initial hairline model can have the problem of uneven length.
For example, an initial hair model may be as shown in fig. 2, where n hair filaments may be included in the initial hair model, where n is the total number of hair filaments.
The current hair refers to the hair currently being processed in the initial hair model, and can be denoted as the i-th hair, i = 1, 2, ..., n.
The target hair is a hair in the initial hair model that corresponds to the current hair; to distinguish it from the current hair, it can be denoted as the j-th hair.
Here len_i < len_j, where len_i is the length of the current hair (the i-th hair) and len_j is the length of the target hair (the j-th hair).
In addition, the distance between the current hair strand and the target hair strand satisfies a preset condition, for example, the distance between the target hair strand and the current hair strand is minimum relative to other hair strands, and the distance is smaller than a preset threshold.
A hair consists of a plurality of nodes; for example, each hair includes m nodes. Each node can be represented by a node vector given by the node's three-dimensional coordinates, and the order of the nodes defines the connection relationship among them.
For example, as shown in fig. 3, each hair may consist of m = 5 nodes, the starting node being the root node, located at the scalp; the end node is the hair tip node.
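Under this representation, a strand is simply an ordered array of node coordinates, and its length is the sum of its segment lengths. A minimal sketch of this (the `(m, 3)` array layout and the function name are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def strand_length(nodes: np.ndarray) -> float:
    """Polyline length of a strand given as an (m, 3) array of node
    coordinates ordered from root node to tip node: the sum of the
    Euclidean lengths of the m - 1 segments between adjacent nodes."""
    return float(np.linalg.norm(np.diff(nodes, axis=0), axis=1).sum())

# A strand of 5 collinear nodes spaced one unit apart has length 4.
strand = np.array([[0.0, 0.0, float(z)] for z in range(5)])
```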
Since the length of the target hair is longer than that of the current hair, as shown in fig. 4, the region of the target hair having a length longer than that of the current hair is referred to as a target region, and nodes within the target region are referred to as target nodes.
Because the length of the current hair is smaller than that of the target hair, in order to avoid the problem of uneven length, the current hair can be subjected to grafting treatment so as to ensure that the length of the current hair after treatment is consistent with that of the target hair.
The grafting treatment refers to increasing the length of the current hairline so that the increased length of the current hairline is consistent with the length of the target hairline.
Specifically, the grafting treatment can be performed on the current hairline based on the node vector corresponding to the target node, so as to obtain the current hairline with increased length.
By traversing each hair of the initial hair model, shorter hair can be grafted to obtain hair of uniform length, and the model consisting of the hair of uniform length can be called a target hair model.
In this embodiment, since the current hair is a hair with a shorter length, the length of the hair with a shorter length can be increased by performing grafting treatment on the current hair, so that the consistency of the overall hair length in the target hair model is ensured, and the effect of the three-dimensional hair model of the avatar is improved.
In order to better understand the embodiments of the present disclosure, application scenarios to which the embodiments of the present disclosure may be applied are described.
Fig. 5 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. The scene comprises: a user terminal 501 and a server 502. The user terminal 501 may include: personal computers (Personal Computer, PC), cell phones, tablet computers, notebook computers, smart wearable devices, and the like. The server 502 may be a cloud server or a local server, and the user terminal 501 and the server 502 may communicate using a communication network, for example, a wired network and/or a wireless network.
A three-dimensional hairline reconstruction technique can be adopted in the server 502 to generate an initial hairline model; and updating the initial hairline model to obtain a target hairline model. The updating process may specifically refer to grafting a shorter hair therein. Both the initial hair model and the target hair model may be displayed by the user terminal 501.
In combination with the above application scenario, the present disclosure further provides the following embodiments.
Fig. 6 is a schematic diagram of a second embodiment of the present disclosure, which provides a method for generating a three-dimensional hair model of an avatar, the method comprising:
601. aiming at the current hairline in an initial hairline model, determining a target hairline corresponding to the current hairline in the initial hairline model; the length of the current hair is smaller than that of the target hair, and the distance between the current hair and the target hair meets the preset condition.
Wherein, for the current hair, candidate hairs with lengths greater than that of the current hair can be determined in the initial hair model; for each node of the current hair, the distance between that node and each line segment of each candidate hair is determined; and the candidate hair whose minimum distance is smallest and smaller than a preset threshold is taken as the target hair.
For example, if the current hair is represented by the ith hair, any hair (represented by the jth hair) other than the current hair in the initial hair model may be traversed, and if the length of the jth hair is greater than the length of the ith hair, the jth hair is considered as a candidate hair.
As shown in fig. 3, a hair consists of nodes, and two adjacent nodes form one line segment. Supposing that the 5 nodes in fig. 3 are denoted A (the root node), B, C, D, and E (the tip node), four line segments AB, BC, CD, and DE can be formed, and the sum of the lengths of the four line segments is taken as the length of the hair. The length of each line segment may be calculated using the three-dimensional position coordinates of its two nodes; for example, the length of line segment AB may be calculated using the three-dimensional position coordinates of A and B.
After the candidate hairs are determined, for each node of the current hair, the distance between that node and each line segment of each candidate hair can be calculated; denoting this distance by V_ij, the distance between any node of the current hair and any line segment of any candidate hair can be obtained. After the distances are obtained, the distances smaller than a preset threshold can be selected as candidate distances, the minimum is selected from the candidate distances, and the candidate hair corresponding to this minimum distance is taken as the target hair.
In this embodiment, since the hair closer to the current hair is generally used for grafting in practice, the accuracy of the target hair can be improved and the grafting effect can be improved by selecting the candidate hair with the minimum distance and less than the preset threshold as the target hair.
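The selection rule of step 601 can be sketched as follows. The function names and the assumption that each strand is an `(m, 3)` NumPy array of node coordinates are illustrative, not from the patent:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 3D point p to the line segment from a to b."""
    ab = b - a
    # Clamp the projection parameter to [0, 1] so the nearest point
    # stays on the segment; guard against zero-length segments.
    t = np.clip(np.dot(p - a, ab) / max(float(np.dot(ab, ab)), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def find_target_strand(current, candidates, threshold):
    """Index of the target strand for `current`: among candidate strands
    longer than `current`, the one whose minimum node-to-segment distance
    from the current strand's nodes is smallest and below `threshold`.
    Returns None if no candidate qualifies."""
    length = lambda s: np.linalg.norm(np.diff(s, axis=0), axis=1).sum()
    cur_len = length(current)
    best, best_dist = None, threshold
    for j, cand in enumerate(candidates):
        if length(cand) <= cur_len:
            continue  # only strands longer than the current one qualify
        d = min(point_segment_distance(p, cand[s], cand[s + 1])
                for p in current for s in range(len(cand) - 1))
        if d < best_dist:
            best, best_dist = j, d
    return best
```

A strand running parallel to the current one at a small offset is selected over a distant strand, and a strand shorter than the current one is never selected.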
602. Determining a target node in the target hair; the target node is a node in a target area, and the target area is a hair area with the length larger than that of the current hair on the target hair.
Specifically, the distance between the tip node of the current hair and each line segment of the target hair can be determined, and the line segment with the minimum distance is taken as a candidate line segment; a demarcation node is determined between the two nodes corresponding to the candidate line segment; and the nodes between the tip node of the target hair and the demarcation node are taken as the target nodes.
For example, the nodes of the target hair strand include A, B, C, D, E, and any segment of the target hair strand refers to any segment of any two adjacent nodes, namely any one of AB, BC, CD, DE.
Assuming that the tip node of the current hair is E0, the distance between E0 and any one of the four line segments (AB, BC, CD, DE) can be calculated, and assuming that the distance between E0 and CD is the smallest, the line segment CD is a candidate line segment; thereafter, a demarcation node may be determined among the two nodes (C, D) corresponding to the candidate line segment, and assuming that the node D is the demarcation node, a node between the node D and the node of the tip of the target hair (i.e., the node E) is taken as a target node, i.e., the node D and the node E are taken as target nodes.
In this embodiment, since the target node is a node in the target area, and the target area is an area with a length greater than that of the current hairline, that is, an area between the boundary node and the tip node of the target hairline, the target node can be determined efficiently and accurately by using the node between the tip node and the boundary node of the target hairline as the target node.
To determine the demarcation node between the two nodes corresponding to the candidate line segment, the distance between the tip node of the current hair and each of the two nodes of the candidate line segment is determined, and the node with the smallest distance is taken as the demarcation node.
For example, if the candidate line segment is CD, the distance V_{E0-C} between the tip node E0 of the current hair and node C, and the distance V_{E0-D} between E0 and node D, can be calculated; supposing V_{E0-D} < V_{E0-C}, node D is taken as the demarcation node.
In this embodiment, the demarcation node of the target hair is generally closest to the tip node of the current hair, so that the accuracy of the demarcation node can be improved by determining the demarcation node based on the distance between the nodes.
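The demarcation rule of step 602 can be sketched as below; the function names and the `(m, 3)` array layout are assumptions for illustration:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 3D point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(float(np.dot(ab, ab)), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def demarcation_index(target, tip):
    """0-based index of the demarcation node on `target` (an (m, 3) array):
    first pick the candidate segment closest to the current strand's tip
    node, then keep whichever of its two endpoints is nearer that tip."""
    dists = [point_segment_distance(tip, target[s], target[s + 1])
             for s in range(len(target) - 1)]
    s = int(np.argmin(dists))
    near_start = np.linalg.norm(tip - target[s]) <= np.linalg.norm(tip - target[s + 1])
    return s if near_start else s + 1
```

The target nodes are then `target[demarcation_index(target, tip):]`, i.e. the demarcation node through the target strand's tip node.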
603. And performing downsampling processing on the original nodes of the current hairline to generate first nodes, wherein the number of the first nodes is a first number, and the first number is the number of nodes between the root node of the target hairline and the demarcation node.
Wherein the number of nodes per hair strand is the same, e.g. each hair strand comprises m nodes, the number of original nodes of the current hair strand is m.
Assuming that the node numbers are arranged in ascending order according to the direction from the root node to the tip node, i.e., the number of the root node is 1, the number of the tip node is m, and the number of the demarcation node is k, the first number is k.
Thus, the m original nodes of the current hair may be downsampled to k first nodes.
The downsampling described above may be performed using a bilinear interpolation algorithm.
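One way to realize this downsampling is a linear resampling of the node coordinates over the node index (the patent mentions bilinear interpolation; this sketch interpolates linearly along the strand, the one-dimensional analogue). The function name and array layout are illustrative assumptions:

```python
import numpy as np

def downsample_strand(nodes, k):
    """Resample an (m, 3) strand to k nodes by linearly interpolating the
    node coordinates at k evenly spaced positions along the node index."""
    m = len(nodes)
    t = np.linspace(0.0, m - 1, k)       # new sample positions in [0, m - 1]
    i0 = np.floor(t).astype(int)         # lower neighbour index
    i1 = np.minimum(i0 + 1, m - 1)       # upper neighbour index (clamped)
    w = (t - i0)[:, None]                # interpolation weight per sample
    return nodes[i0] * (1.0 - w) + nodes[i1] * w
```

The root and tip of the resampled strand coincide with the originals, so only the interior nodes move.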
604. And generating second nodes based on vector differences between node vectors corresponding to the target nodes, wherein the number of the second nodes is a second number, and the sum of the second number and the first number is the node number of the original nodes.
Where k first nodes can be obtained by downsampling, since the number of nodes per hair needs to be m, the (m-k) nodes can be generated based on the target node.
That is, the (m-k) nodes to be generated may be referred to as second nodes, which may be generated based on vector differences between node vectors corresponding to the target nodes.
For example, the sequence numbers of the target nodes are k+1, k+2, ..., m. The vector difference refers to the difference between the node vectors of two adjacent target nodes, and the differences can be expressed as vec_{k+1}, vec_{k+2}, ..., vec_m.
Each second node can be computed as the sum of the previous node and the corresponding vector difference, with the last first node serving as the initial value.
For example, the second nodes may be represented as Node_{k+1}, Node_{k+2}, ..., Node_m, where Node_{k+1} = Node_k + vec_{k+1}, Node_{k+2} = Node_{k+1} + vec_{k+2}, ..., Node_m = Node_{m-1} + vec_m.
605. And carrying out combination treatment on the first node and the second node to obtain the current hairline after grafting treatment.
For example, the first nodes are represented as Node_1, Node_2, ..., Node_k; these k first nodes are obtained by downsampling the original nodes of the current hair.
The second nodes are represented as Node_{k+1}, Node_{k+2}, ..., Node_m; these (m-k) second nodes are obtained from the vector differences corresponding to the target nodes.
The first nodes and the second nodes are combined to obtain the current hair after grafting, i.e., the grafted current hair consists of the nodes Node_1, Node_2, ..., Node_k, Node_{k+1}, Node_{k+2}, ..., Node_m.
606. And generating the target hairline model based on the current hairline after grafting treatment.
The current hair may be any hair in the initial hair model. By traversing each hair in the initial hair model and repeatedly executing 601-605, the grafted current hairs can be obtained and used as hairs in the target hair model.
In addition, for a current hair in the initial hair model, there may be no target hair with a length greater than that of the current hair; in that case, the current hair is used directly as a hair in the target hair model. These directly used hairs, together with the grafted hairs, form the target hair model.
As shown in fig. 7, for the initial hair model 701, the hair length is uneven, and after the shorter hair is grafted, a target hair model 702 can be generated, and the hair length in the target hair model 702 is basically consistent.
In this embodiment, the original node of the current hairline is downsampled based on the demarcation node, so that a first node can be generated, a second node can be generated through the vector difference corresponding to the target node, and then the current hairline after grafting treatment can be obtained by combining the first node and the second node, so that the grafting treatment can be accurately and efficiently performed on the current hairline, and the target hairline model effect is improved.
Fig. 8 is a schematic diagram according to a third embodiment of the present disclosure. The present embodiment provides an apparatus for generating a three-dimensional hairline model of an avatar, as shown in fig. 8, the apparatus 800 including: a first determination module 801, a second determination module 802, and a generation module 803.
The first determining module 801 is configured to determine, for a current hair in an initial hair model, a target hair corresponding to the current hair in the initial hair model; the length of the current hair is smaller than that of the target hair, and the distance between the current hair and the target hair meets the preset condition; a second determination module 802 is used to determine a target node in the target hair; the target node is a node in a target area, and the target area is a hair area with the length of the target hair being longer than that of the current hair; the generating module 803 is configured to perform grafting processing on the current hair strand based on the target node, so as to generate a target hair strand model.
In this embodiment, since the current hair is a hair with a shorter length, the length of the hair with a shorter length can be increased by performing grafting treatment on the current hair, so that the consistency of the overall hair length in the target hair model is ensured, and the effect of the three-dimensional hair model of the avatar is improved.
In some embodiments, the first determining module 801 is further configured to: for the current hair, determining candidate hair wires with lengths greater than the current hair in the initial hair model; determining a distance between any node and any line segment of the candidate hair for any node in the current hair; and taking the candidate hairline with the minimum distance and smaller than a preset threshold value as the target hairline.
In this embodiment, since the hair closer to the current hair is generally used for grafting in practice, the accuracy of the target hair can be improved and the grafting effect can be improved by selecting the candidate hair with the minimum distance and less than the preset threshold as the target hair.
In some embodiments, the second determining module 802 is further configured to: determining the distance between the hair tip node of the current hair and any line segment of the target hair, and taking the line segment with the minimum distance as a candidate line segment; determining demarcation nodes in the two nodes corresponding to the candidate line segments; and taking a node between the tip node and the demarcation node of the target hairline as the target node.
In this embodiment, since the target node is a node in the target area, and the target area is an area with a length greater than that of the current hairline, that is, an area between the boundary node and the tip node of the target hairline, the target node can be determined efficiently and accurately by using the node between the tip node and the boundary node of the target hairline as the target node.
In some embodiments, the second determining module 802 is further configured to: determining the distance between the tip node of the current hair and each of the two nodes of the candidate line segment; and taking the node with the smallest distance as a demarcation node.
In this embodiment, the demarcation node of the target hair is generally closest to the tip node of the current hair, so that the accuracy of the demarcation node can be improved by determining the demarcation node based on the distance between the nodes.
In some embodiments, the generating module 803 is further configured to: performing downsampling processing on the original nodes of the current hairline to generate first nodes, wherein the number of the first nodes is a first number, and the first number is the number of nodes between the root node of the target hairline and the demarcation node; generating second nodes based on vector differences between node vectors corresponding to the target nodes, wherein the number of the second nodes is a second number, and the sum of the second number and the first number is the number of nodes of the original nodes; combining the first node and the second node to obtain a current hairline after grafting treatment; and generating the target hairline model based on the current hairline after grafting treatment.
In this embodiment, the original node of the current hairline is downsampled based on the demarcation node, so that a first node can be generated, a second node can be generated through the vector difference corresponding to the target node, and then the current hairline after grafting treatment can be obtained by combining the first node and the second node, so that the grafting treatment can be accurately and efficiently performed on the current hairline, and the target hairline model effect is improved.
It is to be understood that in the embodiments of the disclosure, the same or similar content in different embodiments may be referred to each other.
It can be understood that "first", "second", etc. in the embodiments of the present disclosure are only used for distinguishing, and do not indicate the importance level, the time sequence, etc.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic device 900 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 902 or a computer program loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above, for example, the method of generating a three-dimensional hairline model of an avatar. For example, in some embodiments, the method of generating a three-dimensional hairline model of an avatar may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the above-described method of generating a three-dimensional hairline model of an avatar may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the method of generating the three-dimensional hairline model of the avatar in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and Virtual Private Server ("VPS") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (13)

1. A method of generating a three-dimensional hairline model of an avatar, comprising:
for a current hairline in an initial hairline model, determining a target hairline corresponding to the current hairline in the initial hairline model; the length of the current hairline is smaller than that of the target hairline, and the distance between the current hairline and the target hairline meets a preset condition;
determining a target node in the target hairline; the target node is a node in a target area, and the target area is the region by which the target hairline is longer than the current hairline;
and performing grafting processing on the current hairline based on the target node, so as to generate a target hairline model.
2. The method of claim 1, wherein the determining, for a current hairline in an initial hairline model, a target hairline corresponding to the current hairline in the initial hairline model comprises:
determining, for the current hairline, candidate hairlines in the initial hairline model whose lengths are greater than that of the current hairline;
determining, for any node in the current hairline, the distance between the node and any line segment of each candidate hairline;
and taking the candidate hairline whose minimum such distance is smaller than a preset threshold as the target hairline.
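As an illustrative sketch only (not part of the claims), the selection in claims 1-2 can be expressed as a nearest-strand search, assuming each hairline is an ordered list of 3D node positions; the function names and the point-to-segment distance formulation are hypothetical choices, not the patented implementation:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (3D numpy arrays)."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    # clamp the projection parameter so the closest point stays on the segment
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def find_target_hairline(current, candidates, threshold):
    """Among candidate hairlines (assumed pre-filtered to be longer than
    the current one), return the candidate whose minimum node-to-segment
    distance to the current hairline is smallest and below the threshold;
    return None if no candidate qualifies."""
    best, best_d = None, threshold
    for cand in candidates:
        d = min(
            point_segment_distance(p, cand[i], cand[i + 1])
            for p in current
            for i in range(len(cand) - 1)
        )
        if d < best_d:
            best, best_d = cand, d
    return best
```

In this reading, the "preset condition" of claim 1 is the threshold test, and the per-node, per-segment distances of claim 2 are reduced by a minimum before comparison.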
3. The method of claim 1, wherein the determining a target node in the target hairline comprises:
determining the distance between the tip node of the current hairline and each line segment of the target hairline, and taking the line segment with the minimum distance as a candidate line segment;
determining a demarcation node from the two nodes corresponding to the candidate line segment;
and taking a node between the tip node of the target hairline and the demarcation node as the target node.
4. The method according to claim 3, wherein the determining a demarcation node from the two nodes corresponding to the candidate line segment comprises:
determining the distance between the tip node of the current hairline and each of the two nodes of the candidate line segment;
and taking the node with the smaller distance as the demarcation node.
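The demarcation-node selection of claims 3-4 can be sketched as follows (illustrative only; `demarcation_index` and the node-list representation are assumptions): locate the target-hairline segment nearest to the current hairline's tip node, then keep whichever of that segment's two endpoints lies closer to the tip.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (3D numpy arrays)."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def demarcation_index(tip, target):
    """Index of the demarcation node in target (ordered node positions,
    root first): first find the segment nearest to the current hairline's
    tip node (claim 3), then take the closer of its two endpoints (claim 4)."""
    i = min(range(len(target) - 1),
            key=lambda k: point_segment_distance(tip, target[k], target[k + 1]))
    if np.linalg.norm(tip - target[i]) <= np.linalg.norm(tip - target[i + 1]):
        return i
    return i + 1
```

The target nodes of claim 3 would then be the nodes of `target` between this index and the target hairline's tip, i.e. the portion of the longer strand that extends past the current one.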
5. The method of claim 3, wherein the grafting the current hairline based on the target node to generate a target hairline model comprises:
performing downsampling processing on the original nodes of the current hairline to generate first nodes, wherein the number of the first nodes is a first number, and the first number is the number of nodes between the root node of the target hairline and the demarcation node;
generating second nodes based on vector differences between node vectors corresponding to the target nodes, wherein the number of the second nodes is a second number, and the sum of the second number and the first number equals the number of the original nodes;
combining the first nodes and the second nodes to obtain a grafted current hairline;
and generating the target hairline model based on the grafted current hairline.
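A minimal sketch of the grafting step of claim 5, under the same node-list assumption (the even-spacing downsampling, the `graft` signature, and the requirement that the target has enough nodes past the demarcation node are illustrative assumptions, not the patented implementation): the current strand is downsampled to the first number of nodes, then extended by replaying the target strand's node-to-node vector differences beyond the demarcation node, so the grafted strand keeps the original node count.

```python
import numpy as np

def graft(current, target, demarcation_idx):
    """Graft the current hairline onto the growth of the target hairline.

    current, target: (N, 3) arrays of node positions, root first.
    demarcation_idx: index of the demarcation node in `target`.
    Returns a grafted strand with as many nodes as `current`."""
    current = np.asarray(current, dtype=float)
    target = np.asarray(target, dtype=float)
    n_orig = len(current)
    first_num = demarcation_idx + 1      # nodes from target root to demarcation node
    second_num = n_orig - first_num      # nodes to grow past the current tip

    # downsample the current strand to first_num (roughly) evenly spaced nodes
    idx = np.linspace(0, n_orig - 1, first_num).round().astype(int)
    first_nodes = current[idx]

    # replay the target's vector differences after the demarcation node
    grown = [first_nodes[-1]]
    for k in range(second_num):
        delta = target[demarcation_idx + k + 1] - target[demarcation_idx + k]
        grown.append(grown[-1] + delta)
    return np.vstack([first_nodes, grown[1:]])
```

The combined strand has `first_num + second_num` nodes, matching the original count as claim 5 requires, while its tail follows the direction of the longer target strand.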
6. An apparatus for generating a three-dimensional hairline model of an avatar, comprising:
the first determining module is used for determining, for a current hairline in an initial hairline model, a target hairline corresponding to the current hairline in the initial hairline model; the length of the current hairline is smaller than that of the target hairline, and the distance between the current hairline and the target hairline meets a preset condition;
the second determining module is used for determining a target node in the target hairline; the target node is a node in a target area, and the target area is the region by which the target hairline is longer than the current hairline;
and the generating module is used for performing grafting processing on the current hairline based on the target node, so as to generate a target hairline model.
7. The apparatus of claim 6, wherein the first determining module is further configured to:
determine, for the current hairline, candidate hairlines in the initial hairline model whose lengths are greater than that of the current hairline;
determine, for any node in the current hairline, the distance between the node and any line segment of each candidate hairline;
and take the candidate hairline whose minimum such distance is smaller than a preset threshold as the target hairline.
8. The apparatus of claim 6, wherein the second determining module is further configured to:
determine the distance between the tip node of the current hairline and each line segment of the target hairline, and take the line segment with the minimum distance as a candidate line segment;
determine a demarcation node from the two nodes corresponding to the candidate line segment;
and take a node between the tip node of the target hairline and the demarcation node as the target node.
9. The apparatus of claim 8, wherein the second determining module is further configured to:
determine the distance between the tip node of the current hairline and each of the two nodes of the candidate line segment;
and take the node with the smaller distance as the demarcation node.
10. The apparatus of claim 8, wherein the generating module is further configured to:
perform downsampling processing on the original nodes of the current hairline to generate first nodes, wherein the number of the first nodes is a first number, and the first number is the number of nodes between the root node of the target hairline and the demarcation node;
generate second nodes based on vector differences between node vectors corresponding to the target nodes, wherein the number of the second nodes is a second number, and the sum of the second number and the first number equals the number of the original nodes;
combine the first nodes and the second nodes to obtain a grafted current hairline;
and generate the target hairline model based on the grafted current hairline.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-5.
CN202310732941.3A 2023-06-20 2023-06-20 Method, device, equipment and medium for generating three-dimensional hairline model of virtual image Active CN116894917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310732941.3A CN116894917B (en) 2023-06-20 2023-06-20 Method, device, equipment and medium for generating three-dimensional hairline model of virtual image

Publications (2)

Publication Number Publication Date
CN116894917A true CN116894917A (en) 2023-10-17
CN116894917B CN116894917B (en) 2024-10-18

Family

ID=88309593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310732941.3A Active CN116894917B (en) 2023-06-20 2023-06-20 Method, device, equipment and medium for generating three-dimensional hairline model of virtual image

Country Status (1)

Country Link
CN (1) CN116894917B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339475A (en) * 2011-10-26 2012-02-01 浙江大学 Rapid hair modeling method based on surface grids
CN103606186A (en) * 2013-02-02 2014-02-26 浙江大学 Virtual hair style modeling method of images and videos
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
US11030786B1 (en) * 2019-08-05 2021-06-08 Snap Inc. Hair styles system for rendering hair strands based on hair spline data
CN113850904A (en) * 2021-09-27 2021-12-28 北京百度网讯科技有限公司 Method and device for determining hair model, electronic equipment and readable storage medium
US20220254080A1 (en) * 2021-02-05 2022-08-11 Algoface, Inc. Virtual hair extension system
CN116109721A (en) * 2022-12-22 2023-05-12 北京字跳网络技术有限公司 Hairline generating method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN116894917B (en) 2024-10-18

Similar Documents

Publication Publication Date Title
CN114187633B (en) Image processing method and device, and training method and device for image generation model
CN112562069B (en) Method, device, equipment and storage medium for constructing three-dimensional model
CN113627536B (en) Model training, video classification method, device, equipment and storage medium
CN112560996A (en) User portrait recognition model training method, device, readable storage medium and product
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN113205495B (en) Image quality evaluation and model training method, device, equipment and storage medium
CN113641829B (en) Training and knowledge graph completion method and device for graph neural network
CN113052962A (en) Model training method, information output method, device, equipment and storage medium
CN116524165B (en) Migration method, migration device, migration equipment and migration storage medium for three-dimensional expression model
CN114612600A (en) Virtual image generation method and device, electronic equipment and storage medium
CN113850904A (en) Method and device for determining hair model, electronic equipment and readable storage medium
CN115631286A (en) Image rendering method, device, equipment and storage medium
CN114266937A (en) Model training method, image processing method, device, equipment and storage medium
CN112862934A (en) Method, apparatus, device, medium, and product for processing animation
CN116894917B (en) Method, device, equipment and medium for generating three-dimensional hairline model of virtual image
CN114860411B (en) Multi-task learning method, device, electronic equipment and storage medium
CN114078184B (en) Data processing method, device, electronic equipment and medium
CN113408304B (en) Text translation method and device, electronic equipment and storage medium
CN113361575B (en) Model training method and device and electronic equipment
CN114638919A (en) Virtual image generation method, electronic device, program product and user terminal
CN113947146A (en) Sample data generation method, model training method, image detection method and device
CN113327311A (en) Virtual character based display method, device, equipment and storage medium
CN113313049A (en) Method, device, equipment, storage medium and computer program product for determining hyper-parameters
CN114037814B (en) Data processing method, device, electronic equipment and medium
CN114071111B (en) Video playing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant