CN116503519A - Hair treatment method, device, apparatus and readable storage medium

Info

Publication number: CN116503519A
Application number: CN202211153608.9A
Authority: CN (China)
Prior art keywords: hair, models, model, vertex, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 李江城 (Li Jiangcheng)
Current assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by: Tencent Technology Shenzhen Co Ltd
Priority to: CN202211153608.9A
Publication of: CN116503519A

Classifications

All classifications fall under G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:

    • G06T13/20: 3D [Three Dimensional] animation (under G06T13/00, Animation)
    • G06T13/40: 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T15/04: Texture mapping (under G06T15/00, 3D image rendering)
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (under G06T19/00, Manipulating 3D models or images for computer graphics)
    • G06T2200/04: Indexing scheme for image data processing or generation involving 3D image data
    • G06T2219/2012: Colour editing, changing, or manipulating; use of colour codes (under G06T2219/20, Indexing scheme for editing of 3D models)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a hair processing method, device, equipment and readable storage medium, belonging to the field of computer technology. The method comprises the following steps: acquiring a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models on the hair growth area; for any vertex on the hair growth area, obtaining color information of the vertex in response to receiving a drawing operation for that vertex; dividing the hair growth area into at least one sub-area based on the color information of each vertex; for any sub-area, dividing the hair data corresponding to each first hair model on the sub-area into one group to obtain a hair data group corresponding to the sub-area; and adjusting that hair data group. Dividing the hair growth area in this way is faster than grouping the hairs by manually selecting them, so the hairs of a virtual object can be grouped quickly, which increases the grouping speed and improves processing efficiency.

Description

Hair treatment method, device, apparatus and readable storage medium
Technical Field
Embodiments of the present application relate to the field of computer technologies, and in particular, to a hair processing method, device, apparatus, and readable storage medium.
Background
With the continuous development of computer technology, virtual objects are increasingly applied in scenes such as games, live broadcast and animation. Since animal-type virtual objects and some item-type virtual objects (e.g., a virtual brush) may carry a large number of hairs, the hairs need to be processed to improve the fidelity of the virtual objects.
In the related art, the hairs of a virtual object may be displayed on a screen of an electronic device. In response to an operation of manually selecting at least one target hair from these hairs, a hair data set is created that includes the hair data of each target hair, and the hairs corresponding to the hair data set are then adjusted through that set.
In general, the number of hairs of a virtual object is large; grouping them by manually selecting hairs therefore takes a long time, resulting in low hair processing efficiency.
Disclosure of Invention
The application provides a hair processing method, device, equipment and readable storage medium, which can improve hair processing efficiency.
In one aspect, a hair processing method is provided, the method comprising:
obtaining a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models on the hair growth area;
for any vertex on the hair growth area, obtaining color information of the vertex in response to receiving a drawing operation for that vertex;
dividing the hair growth area into at least one sub-area based on the color information of each vertex on the hair growth area, the color information of each vertex in any one sub-area being the same;
for any sub-area, dividing the hair data corresponding to each first hair model on the sub-area into one group to obtain a hair data group corresponding to the sub-area;
and adjusting the hair data group corresponding to the sub-area.
In another aspect, a hair processing device is provided, the device comprising:
an acquisition module, configured to acquire a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models on the hair growth area;
a determining module, configured to, for any vertex on the hair growth area, obtain color information of the vertex in response to receiving a drawing operation for that vertex;
a dividing module, configured to divide the hair growth area into at least one sub-area based on the color information of each vertex on the hair growth area, the color information of each vertex in any one sub-area being the same;
a grouping module, configured to, for any sub-area, divide the hair data corresponding to each first hair model on the sub-area into one group to obtain a hair data group corresponding to the sub-area;
and an adjusting module, configured to adjust the hair data group corresponding to the sub-area.
In one possible implementation, the dividing module is configured to, for any vertex on the hair growth area, obtain color information of the vertex in response to receiving a drawing operation for that vertex, and to divide the hair growth area into at least one sub-area based on the color information of each vertex on the hair growth area, the color information of each vertex in any one sub-area being the same.
In one possible implementation, the color information of each vertex in any one sub-area is the same;
the grouping module is configured to: for any first hair model, determine, from the vertices on the hair growth area, a target vertex matching the first hair model; transfer the color information of the target vertex to the first hair model to obtain the color information of the first hair model; and divide the hair data corresponding to first hair models with the same color information into one group to obtain the hair data group corresponding to the sub-area.
In one possible implementation, the grouping module is configured to extract a reference point of the first hair model from the hair data corresponding to the first hair model, and determine, from the vertices on the hair growth area, a target vertex matching the reference point of the first hair model.
In one possible implementation, the grouping module is configured to transfer the color information of the target vertex to the reference point of the first hair model to obtain the color information of that reference point, and transfer the color information of the reference point to each point in the hair data corresponding to the first hair model to obtain the color information of the first hair model.
In one possible implementation, the adjusting module is configured to obtain a hair deletion parameter corresponding to the sub-area, where the hair deletion parameter characterizes deleting a second number of pieces of hair data from a first number of pieces of hair data, the first number being greater than the second number, and to perform deletion processing on the hair data in the hair data group corresponding to the sub-area based on that hair deletion parameter.
In one possible implementation, the hair data group corresponding to the sub-area after deletion processing includes hair data of second hair models;
the apparatus further comprises:
a determining module, configured to determine, for any second hair model, texture information of a target vertex matching the second hair model;
and a transfer module, configured to transfer the texture information of the target vertex to the second hair model to obtain the texture information of the second hair model.
In one possible implementation, the transfer module is configured to transfer the texture information of the target vertex to the reference point of the second hair model to obtain the texture information of that reference point, and transfer the texture information of the reference point to each point in the hair data corresponding to the second hair model to obtain the texture information of the second hair model.
In another aspect, an electronic device is provided, the electronic device comprising a processor and a memory, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor to cause the electronic device to implement any of the hair processing methods described above.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to cause an electronic device to implement any of the hair processing methods described above.
In another aspect, a computer program or computer program product is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to cause an electronic device to implement any of the hair processing methods described above.
The technical scheme provided by the application at least brings the following beneficial effects:
according to the technical scheme, drawing operations for all vertexes on the hair growth area are received, color information of all vertexes is obtained, the hair growth area is divided into subareas based on the color information of all vertexes, hair data corresponding to a first hair model on the subareas are divided into a group, a hair data set corresponding to the subareas is obtained, and the hair data set is adjusted to achieve adjustment of hair corresponding to the hair data set. Because the vertexes on the hair growth area do not have the shielding phenomenon, and the hairs on the hair growth area are easy to have the shielding phenomenon, the speed for dividing the hair growth area by drawing the vertexes on the hair growth area is higher than the speed for grouping the hairs by manually selecting the hairs, so that the hairs of the virtual objects can be quickly grouped, the grouping speed is increased, and the processing efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the following briefly introduces the drawings needed in the description of the embodiments. Apparently, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic view of an environment in which a hair treatment method according to an embodiment of the present application is implemented;
FIG. 2 is a flow chart of a hair treatment method provided in an embodiment of the present application;
FIG. 3 is a schematic illustration of a division of a hair-growth area into individual sub-areas provided in an embodiment of the present application;
FIG. 4 is a schematic view of a color distribution of a first hair model according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating color correspondence between vertices and reference points according to an embodiment of the present application;
FIG. 6 is a schematic representation of color correspondence between a reference point and a first hair model provided in an embodiment of the present application;
FIG. 7 is a flow chart of a color transfer provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a method for deleting hair data according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating the transfer of texture information according to an embodiment of the present application;
FIG. 10 is a flow chart of a hair treatment method provided in an embodiment of the present application;
FIG. 11 is an effect graph of a hair treatment result provided in an embodiment of the present application;
fig. 12 is a schematic structural view of a hair treatment device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an implementation environment of a hair treatment method according to an embodiment of the present application, and as shown in fig. 1, the implementation environment includes a terminal device 101 and a server 102. The hair processing method in the embodiment of the present application may be performed by the terminal device 101, by the server 102, or by both the terminal device 101 and the server 102.
The terminal device 101 may be a smart phone, a game console, a desktop computer, a tablet computer, a laptop computer, a smart television, a smart car device, a smart voice interaction device, a smart home appliance, etc. The server 102 may be one server, or a server cluster formed by a plurality of servers, or any one of a cloud computing platform and a virtualization center, which is not limited in this embodiment of the present application. The server 102 may be in communication connection with the terminal device 101 via a wired network or a wireless network. The server 102 may have functions of data processing, data storage, data transceiving, and the like, which are not limited in the embodiments of the present application. The number of terminal devices 101 and servers 102 is not limited, and may be one or more.
The hair processing method provided by the embodiments of the present application can be implemented based on cloud technology. Cloud technology refers to a hosting technology that unifies resources such as hardware, software and networks in a wide area network or a local area network to realize computing, storage, processing and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems require a large amount of computing and storage resources, for example video websites, picture websites and other portal websites. With the rapid development and application of the internet industry, every item may have its own identification mark in the future, which will need to be transmitted to a background system for logical processing; data at different levels will be processed separately, and all kinds of industry data need strong system backing, which can only be realized through cloud computing.
In the field of computer technology, virtual objects are increasingly being used in scenes such as games, live broadcast, animation, etc. Since some virtual objects can carry a large number of hairs, it is often necessary to treat the hairs.
In the related art, the hairs of a virtual object may be displayed on a screen of an electronic device. In response to an operation of manually selecting at least one target hair from the hairs, a hair data set is created that includes the hair data of each target hair, and the hair data set is adjusted to adjust the hairs in it. Since the number of hairs of the virtual object is large, grouping the hairs of the virtual object in this manner takes a long time, resulting in low hair processing efficiency.
An embodiment of the present application provides a hair processing method, which can be applied in the implementation environment described above; it reduces the time required to group the hairs of a virtual object and improves hair processing efficiency. Fig. 2 shows a flowchart of the hair processing method provided in the embodiment of the present application. For convenience of description, the terminal device 101 or the server 102 that performs the hair processing method is referred to as an electronic device, and the method may be performed by the electronic device. As shown in fig. 2, the method includes the following steps.
Step 201, a three-dimensional model of a virtual object is acquired, the three-dimensional model comprising a hair growth area and a plurality of first hair models over the hair growth area.
In the embodiment of the present application, a three-dimensional modeling technique may be used to build the three-dimensional model of the virtual object; the modeling manner is not limited here. The three-dimensional model includes a hair growth area and a plurality of first hair models. The hair growth area is a local area of the surface of the three-dimensional model on which a plurality of first hair models exist, and a first hair model is a model obtained by modeling a hair of the virtual object. For example, there are a plurality of hair models on the scalp region of the three-dimensional model surface; thus, the hair growth region may be the scalp region of the surface, and a first hair model may be a hair model.
It should be noted that the three-dimensional model may include at least one hair growth area, and a plurality of first hair models exist on each hair growth area. Any one of the hair-growth area and the respective first hair model on that hair-growth area may be treated according to the steps associated with fig. 2.
In the embodiment of the application, any one of the hair-growth areas may be displayed in the electronic device, the dividing operation of the hair-growth area may be acquired, and the hair-growth area may be divided into at least one sub-area based on the dividing operation. In one possible implementation, the step of dividing the hair-growth area into at least one sub-area comprises steps 202 to 203.
Step 202, for any vertex on the hair-growth area, color information of any vertex is obtained in response to receiving a drawing operation for any vertex.
In the embodiment of the present application, the electronic device is provided with three-dimensional model processing software, and the hair growth area is displayed through this software. The embodiment of the present application does not limit the three-dimensional model processing software; Houdini software is taken as an example. Houdini is three-dimensional (3D) model processing software based on procedural programming techniques; it can display a three-dimensional model and supports manual processing of the three-dimensional model.
The hair growth area includes a plurality of vertices. When the hair growth area is displayed by the three-dimensional model processing software, each vertex on it is displayed. Alternatively, a drawing operation for a vertex can be achieved by manually selecting a color and clicking any vertex on the hair growth area, i.e., assigning the selected color to that vertex; or by manually clicking the vertex and setting its color. The electronic device may receive and respond to the drawing operation for any vertex, obtaining color information of that vertex, which characterizes the manually selected color.
Manually selecting a color corresponds to manually selecting values for the three channels Red (R), Green (G) and Blue (B); in this case the color information of the vertex is its RGB value. Setting the color of a vertex corresponds to drawing the color of the vertex in UV texture coordinates. The UV texture coordinate system is two-dimensional, comprising a U direction (horizontal) and a V direction (vertical). The UV coordinates of a point represent a UV texture value that can describe the color of the vertex; in this case the color information of the vertex is its UV texture value.
In general, when the values of the three channels R, G and B are selected manually, the drawing operation for a vertex may draw any one of the three colors red, green and blue on the vertex. These three colors are used because red can be set to the value (1, 0, 0): multiplying (1, 0, 0) by 255 yields an array that characterizes red. Based on the same principle, green may be set to (0, 1, 0) and blue to (0, 0, 1). With this setting, the sub-area of a given color can be extracted by reading the value of a single channel, which facilitates extracting the sub-region where each color is located and improves the efficiency of color transfer, texture transfer and the like.
It can be understood that the drawing operation for a vertex may also draw a color other than red, green and blue; the sub-region where any such color is located can later be extracted from the values of the three channels. When the UV texture value is set manually, the sub-area where any color is located can later be extracted through the UV texture value.
Step 203 of dividing the hair-growth area into at least one sub-area based on the color information of the respective vertices on the hair-growth area, the color information of the respective vertices in any of the sub-areas being the same.
After a color has been drawn for each vertex in the hair growth area, color information of each vertex is obtained. For any two vertices, if their color information is the same, the two vertices are classified into the same first group; if their color information is different, they are classified into two different first groups. In this way, a plurality of first groups is obtained, each containing vertices with the same color information, and the partial area of the hair growth area containing all the vertices of one first group is one sub-area. That is, one first group corresponds to one sub-region, and the number of sub-regions equals the number of first groups.
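As an illustration only, this color-based grouping can be sketched in VEX (Houdini's built-in language) with a point wrangle run over the vertices of the hair growth area; the group names sub_red, sub_green and sub_blue are assumptions for the example, not names used by this application:
// Point wrangle over the hair-growth-area vertices.
// Assumes each vertex was painted pure red, green or blue, i.e. Cd is (1,0,0), (0,1,0) or (0,0,1).
if (@Cd.r == 1) setpointgroup(0, "sub_red",   @ptnum, 1, "set");
if (@Cd.g == 1) setpointgroup(0, "sub_green", @ptnum, 1, "set");
if (@Cd.b == 1) setpointgroup(0, "sub_blue",  @ptnum, 1, "set");
Each resulting point group then corresponds to one first group, i.e., to one sub-area.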
In another possible implementation, a plurality of vertices on the hair-growth area are clicked manually in sequence, a closed graphic frame is obtained based on the clicking order of the vertices, the area covered by the graphic frame is regarded as a sub-area, and all vertices in the area covered by the graphic frame are classified into a first group. In this case, one first group corresponds to one sub-region, and the number of sub-regions is the same as the number of first groups. By determining at least one graphical box, the hair growth area may be divided into at least one sub-area.
Classifying a vertex into a first group means adding the vertex data to that group. The vertex data includes, but is not limited to, the three-dimensional coordinates of the vertex, the color information of the vertex, the texture information of the vertex, and the like.
Referring to fig. 3, fig. 3 is a schematic view of dividing a hair growth area into sub-areas according to an embodiment of the present application. As can be seen from fig. 3, the division of the hair-growth area into red, green and blue sub-areas can be achieved by drawing red, green or blue for vertices on the hair-growth area and dividing the partial area of the hair-growth area containing vertices of the same color into one sub-area. The color of each vertex in the red sub-area is red, the color of each vertex in the green sub-area is green, and the color of each vertex in the blue sub-area is blue.
In step 204, the hair data corresponding to each first hair model on any one of the subregions is divided into a group, and a hair data group corresponding to any one of the subregions is obtained.
In the embodiment of the present application, each first hair model corresponds to a set of hair data. The set of hair data includes the three-dimensional coordinates of a plurality of points, the serial numbers of these points, and the like. The serial numbers and the three-dimensional coordinates of the points can characterize a first hair model. The shape of the first hair model is not limited in this embodiment; it may be a straight line, a curve, or the like.
After dividing any one of the sub-areas, the individual first hair models on that sub-area are divided into a group. Since each first hair model corresponds to a set of hair data, the grouping of the respective first hair models on a sub-region may be achieved by grouping the hair data corresponding to the respective first hair models on the sub-region. Wherein any one of the sub-regions corresponds to a hair data set comprising hair data corresponding to respective first hair models on the sub-region.
In one possible implementation, the color information of each vertex in any one of the sub-regions is the same. In this case, step 204 includes steps 2041 to 2043.
Step 2041, for any one of the first hair models, determining a target vertex matching any one of the first hair models from among the respective vertices on the hair growth area.
In modeling the three-dimensional model, since the first hair model needs to be modeled on the hair growth area, when any one of the first hair models is modeled, the first hair model is matched with a certain vertex on the hair growth area.
When the electronic device acquires the three-dimensional model, since each vertex in the hair growth area of the three-dimensional model surface has already been matched with the first hair models on the hair growth area, acquiring the three-dimensional model is equivalent to acquiring the matching relationship between the vertices in the hair growth area and the first hair models on it.
For any one of the first hair models, a target vertex that matches the first hair model may be determined from the respective vertices on the hair growth area based on a matching relationship between the vertices in the hair growth area and the first hair model on the hair growth area. Wherein the target vertex matching the first hair model is the vertex of the hair growth area closest to the reference point of the first hair model.
Optionally, step 2041 includes: extracting a reference point of any one of the first hair models from hair data corresponding to any one of the first hair models; a target vertex is determined from the vertices on the hair-growth area that matches the reference point of any one of the first hair models.
Typically, the hair data corresponding to a first hair model includes data of a plurality of points, and the data of a point includes the three-dimensional coordinates of the point, the serial number of the point, and the like. Optionally, each first hair model corresponds to an array containing the data of every point included in the hair data of that model; in Houdini, such a hair curve is a primitive, and the array may be denoted as primitive.
When matching the first hair model with a certain vertex on the hair growth area, the reference point of the first hair model may be matched with the vertex. Illustratively, the distance between the reference point of the first hair model and each vertex on the hair-growth area is calculated based on the three-dimensional coordinates of the reference point of the first hair model and the three-dimensional coordinates of each vertex on the hair-growth area. The vertex corresponding to the minimum distance is determined as the vertex matching the reference point of the first hair model. In this case, when the electronic device acquires the three-dimensional model, it corresponds to acquiring a matching relationship between the vertex in the hair-growth area and the reference point of the first hair model on the hair-growth area.
For any one of the first hair models, the hair data corresponding to the first hair model includes data for a plurality of points. The reference point of the first hair model may be extracted from the plurality of points based on the data of the plurality of points. Illustratively, a reference sequence number is determined from the sequence numbers of the plurality of points, and the point corresponding to the reference sequence number is determined as the reference point of the first hair model. Optionally, the reference point of the first hair model is a point characterizing the root of the first hair model.
For example, the hair data corresponding to the first hair model includes 100 points of data, and serial numbers of the 100 points are 0 to 99, respectively. Connecting points with serial numbers 0 to 99, respectively, can characterize a hair from root to tip. Thus, the point with the sequence number 0 corresponds to the point of the root of the first hair model. The point with a sequence number of 0 (i.e. the point of the root) may be taken as a reference point for the first hair model.
It will be appreciated that the electronic device may create a set of points for storing reference points of the respective first hair model on the hair-growth area for subsequent transfer of color information and transfer of texture information. Alternatively, the set of points is denoted as rootpoint.
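As a minimal VEX sketch of this root-point extraction, assuming as in the example above that point 0 of each curve is the root, a primitive wrangle might read:
// Primitive wrangle: runs once per hair curve (primitive).
int pts[] = primpoints(0, @primnum); // point numbers of this curve, from root to tip
int root = pts[0];                   // first point along the curve = the root
setpointgroup(0, "rootpoint", root, 1, "set"); // collect the roots for later transfers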
Next, a target vertex that matches a reference point of any one of the first hair models is determined from among the respective vertices on the hair-growth area based on a matching relationship between the vertices in the hair-growth area and the reference points of the first hair models on the hair-growth area. Wherein the target vertex matching the reference point of the first hair model is the vertex of the hair growth area closest to the reference point of the first hair model.
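The nearest-vertex matching itself can be sketched with the VEX nearpoint function; in this sketch the second input of the wrangle is assumed to hold the hair-growth-area mesh, and target_vertex is an attribute name assumed for the example:
// Point wrangle over the rootpoint group; input 1 = hair-growth-area mesh.
int skin_pt = nearpoint(1, @P); // scalp vertex closest to this root point
i@target_vertex = skin_pt;      // remember the match for the later transfers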
Step 2042, transmitting the color information of the target vertex to any one of the first hair models, and obtaining the color information of any one of the first hair models.
The color information of each vertex in each sub-area has been determined above, and the target vertex is a vertex in a sub-area determined from the vertices on the hair growth area. The electronic device can therefore obtain the color information of the target vertex and transfer it to the first hair model matching the target vertex, obtaining the color information of that first hair model. The color information of the first hair model characterizes that the first hair model has the same color as the target vertex.
In the embodiment of the application, the color information of each first hair model on the hair growth area can be stored in the first storage space of the electronic device.
Referring to fig. 4, fig. 4 is a schematic color distribution diagram of a first hair model according to an embodiment of the present application. Wherein the color information of the first hair model of a certain color in fig. 4 is transferred from the color information of the vertices in the color subregion shown in fig. 3. For example, the color information of the first hair model of green is transferred from the color information of the vertices in the green sub-area shown in fig. 3.
Optionally, step 2042 includes: transmitting the color information of the target vertex to the reference point of any one of the first hair models to obtain the color information of the reference point of any one of the first hair models; and transmitting the color information of the reference point of any one of the first hair models to each point in the hair data corresponding to any one of the first hair models to obtain the color information of any one of the first hair models.
In this embodiment of the present application, since the vertex in the hair growth area and the reference point of the first hair model on the hair growth area have already been matched, the color information of the target vertex may be transferred to the reference point of the first hair model matched with the target vertex based on the matching relationship between the vertex in the hair growth area and the reference point of the first hair model on the hair growth area, so as to obtain the color information of the reference point of the first hair model. The color information of the reference point of the first hair model may characterize that the reference point of the first hair model has the same color as the target vertex that matches the reference point.
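Under the same assumptions as the matching sketch above, the vertex-to-reference-point color transfer could then be sketched as:
// Point wrangle over the rootpoint group; input 1 = hair-growth-area mesh.
int skin_pt = i@target_vertex;  // the matched scalp vertex found earlier
v@Cd = point(1, "Cd", skin_pt); // copy its painted colour onto the root point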
Referring to fig. 5, fig. 5 is a schematic diagram illustrating the color correspondence between vertices and reference points according to an embodiment of the present application. As can be seen from (1) and (2) in fig. 5, the color distribution of the vertices is substantially the same as that of the reference points.
Then, color information of the reference point of any one of the first hair models is transferred to each point in the hair data corresponding to the first hair model, and color information of the first hair model is obtained. When transferred, the first hair model corresponds to an array, which includes data for each point included in the hair data of the first hair model. Color information of a reference point of the first hair model may be transferred to each point in the hair data corresponding to the first hair model in a loop according to the serial number of each point. Wherein the color information of the first hair model characterizes the first hair model as having the same color as the reference point of the first hair model.
Alternatively, the data of the points contained in the array primitive corresponding to any first hair model may be extracted into an array pts using int pts[] = primpoints(0, @primnum);, so that the color information of the reference point of the first hair model can be transferred to each point of the first hair model listed in pts. Here, int indicates that the data in the array pts are integers (point numbers). primpoints is an extraction function that returns the point numbers making up a primitive. Its first argument, 0, refers to the first input geometry, and @primnum is the number of the current primitive, i.e., it identifies which first hair model is being processed.
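Putting the pieces together, a sketch of the root-to-curve color propagation as a primitive wrangle (using only standard Houdini attributes) might be:
// Primitive wrangle: copy the root colour to every point of the hair curve.
int pts[] = primpoints(0, @primnum);     // points of this curve, from root to tip
vector root_cd = point(0, "Cd", pts[0]); // colour already transferred to the root
foreach (int pt; pts) {
    setpointattrib(0, "Cd", pt, root_cd, "set"); // loop over the serial numbers
}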
Referring to fig. 6, fig. 6 is a schematic view of color correspondence between a reference point and a first hair model according to an embodiment of the present application. As can be seen from fig. 6 (1) and (2), the color distribution of the reference points is substantially the same as that of the first hair model.
The above method steps describe transferring the color information of the target vertex to the reference point of the first hair model, and transferring the color information of that reference point to the points of the first hair model. The overall color-transfer process is described below with reference to fig. 7, a flowchart of color transfer provided in an embodiment of the present application.
On one hand, the hair growth area may be obtained, and the operation of drawing colors for the vertices on the hair growth area may be received. The hair growth area is obtained together with the three-dimensional model of the virtual object, so the manner of obtaining it is described in step 201. An implementation of receiving the operation of drawing colors for the vertices is described in step 202 and is not repeated here.
On the other hand, the plurality of first hair models may be obtained, and the reference point of each first hair model may be extracted. The first hair models are likewise obtained together with the three-dimensional model of the virtual object, as described in step 201. The implementation of extracting the reference points of the first hair models is described in step 2041 and is not repeated here.
Next, for any first hair model, the color information of the target vertex matching the reference point of the first hair model is transferred to that reference point, and the color information of the reference point is transferred to each point of the first hair model. This implementation is described in step 2042 and is not repeated here.
Thereafter, the color of each first hair model is displayed. Since the respective points of the first hair model correspond to the same color information, the color of the first hair model can be displayed based on the color information of the respective points of the first hair model by the three-dimensional model processing software, and the display effect is as shown in (2) of fig. 6.
Step 2043, dividing the hair data corresponding to the first hair model of the same color information into a group, and obtaining a hair data group corresponding to any one of the subareas.
For any two first hair models, if the color information of the two first hair models is the same, the hair data corresponding to the two first hair models are divided into the same hair data group. If the color information of the two first hair models is different, the hair data corresponding to the two first hair models are divided into two different hair data sets. Since the color of the first hair model on the same sub-area is the same, and the color of the first hair model corresponding to the hair data in the same hair data set is also the same, one hair data set corresponds to one sub-area.
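At the level of whole curves, the same grouping can be sketched with a primitive wrangle; the group name red_hairs is an assumption for the example:
// Primitive wrangle: group hair curves by their (already transferred) colour.
int pts[] = primpoints(0, @primnum);
vector cd = point(0, "Cd", pts[0]); // every point of the curve shares this colour
if (cd.r == 1)
    setprimgroup(0, "red_hairs", @primnum, 1, "set");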
Alternatively, red may be set to (1, 0, 0), green to (0, 1, 0) and blue to (0, 0, 1). All hair data of one color are then extracted through the value of a single channel, yielding the hair data group corresponding to the sub-region of that color. Illustratively, red hair data are extracted through the value of the first channel according to the code shown below.
if (@Cd.r == 1) {
    setpointgroup(0, "red", @ptnum, 1, "set");
}
Here, @Cd.r reads the first (red) channel of the point color attribute. setpointgroup is a function that creates or writes a point group. Its first argument, 0, is the handle of the geometry being written, which carries the hair data corresponding to the first hair models on the hair growth area. "red" names a point group that will contain the points of the first hair models whose first channel equals 1 (i.e., the red ones). @ptnum is the serial number of the current point, a point on a first hair model. The argument 1 is the membership value, i.e., the point is added to the group, and "set" specifies that a set (assignment) command is executed. The snippet is written in VEX, the programming language built into Houdini.
Through the above code, the hair data corresponding to the red first hair models can be placed in the red point group. Similarly, with analogous code, the hair data corresponding to the green first hair models can be placed in the green point group, and the hair data corresponding to the blue first hair models in the blue point group, thereby dividing the hair data corresponding to first hair models with the same color information into one group.
It will be appreciated that after the deletion process is performed on the hair data in the hair data set corresponding to each sub-region, and/or after the texture information of the vertex is transferred to the hair model (which may be the first hair model or the second hair model) matching the vertex, the hair data corresponding to the hair model of a certain color information may be extracted using the above code.
In step 205, an adjustment process is performed on the hair data set corresponding to any one of the sub-areas.
By adjusting the hair data group corresponding to a sub-area, the first hair models on the hair growth area can be adjusted region by region, so that the first hair models in different regions can be controlled more finely, improving the hair fidelity of each sub-area on the three-dimensional model and thereby the fidelity of the three-dimensional model of the virtual object.
In one possible implementation, step 205 includes steps 2051 through 2052.
Step 2051, obtaining a hair deletion parameter corresponding to any one of the sub-regions, the hair deletion parameter being used to characterize the deletion of a second number of hair data from the first number of hair data, the first number being greater than the second number.
When the hair data groups corresponding to the sub-regions have been obtained, the electronic device can acquire color information input by the user. Since one hair data group corresponds to one piece of color information, the electronic device can determine the hair data group corresponding to the input color information, which is the hair data group of a certain sub-area. In this way, the electronic device can determine the hair data group corresponding to any sub-region.
For the hair data group corresponding to any sub-region, the electronic device may receive a hair deletion parameter entered by the user. The hair deletion parameter is used for deleting hair data from the hair data group corresponding to that sub-area.
Optionally, the hair deletion parameter comprises a first number and a second number. Based on the hair data group corresponding to any sub-region, the electronic device may display, through the three-dimensional model processing software, the first hair model corresponding to each piece of hair data in the group. In addition, the electronic device may display a first-number input box and a second-number input box, so that the user inputs the value of the first number in the first-number input box and the value of the second number in the second-number input box, the value of the first number being greater than that of the second number. When the electronic device receives the two entered values, it has obtained the hair deletion parameter, which indicates that the second number of pieces of hair data are to be deleted from every first number of pieces.
For example, the user enters 50 in the first-number input box and 13 in the second-number input box. In this case, the electronic device determines that the hair deletion parameter means deleting 13 pieces of hair data out of every 50.
Optionally, the hair deletion parameter is the ratio of the second number to the first number; since the first number is greater than the second number, the parameter is a value greater than 0 and less than 1. When the electronic device displays the first hair models corresponding to the hair data in the group through the three-dimensional model processing software, it can also display an input box for the hair deletion parameter, so that the user can enter its value there; the electronic device thus obtains the hair deletion parameter, which indicates that the second number of pieces of hair data are to be deleted from the first number of pieces.
For example, the user enters 0.9 in the input box of the hair deletion parameter; the electronic device may then determine that the hair deletion parameter means deleting 9 pieces of hair data out of every 10.
Step 2052, based on the hair deletion parameter corresponding to any one of the subregions, performs a deletion process on the hair data in the hair data group corresponding to any one of the subregions.
Based on the hair data group corresponding to any sub-region, the electronic device may display, through the three-dimensional model processing software, the first hair model corresponding to each piece of hair data in the group. When the hair deletion parameter corresponding to the sub-area has been acquired, for every first number of pieces of hair data in the hair data group corresponding to the sub-area, the second number of pieces are deleted in the manner indicated by the hair deletion parameter (i.e., the second number of pieces of hair data are deleted from the first number of pieces), obtaining the deletion-processed hair data group corresponding to the sub-area.
Optionally, any one of the hair data corresponds to a serial number. For each hair data in the hair data set corresponding to any one of the sub-areas, each first number of hair data may be divided into a second group according to the serial number of each hair data, such that each second group comprises a first number of hair data having a serial number in succession. In this case, the second number of pieces of hair data may be deleted from the first number of pieces of hair data for each second group, and the embodiment of the present application does not limit the deletion method, and may preferentially delete pieces of hair data having a smaller or larger serial number, or may adopt a method of interval deletion, random deletion, or the like.
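As a sketch of such ratio-driven deletion in VEX, keeping the first curve of each second group (the application notes the choice of which pieces to delete is not limited; the channel names delete_ratio and block_size are assumptions for the example):
// Primitive wrangle over one hair data group (e.g. the curves in the red group).
float del_ratio = chf("delete_ratio"); // e.g. 0.9 -> delete 9 out of every 10
int   block     = chi("block_size");   // e.g. 10 -> the "first number"
int   keep      = int(rint(block * (1.0 - del_ratio))); // curves kept per block
if (@primnum % block >= keep)
    removeprim(0, @primnum, 1); // remove the curve together with its points
Interval or random deletion would only change the condition used to pick which curves of each second group are removed.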
By deleting the hair data in the hair data group corresponding to any one of the sub-areas, the distribution of the number of hairs in the different areas can be more finely adjusted, and the hair expression effect can be improved. Since a part of the hair data is deleted, storage resources, transmission resources, rendering resources, and the like required for the hair data can be reduced. By optimizing the data volume of the hair data, the occupation of the storage space can be reduced, the transmission efficiency and the rendering efficiency can be improved, and the performance and the frame rate in the scenes such as virtual live broadcast and the like can be improved.
For example, the serial numbers of the hair data in the hair data group corresponding to a sub-region are 0 to 99, and these hair data are divided into 10 second groups according to the deletion manner of deleting 9 pieces of hair data out of every 10. The first second group includes the hair data with serial numbers 0 to 9, from which the hair data numbered 0 to 8 can be deleted. The second second group includes the hair data with serial numbers 10 to 19, from which the hair data numbered 10 to 18 can be deleted, and so on.
Referring to fig. 8, fig. 8 is a schematic diagram of a method for deleting hair data according to an embodiment of the present application. Fig. 8 shows a first hair model corresponding to the remaining hair data after the deletion process is performed on the hair data corresponding to the first hair model of red color in fig. 4.
In one possible implementation, the deleted processed hair data set corresponding to any one of the sub-regions comprises hair data of the second hair model.
In this embodiment, the hair data group corresponding to any sub-region includes the hair data of each first hair model on the sub-region. When deletion processing is performed on the hair data group of the sub-region, some of the first hair models may be deleted; the remaining first hair models are then referred to as second hair models. Thus, the deletion-processed hair data group corresponding to the sub-region includes hair data of the second hair models.
In this case, step 2052 is followed by steps 2053 to 2054.
Step 2053, for any one of the second hair models, texture information for the target vertices that match any one of the second hair models is determined.
In the embodiment of the application, the electronic device may obtain the matching relationship between the vertex in the hair growth area and the first hair model on the hair growth area. Since the second hair model is the first hair model remaining after deleting a part of the first hair model, for any one of the second hair models, a target vertex that matches the second hair model can be determined from among the respective vertices on the hair growth area based on a matching relationship between the vertices in the hair growth area and the first hair model on the hair growth area. Wherein the target vertex matching the second hair model is the vertex of the hair growth area closest to the reference point of the second hair model.
Optionally, step 2053 includes: extracting a reference point of any one second hair model from the hair data corresponding to any one second hair model; a target vertex is determined from the vertices on the hair-growth area that matches the reference point of any one of the second hair models.
For any one of the second hair models, the hair data corresponding to the second hair model includes data for a plurality of points. The reference point of the second hair model may be extracted from the plurality of points based on the data of the plurality of points. Illustratively, a reference sequence number is determined from the sequence numbers of the plurality of points, and the point corresponding to the reference sequence number is determined as the reference point of the second hair model. Optionally, the reference point of the second hair model is a point characterizing the root of the second hair model.
The electronic device may obtain a matching relationship between vertices in the hair-growth area and reference points of the first hair model on the hair-growth area. Since the second hair model is the first hair model remaining after deleting a part of the first hair model, a target vertex that matches a reference point of any one of the second hair models can be determined from among the respective vertices on the hair growth area based on a matching relationship between the vertices in the hair growth area and the reference point of the first hair model on the hair growth area. Wherein the target vertex matching the reference point of the second hair model is the vertex of the hair growth area closest to the reference point of the second hair model.
After deletion processing is performed on the hair data in the hair data group corresponding to any sub-region, the remaining first hair models are called second hair models, and the texture information of the target vertex matching a second hair model is transferred to it, so that the texture of the hairs can be controlled region by region, improving the appearance of the hair texture in different regions. Moreover, deleting the hair data first and transferring the texture information afterwards reduces the number of hair models to which texture information must be transferred, improving transfer efficiency.
Step 2054, transmitting texture information of the target vertex to any one of the second hair models, and obtaining texture information of any one of the second hair models.
In modeling the three-dimensional model, texture information may be set for any vertex in the hair growth area; the texture information is used to characterize the material. The material is not limited in this embodiment; examples are chemical fiber (a fiber with textile properties), protein fiber (an artificial protein fiber), and animal hair (such as yak hair or horsehair).
Alternatively, for any vertex on the hair-growth area, texture information for any vertex may be obtained when the electronic device receives an operation to draw a material for any vertex. The operation of drawing the material for the vertex is similar to the operation of drawing the color for the vertex, so the determination manner of the texture information of the vertex can be seen from the description of the color information of the vertex, and the description is omitted herein. The texture information of the vertex may be RGB values of the vertex or UV texture values of the vertex.
The electronic device may determine texture information of the target vertex and transmit the texture information of the target vertex to a second hair model that matches the target vertex, resulting in texture information of the second hair model. Wherein the texture information of the second hair model characterizes the second hair model as having the same material as the target vertex.
Optionally, step 2054 includes: transmitting the texture information of the target vertex to the reference point of any one of the second hair models to obtain the texture information of the reference point of any one of the second hair models; and transmitting the texture information of the reference point of any one second hair model to each point in the hair data corresponding to any one second hair model to obtain the texture information of any one second hair model.
In the embodiment of the present application, the vertices in the hair growth area have already been matched with the reference points of the first hair models on the hair growth area, and the second hair models are the first hair models remaining after part of the first hair models are deleted. Accordingly, the texture information of a target vertex can be transferred, based on that matching relationship, to the reference point of the second hair model matched with the target vertex, yielding the texture information of the reference point of the second hair model. The texture information of the reference point of a second hair model characterizes that the reference point has the same material as the target vertex matched with it.
Then, texture information of the reference point of any one of the second hair models is transferred to each point in the hair data corresponding to the second hair model, and the texture information of the second hair model is obtained. Wherein texture information of a second hair model characterizes that the second hair model has the same material as the reference point of the second hair model.
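The two-stage transfer just described can be sketched as follows; the dictionary-based layout (vertex textures, model-to-vertex matches, per-model point lists) is an assumption made only for illustration.

```python
# Illustrative sketch only: step 2054's two-stage texture transfer.
def transfer_texture(vertex_textures, matches, hair_models):
    """vertex_textures: vertex index -> texture value (e.g. an RGB triple);
    matches: second hair model id -> index of its matched target vertex;
    hair_models: second hair model id -> list of point dicts."""
    model_textures = {}
    for model_id, vertex_index in matches.items():
        # stage 1: target vertex -> reference point of the matched model
        reference_texture = vertex_textures[vertex_index]
        # stage 2: reference point -> every point of that model
        for point in hair_models[model_id]:
            point["texture"] = reference_texture
        model_textures[model_id] = reference_texture
    return model_textures
```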
Referring to fig. 9, fig. 9 is a schematic diagram of texture information transfer according to an embodiment of the present application. In fig. 9, (1) represents transferring the texture information of each vertex in the hair-growth area to the reference point of the second hair model matched with that vertex; (2) represents the texture information of each reference point after this transfer; and (3) represents the texture information of each vertex in the hair-growth area. That is, transferring the texture information of each vertex in (3) according to (1) yields (2). As can be seen from (2) and (3) in fig. 9, the position of the texture information of each reference point on the coordinate axes substantially coincides with the position of the texture information of each vertex on the coordinate axes.
It will be appreciated that in steps 2053 to 2054, the hair data in the hair data group corresponding to any one of the sub-regions is deleted first, and then the texture information of the target vertex matched with any one of the second hair models (that is, the first hair models remaining after the deletion process) is transferred to that second hair model. In application, the texture information of the target vertex matched with a first hair model on any one of the sub-regions may instead be transferred to the first hair model without the deletion process, or the texture information may be transferred to the first hair model first and the hair data in the hair data group corresponding to that sub-region deleted afterwards. Since the deletion process and the transfer process of texture information have been described above, their descriptions are omitted here.
It should be noted that the information (including but not limited to user equipment information and user personal information), data (including but not limited to data for analysis, stored data, and displayed data), and signals referred to in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the three-dimensional models of virtual objects referred to in this application are all acquired with sufficient authorization.
In the method, drawing operations for the vertices on the hair growth area are received to obtain the color information of each vertex; the hair growth area is divided into sub-areas based on the color information of each vertex; the hair data corresponding to the first hair models on a sub-area are divided into a group to obtain the hair data group corresponding to that sub-area; and the hair data group is adjusted, thereby adjusting the hairs corresponding to the hair data group. Because the vertices on the hair growth area are not occluded, whereas the hairs on the hair growth area are prone to occlusion, dividing the hair growth area by drawing on its vertices is faster than grouping hairs by manually selecting them. The hairs of the virtual object can therefore be grouped quickly, which increases the grouping speed and improves processing efficiency.
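Read as code, this grouping step reduces to bucketing hair models by the color transferred to them; the sketch below assumes model ids mapped to color tuples, which is illustrative only.

```python
# Illustrative sketch only: one hair data group per color, i.e. per sub-area.
from collections import defaultdict

def group_hair_by_color(model_colors):
    """model_colors: first hair model id -> its transferred color value;
    returns color -> list of model ids (the hair data group of a sub-area)."""
    groups = defaultdict(list)
    for model_id, color in model_colors.items():
        groups[color].append(model_id)
    return dict(groups)
```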
The foregoing describes the hair treatment method of the embodiments of the present application from the perspective of method steps; the method is described more fully below in connection with fig. 10. Fig. 10 is a flowchart of a hair treatment method provided in an embodiment of the present application.
In the embodiments of the present application, on the one hand, the hair growth area may be acquired, and the operation of drawing the color for each vertex on the hair growth area may be received. On the other hand, a plurality of first hair models may be acquired and reference points of the respective first hair models may be extracted. For any one of the first hair models, color information of a target vertex matching a reference point of the first hair model is transferred to the reference point of the first hair model, and color information of the reference point of the first hair model is transferred to each point of the first hair model. The implementation manner of these steps may be seen in the description of fig. 7, and will not be described herein.
Then, the hair data corresponding to the first hair model of the same color information is divided into a group, the hair data in the group is deleted, and the first hair model corresponding to the remaining hair data is regarded as the second hair model. The implementation manner of "grouping hair data corresponding to the first hair model of the same color information into a group" may be described in step 2043, and the implementation manner of deleting hair data may be described in steps 2051 to 2052, which are not described herein.
Thereafter, texture information of each vertex on the hair-growing area is acquired. For any one of the second hair models, texture information of a target vertex matching a reference point of the second hair model is transferred to the reference point of the second hair model. Texture information for a reference point of the second hair model is transferred to points of the second hair model. The implementation manner of the foregoing may be described in steps 2053 to 2054, which is not described herein.
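As a compact illustration of the fig. 10 pipeline, the self-contained sketch below strings the stages together: match roots to vertices, group by color, delete part of each group, and transfer texture to the remainder. Every name, the data layout, and the keep-the-tail deletion rule are assumptions for illustration only.

```python
# Illustrative end-to-end sketch of the fig. 10 pipeline (assumed layout:
# hair_models maps model id -> list of (x, y, z) points, points[0] = root).
def hair_pipeline(area_vertices, vertex_colors, vertex_textures,
                  hair_models, delete_count_per_group):
    def nearest(pos):
        return min(range(len(area_vertices)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(area_vertices[i], pos)))

    # match each first hair model's root point to a target vertex
    matches = {m: nearest(pts[0]) for m, pts in hair_models.items()}

    # group hair data by the color transferred from the matched vertex
    groups = {}
    for m, v in matches.items():
        groups.setdefault(vertex_colors[v], []).append(m)

    # delete part of each group, then transfer texture to the remainder
    textures = {}
    for color, members in groups.items():
        for m in members[delete_count_per_group:]:  # simplest deletion rule
            textures[m] = vertex_textures[matches[m]]
    return textures
```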
Referring to fig. 11, fig. 11 is an effect diagram of a hair treatment result provided in an embodiment of the present application. In fig. 11, (1-1) shows first hair models in different colors, and (1-2) shows the hair effect after the first hair models in (1-1) are processed, where the processing includes but is not limited to the deletion process and the texture information transfer process. Likewise, (2-1) shows first hair models in different colors and (2-2) shows the hair effect after the first hair models in (2-1) are processed, and (3-1) shows first hair models in different colors and (3-2) shows the hair effect after the first hair models in (3-1) are processed. As can be seen from fig. 11, the hair processing method provided in the embodiments of the present application improves the hair fidelity of the virtual object.
Fig. 12 is a schematic structural view of a hair treatment device according to an embodiment of the present application. As shown in fig. 12, the device includes:
an acquisition module 1201 for acquiring a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models over the hair growth area;
a determining module 1202 for, for any one of the vertices on the hair-growing area, obtaining color information of any one of the vertices in response to receiving a drawing operation for any one of the vertices;
a dividing module 1203 configured to divide the hair growth area into at least one sub-area based on the color information of each vertex on the hair growth area, the color information of each vertex in any one of the sub-areas being the same;
a grouping module 1204, configured to divide, for any one of the sub-regions, hair data corresponding to each of the first hair models on the any one of the sub-regions into a group, to obtain a hair data group corresponding to the any one of the sub-regions;
and the adjusting module 1205 is configured to perform an adjusting process on the hair data set corresponding to any one of the sub-areas.
In one possible implementation, the color information of each vertex in any one sub-region is the same;
a grouping module 1204 for determining, for any one of the first hair models, a target vertex matching any one of the first hair models from among the respective vertices on the hair growth area; transmitting the color information of the target vertex to any one of the first hair models to obtain the color information of any one of the first hair models; and dividing the hair data corresponding to the first hair model with the same color information into a group to obtain a hair data group corresponding to any one subarea.
In a possible implementation manner, the grouping module 1204 is configured to extract a reference point of any one of the first hair models from the hair data corresponding to any one of the first hair models; a target vertex is determined from the vertices on the hair-growth area that matches the reference point of any one of the first hair models.
In a possible implementation manner, the grouping module 1204 is configured to transmit the color information of the target vertex to the reference point of any one of the first hair models, so as to obtain the color information of the reference point of any one of the first hair models; and transmitting the color information of the reference point of any one of the first hair models to each point in the hair data corresponding to any one of the first hair models to obtain the color information of any one of the first hair models.
In one possible implementation, the adjusting module 1205 is configured to obtain a hair deletion parameter corresponding to any one of the sub-regions, where the hair deletion parameter is used to characterize deletion of a second number of hair data from a first number of hair data, the first number being greater than the second number; and to delete the hair data in the hair data group corresponding to any one of the sub-regions based on the hair deletion parameter corresponding to that sub-region.
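For concreteness, one plausible realization of such a deletion parameter is sketched below; the random-sampling strategy and all names are assumptions, since the embodiment does not prescribe how the second number of strands is chosen.

```python
# Illustrative sketch only: delete a second number of strands from the first
# number in a group; random sampling is one plausible selection strategy.
import random

def delete_hair_data(group, delete_count, seed=0):
    """Remove delete_count strands from the group (delete_count < len(group))
    and return the remaining hair data, i.e. the second hair models."""
    rng = random.Random(seed)
    doomed = set(rng.sample(range(len(group)), delete_count))
    return [strand for i, strand in enumerate(group) if i not in doomed]
```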
In one possible implementation, the hair data set corresponding to any one of the sub-regions after the deletion process includes hair data of second hair models;
the hair treatment device further includes:
a determining module 1202 further configured to determine, for any one of the second hair models, texture information of a target vertex that matches any one of the second hair models;
and the transmission module is used for transmitting the texture information of the target vertex to any one of the second hair models to obtain the texture information of any one of the second hair models.
In one possible implementation manner, the transmitting module is configured to transmit texture information of the target vertex to a reference point of any one of the second hair models, so as to obtain texture information of the reference point of any one of the second hair models; and transmitting the texture information of the reference point of any one second hair model to each point in the hair data corresponding to any one second hair model to obtain the texture information of any one second hair model.
The device receives drawing operations for the vertices on the hair growth area to obtain the color information of each vertex, divides the hair growth area into sub-areas based on the color information of each vertex, divides the hair data corresponding to the first hair models on a sub-area into a group to obtain the hair data group corresponding to that sub-area, and adjusts the hair data group, thereby adjusting the hairs corresponding to the hair data group. Because the vertices on the hair growth area are not occluded, whereas the hairs on the hair growth area are prone to occlusion, dividing the hair growth area by drawing on its vertices is faster than grouping hairs by manually selecting them. The hairs of the virtual object can therefore be grouped quickly, which increases the grouping speed and improves processing efficiency.
It should be understood that when the apparatus provided in fig. 12 implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in the foregoing embodiment and the method embodiments belong to the same concept; its specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 13 shows a block diagram of a terminal device 1300 according to an exemplary embodiment of the present application. The terminal apparatus 1300 includes: a processor 1301, and a memory 1302.
Processor 1301 may include one or more processing cores, for example a 4-core or 8-core processor. Processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). Processor 1301 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, processor 1301 may integrate a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one computer program for execution by processor 1301 to implement the hair treatment methods provided by the method embodiments herein.
In some embodiments, the terminal device 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, and a power supply 1308.
The peripheral interface 1303 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 1301 as a control signal for processing. The display screen 1305 may then also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, disposed on the front panel of the terminal device 1300; in other embodiments, there may be at least two display screens 1305, disposed on different surfaces of the terminal device 1300 or in a folded design; in still other embodiments, the display screen 1305 may be a flexible display disposed on a curved or folded surface of the terminal device 1300. The display screen 1305 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs the signals to the processor 1301 for processing or to the radio frequency circuit 1304 for voice communication. For stereo acquisition or noise reduction, multiple microphones may be disposed at different parts of the terminal device 1300; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1307 may also include a headphone jack.
A power supply 1308 is used to power the various components in the terminal device 1300. The power source 1308 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power source 1308 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal device 1300 also includes one or more sensors 1309. The one or more sensors 1309 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, optical sensor 1314, and proximity sensor 1315.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal apparatus 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the terminal device 1300, and the gyro sensor 1312 may collect a 3D motion of the user on the terminal device 1300 in cooperation with the acceleration sensor 1311. Processor 1301 can implement the following functions based on the data collected by gyro sensor 1312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side frame of the terminal device 1300 and/or under the display screen 1305. When the pressure sensor 1313 is disposed on a side frame of the terminal device 1300, it can detect the user's grip signal on the terminal device 1300, and the processor 1301 performs left- or right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed under the display screen 1305, the processor 1301 controls the operable controls on the UI according to the user's pressure operation on the display screen 1305. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1314 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1314. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1305 is turned down. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1314.
The proximity sensor 1315, also referred to as a distance sensor, is typically disposed on the front panel of the terminal device 1300 and is used to collect the distance between the user and the front face of the terminal device 1300. In one embodiment, when the proximity sensor 1315 detects that the distance between the user and the front face of the terminal device 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the on-screen state to the off-screen state; when the proximity sensor 1315 detects that the distance gradually increases, the processor 1301 controls the display screen 1305 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 14 is a schematic structural diagram of a server provided in an embodiment of the present application. The server 1400 may vary considerably in configuration and performance, and may include one or more processors 1401 (for example, CPUs) and one or more memories 1402, where the one or more memories 1402 store at least one computer program that is loaded and executed by the one or more processors 1401 to implement the hair processing method provided by the foregoing method embodiments. Of course, the server 1400 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for input and output, and may include other components for implementing device functions, which are not described here.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to cause an electronic device to implement any of the hair treatment methods described above.
Alternatively, the above-mentioned computer readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program or a computer program product is also provided, in which at least one computer program is stored which is loaded and executed by a processor for causing an electronic device to implement any of the above-mentioned hair treatment methods.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (11)

1. A hair treatment method, the method comprising:
obtaining a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models on the hair growth area;
for any one of the vertices on the hair-growth area, in response to receiving a drawing operation for the any one of the vertices, deriving color information for the any one of the vertices;
dividing the hair-growing area into at least one sub-area based on the color information of each vertex on the hair-growing area, wherein the color information of each vertex in any one sub-area is the same;
for any one subarea, dividing the hair data corresponding to each first hair model on the any one subarea into a group to obtain a hair data group corresponding to the any one subarea;
And adjusting the hair data group corresponding to any one of the subareas.
2. The method of claim 1, wherein the color information of each vertex in any one of the sub-regions is the same; the step of dividing the hair data corresponding to each first hair model on any one of the subareas into a group to obtain a hair data group corresponding to any one of the subareas, includes:
for any one of the first hair models, determining a target vertex matching the any one of the first hair models from among the respective vertices on the hair growth area;
transmitting the color information of the target vertex to any one of the first hair models to obtain the color information of any one of the first hair models;
and dividing the hair data corresponding to the first hair model with the same color information into a group to obtain a hair data group corresponding to any one subarea.
3. The method of claim 2, wherein the determining a target vertex matching the any one of the first hair models from among the respective vertices on the hair growth area comprises:
extracting a reference point of any one of the first hair models from hair data corresponding to the any one of the first hair models;
A target vertex is determined from the respective vertices on the hair-growth area that matches the reference point of the any one of the first hair models.
4. The method of claim 2, wherein said communicating color information of said target vertex to said any one of said first hair models results in color information of said any one of said first hair models, comprising:
transmitting the color information of the target vertex to the reference point of any one of the first hair models to obtain the color information of the reference point of any one of the first hair models;
and transmitting the color information of the reference point of any one of the first hair models to each point in the hair data corresponding to any one of the first hair models to obtain the color information of any one of the first hair models.
5. The method according to any one of claims 1 to 4, wherein said adjusting the hair data set corresponding to said any one of the sub-areas comprises:
acquiring a hair deletion parameter corresponding to any one of the subareas, wherein the hair deletion parameter is used for representing that a second quantity of hair data is deleted from a first quantity of hair data, and the first quantity is larger than the second quantity;
And deleting the hair data in the hair data group corresponding to any one of the subareas based on the hair deletion parameters corresponding to any one of the subareas.
6. The method of claim 5, wherein the hair data set corresponding to the any one sub-region after the deletion process comprises hair data of a second hair model; and after the deleting the hair data in the hair data group corresponding to the any one sub-region based on the hair deletion parameter corresponding to the any one sub-region, the method further comprises:
for any one of the second hair models, determining texture information of a target vertex matched with the any one of the second hair models;
and transmitting the texture information of the target vertex to any one of the second hair models to obtain the texture information of any one of the second hair models.
7. The method of claim 6, wherein said communicating texture information for the target vertex to the any one of the second hair models results in texture information for the any one of the second hair models, comprising:
transmitting the texture information of the target vertex to the reference point of any one of the second hair models to obtain the texture information of the reference point of any one of the second hair models;
And transmitting the texture information of the reference point of any one second hair model to each point in the hair data corresponding to any one second hair model to obtain the texture information of any one second hair model.
8. A hair treatment device, the device comprising:
an acquisition module for acquiring a three-dimensional model of a virtual object, the three-dimensional model comprising a hair growth area and a plurality of first hair models on the hair growth area;
a determining module for, for any one of the vertices on the hair-growing area, obtaining color information of the any one of the vertices in response to receiving a drawing operation for the any one of the vertices;
a dividing module for dividing the hair-growing area into at least one sub-area based on the color information of each vertex on the hair-growing area, the color information of each vertex in any one sub-area being the same;
the grouping module is used for grouping the hair data corresponding to each first hair model on any one subarea into a group for any one subarea, so as to obtain a hair data group corresponding to any one subarea;
and the adjusting module is used for adjusting the hair data set corresponding to any one of the subareas.
9. An electronic device comprising a processor and a memory, wherein the memory stores at least one computer program, the at least one computer program being loaded and executed by the processor to cause the electronic device to implement the hair treatment method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that at least one computer program is stored in the computer readable storage medium, which is loaded and executed by a processor to cause an electronic device to implement the hair treatment method according to any one of claims 1 to 7.
11. A computer program product, characterized in that at least one computer program is stored in the computer program product, which is loaded and executed by a processor for causing an electronic device to implement the hair treatment method according to any one of claims 1 to 7.