CN115063330A - Hair rendering method and device, electronic equipment and storage medium


Info

Publication number: CN115063330A
Application number: CN202210663477.2A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: model, rendering, semi-transparent, hair
Inventor: 冷晨
Current and original assignee: Beijing Datianmian White Sugar Technology Co., Ltd.
Legal status: Pending

Classifications

All classifications fall under G (Physics), G06 (Computing; calculating or counting), G06T (Image data processing or generation, in general):

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 15/005: General purpose rendering architectures (under G06T 15/00, 3D [Three Dimensional] image rendering)
    • G06T 2207/20221: Image fusion; image merging (indexing scheme for image analysis or enhancement: special algorithmic details, image combination)
    • G06T 2207/30196: Human being; person (indexing scheme for image analysis or enhancement: subject of image, context of image processing)

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Graphics
  • Image Generation

Abstract

The present disclosure provides a hair rendering method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a hair model and a color map corresponding to the hair model, the hair model comprising a plurality of semi-transparent hair piece models arranged in a stack; for each semi-transparent hair piece model, performing opaque rendering on the model using the color map to obtain a first rendering result that includes depth information of the model relative to a virtual camera; performing semi-transparent rendering on the model using the color map to obtain a second rendering result; obtaining a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result; and fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain a rendered image corresponding to the hair model. Embodiments of the present disclosure can improve the accuracy of the rendering order of multiple semi-transparent objects.

Description

Hair rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a hair rendering method, a hair rendering apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, image rendering has matured, and rendered images are ever closer to real images. In the related art, however, rendering multiple semi-transparent models in a virtual scene (such as the hair model of a hyper-realistic character) requires mixing their colors, so depth values cannot be written during rendering. As a result, the semi-transparent models have no defined positional relationship in the virtual scene, rendering-order errors easily occur, and the rendering result is incorrect.
Disclosure of Invention
The embodiment of the disclosure at least provides a hair rendering method, a hair rendering device, an electronic device and a computer readable storage medium.
The embodiment of the disclosure provides a hair rendering method, which comprises the following steps:
acquiring a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel;
for each semi-transparent hair piece model, performing opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model, where the first rendering result comprises depth information of the semi-transparent hair piece model relative to a virtual camera;
performing semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model;
obtaining a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result;
and fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain a rendered image corresponding to the hair model.
In the embodiments of the present disclosure, when the hair model comprises a plurality of stacked semi-transparent hair piece models, opaque rendering is first performed on each semi-transparent hair piece model, semi-transparent rendering is then performed on each one to obtain its corresponding rendered image, and the rendered images of the plurality of semi-transparent hair piece models are finally fused into the rendered image of the hair model. Because the opaque rendering yields each semi-transparent hair piece model's depth information relative to the virtual camera, rendering-order errors among different semi-transparent hair piece models during semi-transparent rendering are reduced, the accuracy of the rendering order of multiple semi-transparent objects is improved, and the rendering effect of multiple semi-transparent objects is improved.
In a possible embodiment, when the hair model represents the hair roots, the hair model further includes an opaque hair piece model stacked with the plurality of semi-transparent hair piece models, the opaque hair piece model being located near the scalp.
In the embodiments of the present disclosure, making the hair piece model nearest the scalp opaque prevents the scalp from showing through due to the transparency information in the color map, increasing the realism of hair rendering.
In a possible embodiment, the semi-transparent hair piece model includes a front face close to the virtual camera and a back face far from the virtual camera, and performing opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model includes:
performing opaque rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result of the semi-transparent hair piece model;
and performing semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model includes:
performing semi-transparent rendering on the front face of the semi-transparent hair piece model using the color map to obtain the second rendering result of the semi-transparent hair piece model.
In the embodiments of the present disclosure, the color map is used to render the back face of the semi-transparent hair piece model opaquely and its front face semi-transparently, so the position of the semi-transparent hair piece model can be determined without affecting the rendering effect, reducing rendering-order errors.
In a possible embodiment, performing opaque rendering on the back face of the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model includes:
performing color rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result.
In the embodiments of the present disclosure, color rendering is performed on the back face of the semi-transparent hair piece model using the color map, so the first rendering result also contains color information, enhancing the rendering of the model's back face. In addition, because the first rendering performs only color rendering, waste of resources from unnecessary rendering is reduced.
In a possible embodiment, performing semi-transparent rendering on the front face of the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model includes:
acquiring position information and light direction information of the semi-transparent hair piece model;
and performing color rendering on the front face of the semi-transparent hair piece model using the color map, and performing shadow rendering on the front face of the semi-transparent hair piece model based on the light direction information and the position information, to obtain the second rendering result.
In the embodiments of the present disclosure, the front face of the semi-transparent hair piece model undergoes both color rendering with the color map and shadow rendering, so the second rendering result contains the model's color information and lighting information, making the rendered hair look more real and natural.
In a possible implementation, fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain the rendered image corresponding to the hair model includes:
fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera, to obtain the rendered image corresponding to the hair model.
In the embodiments of the present disclosure, fusing the rendered images according to each semi-transparent hair piece model's depth information relative to the virtual camera reduces ordering errors among the semi-transparent hair piece models, so the image generated after rendering is closer to the real effect.
In a possible implementation, fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera includes:
determining a fusion order of the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera;
and fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the fusion order, to obtain the rendered image corresponding to the hair model.
In the embodiments of the present disclosure, the fusion order of the rendered images is determined from each semi-transparent hair piece model's depth information relative to the virtual camera, and the rendered images are then fused in that order. The fusion order thus reflects the positional relationship among the semi-transparent hair piece models, so the image generated after rendering is closer to the real effect.
The embodiments of the present disclosure further provide a hair rendering apparatus, including:
a model acquisition module, configured to acquire a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel;
a first rendering module, configured to perform, for each semi-transparent hair piece model, opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model, where the first rendering result includes depth information of the semi-transparent hair piece model relative to a virtual camera;
a second rendering module, configured to perform semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model;
an image generation module, configured to obtain a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result;
and an image fusion module, configured to fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain a rendered image corresponding to the hair model.
In a possible embodiment, when the hair model represents the hair roots, the hair model further includes an opaque hair piece model stacked with the plurality of semi-transparent hair piece models, the opaque hair piece model being located near the scalp.
In a possible embodiment, the semi-transparent hair piece model includes a front face close to the virtual camera and a back face far from the virtual camera, and the first rendering module is specifically configured to:
perform opaque rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result of the semi-transparent hair piece model;
and the second rendering module is specifically configured to:
perform semi-transparent rendering on the front face of the semi-transparent hair piece model using the color map to obtain the second rendering result of the semi-transparent hair piece model.
In a possible implementation, the first rendering module is specifically configured to:
perform color rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result.
In a possible implementation, the second rendering module is specifically configured to:
acquire position information and light direction information of the semi-transparent hair piece model;
and perform color rendering on the front face of the semi-transparent hair piece model using the color map, and perform shadow rendering on the front face based on the light direction information and the position information, to obtain the second rendering result.
In a possible implementation, the image fusion module is specifically configured to:
fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera, to obtain the rendered image corresponding to the hair model.
In a possible implementation, the image fusion module is specifically configured to:
determine a fusion order of the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera;
and fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the fusion order, to obtain the rendered image corresponding to the hair model.
An embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the hair rendering method of any of the above possible embodiments.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the hair rendering method of any of the above possible embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without creative effort.
FIG. 1 is a schematic diagram of a first rendering effect of multiple semi-transparent objects provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a second rendering effect of multiple semi-transparent objects provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a third rendering effect of multiple semi-transparent objects provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of a hair rendering method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a hair model provided by an embodiment of the present disclosure;
FIG. 6 is a schematic side view of multiple hair piece models at the same position provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the result of opaque rendering of a semi-transparent hair piece model provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the result of semi-transparent rendering of a semi-transparent hair piece model provided by an embodiment of the present disclosure;
FIG. 9 is a flowchart of another hair rendering method provided by an embodiment of the present disclosure;
FIG. 10 is a flowchart of a method for semi-transparently rendering the front face of a semi-transparent hair piece model provided by an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a hair rendering apparatus provided by an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Since the hair model of a hyper-realistic character (hereinafter referred to as a digital human) includes multiple semi-transparent models, the colors of these semi-transparent models must be mixed while rendering them, and depth values cannot be written. Without written depth values, however, the positional relationship among the semi-transparent models cannot be determined, so rendering-order errors easily occur and produce an incorrect rendering result.
The following describes in detail a case in the related art where the rendering effect is wrong because the rendering order of multiple semi-transparent objects is wrong.
Referring to fig. 1, the schematic diagram includes a semi-transparent model 1, a semi-transparent model 2, and a semi-transparent model 3. Semi-transparent models 2 and 3 are placed inside semi-transparent model 1; that is, under normal conditions, semi-transparent model 1 should appear in front of semi-transparent models 2 and 3 regardless of the viewing angle.
In the related art, however, the positional relationship among semi-transparent models 1, 2, and 3 is not determined while rendering them. Therefore, when the shooting angle of the virtual camera is adjusted, semi-transparent model 2 may appear in front of semi-transparent model 1 (the picture effect shown in fig. 2), or semi-transparent model 3 may appear in front of semi-transparent model 1 (the picture effect shown in fig. 3). The spatial positions of semi-transparent models 2 and 3 have not changed; the rendering order is wrong, so the final picture effect is wrong.
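The ordering problem can be seen directly in the source-over blending arithmetic: the blend is not commutative, so compositing the same two semi-transparent layers in the wrong order yields a different pixel color. The following is a minimal illustration with made-up colors, not taken from the patent itself:

```python
def over(src_rgb, src_a, dst_rgb):
    # Standard source-over blend: out = a * src + (1 - a) * dst, per channel.
    return tuple(src_a * s + (1 - src_a) * d for s, d in zip(src_rgb, dst_rgb))

background = (1.0, 1.0, 1.0)     # white backdrop
red = ((1.0, 0.0, 0.0), 0.5)     # 50% opaque red layer (nearer the camera)
blue = ((0.0, 0.0, 1.0), 0.5)    # 50% opaque blue layer (farther away)

correct = over(*red, over(*blue, background))  # back to front: blue, then red
wrong = over(*blue, over(*red, background))    # front to back: red, then blue

print(correct)  # (0.75, 0.25, 0.5): reads as red in front
print(wrong)    # (0.5, 0.25, 0.75): reads as blue in front
```

Because the two orders give visibly different colors, drawing the layers in an order that contradicts their spatial positions produces exactly the wrong picture effects of figs. 2 and 3.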
In view of the foregoing problems, an embodiment of the present disclosure provides a hair rendering method, including: acquiring a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel; for each semi-transparent hair piece model, performing opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result that comprises depth information of the semi-transparent hair piece model relative to a virtual camera; performing semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result; obtaining a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result; and fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain a rendered image corresponding to the hair model.
In the embodiments of the present disclosure, when the hair model comprises a plurality of stacked semi-transparent hair piece models, opaque rendering is first performed on each semi-transparent hair piece model, semi-transparent rendering is then performed on each one to obtain its corresponding rendered image, and the rendered images of the plurality of semi-transparent hair piece models are finally fused into the rendered image of the hair model. Because the opaque rendering yields each semi-transparent hair piece model's depth information relative to the virtual camera, rendering-order errors among different semi-transparent hair piece models during semi-transparent rendering are reduced, the accuracy of the rendering order of multiple semi-transparent objects is improved, and the rendering effect of multiple semi-transparent objects is improved.
The execution subject of the hair rendering method provided by the embodiments of the present disclosure is generally an electronic device with certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a handheld device, a computing device, or a vehicle-mounted device. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In some possible implementations, the hair rendering method may be implemented by a processor invoking computer readable instructions stored in a memory.
Furthermore, the present disclosure provides a rendering method and is therefore not limited to the rendering software used in any specific implementation. The disclosed method can also be applied in different scenarios, such as offline rendering of digital humans and hair, or real-time rendering of digital humans and hair on a mobile device.
Referring to fig. 4, a flowchart of a hair rendering method provided by an embodiment of the present disclosure is shown; the method includes steps S101 to S105:
S101, acquiring a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel.
Optionally, the hair model may run on a computer's CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and contains gridded model information and texture map information. Accordingly, by way of example, the semi-transparent hair piece models and the opaque hair piece model include, but are not limited to, gridded model data, texture map data, or a combination thereof, where the grid includes, but is not limited to, a triangular mesh, a quadrilateral mesh, another polygonal mesh, or a combination thereof. In the embodiments of the present disclosure, the mesh is a triangular mesh.
Illustratively, the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each generated by modeling a hair piece formed by a plurality of hairs. Specifically, the hairs can be divided into groups, each group of hairs forms a hair piece, and each hair piece is modeled to obtain its corresponding semi-transparent hair piece model. A model therefore does not have to be built for every single hair, which reduces the number of models to be rendered. Accordingly, each semi-transparent hair piece model includes a plurality of vertices, and vertices at different positions are connected to form faces that make up the surface of the hair model.
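For concreteness, the structure just described can be mirrored in a small data container. This is a hypothetical sketch of one possible layout, not the patent's data format; all names and fields are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class HairPieceModel:
    """One hair card: a triangular mesh whose surface is textured by
    the shared RGBA color map (all field names are assumptions)."""
    vertices: np.ndarray        # (V, 3) vertex positions forming the card surface
    triangles: np.ndarray       # (T, 3) vertex indices (triangular mesh)
    uvs: np.ndarray             # (V, 2) texture coordinates into the color map
    translucent: bool = True    # False for the opaque piece near the scalp

@dataclass
class HairModel:
    """Stacked hair piece models sharing one RGBA color map."""
    pieces: List[HairPieceModel] = field(default_factory=list)
    color_map: Optional[np.ndarray] = None   # (H, W, 4): RGB plus transparency channel
```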
Illustratively, referring to fig. 5, a schematic diagram of a hair model provided by an embodiment of the present disclosure, the hair model includes a plurality of hair piece models located at positions A, B, and C, respectively, and the hair piece models at the same position are stacked. In this way the hairstyle of the whole head of hair can be represented, and the rendered hair looks more realistic.
The color map corresponding to the hair model is generated from the semi-transparent hair piece models. The color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel. The transparency value corresponding to the semi-transparent hair piece model may range over [0, 255] or [0, 1]; it determines how transparent the semi-transparent hair piece model appears, and the larger the transparency value, the more opaque the semi-transparent hair piece model appears.
Specifically, the color channels in the color map are an R color channel, a G color channel, and a B color channel. The value of the R color channel is the pixel's red channel value, the value of the G color channel is the pixel's green channel value, and the value of the B color channel is the pixel's blue channel value. The color value of a channel may range over [0, 255] or [0, 1]; it determines the color rendering effect of the semi-transparent hair piece model, and the larger the color value, the deeper the rendered color.
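A minimal sketch of sampling such a map is shown below; the nearest-neighbour lookup and the array layout are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# Hypothetical RGBA color map: height x width x 4, all values in [0, 1].
color_map = np.zeros((512, 512, 4), dtype=np.float32)

def sample_color_map(color_map, u, v):
    """Nearest-neighbour sample at UV coordinates (u, v) in [0, 1]."""
    h, w, _ = color_map.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    r, g, b, a = color_map[y, x]
    # The R, G, B channels drive color rendering; the fourth (transparency)
    # channel carries the per-pixel transparency: larger means more opaque.
    return (r, g, b), a
```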
In a possible embodiment, when the hair model represents the hair roots, the hair model further includes an opaque hair piece model stacked with the plurality of semi-transparent hair piece models, the opaque hair piece model being located near the scalp. For example, referring again to fig. 5, the innermost hair piece models at positions A and B are opaque hair piece models. This prevents the scalp from showing through due to the transparency information in the color map and improves the realism of hair rendering.
Illustratively, referring to fig. 6, a schematic side view of multiple hair piece models at the same position provided by an embodiment of the present disclosure, the same position holds hair piece model 1, hair piece model 2, hair piece model 3, and hair piece model 4. Hair piece model 4 is the opaque hair piece model close to the scalp, while hair piece models 1, 2, and 3 are semi-transparent hair piece models.
It should be noted that the hair model and its corresponding color map may be made in advance and stored in a preset storage space; when a rendering task for the hair model needs to be executed, the hair model and the corresponding color map are read from that storage space.
S102, for each semi-transparent hair piece model, performing opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model, where the first rendering result comprises depth information of the semi-transparent hair piece model relative to a virtual camera.
Specifically, the semi-transparent hair piece model can be rendered by a shader. A shader performs model rendering in the 3D image rendering process to obtain a rendered image, and comprises a vertex shader, which computes the geometric relationships of the model's vertices, and a pixel shader, which computes the model's colors.
For example, referring to fig. 7, a schematic diagram of the result of opaque rendering of a semi-transparent hair piece model provided by an embodiment of the present disclosure, the semi-transparent hair piece model can be rendered according to its coordinate value along the z-axis of the view coordinate system, yielding the depth information of the semi-transparent hair piece model relative to the virtual camera. The depth information represents the position of the semi-transparent hair piece model: the smaller the depth value, the closer the semi-transparent hair piece model is to the virtual camera.
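The role of this opaque pre-pass can be sketched as a conventional depth-buffer write. The sketch below assumes a standard rasterizer-style depth test and is illustrative, not the patent's exact implementation:

```python
import numpy as np

H = W = 512
depth_buffer = np.full((H, W), np.inf, dtype=np.float32)

def opaque_fragment(x, y, view_z):
    """Opaque pre-pass: keep the nearest fragment per pixel and record its
    view-space depth. A smaller view_z means closer to the virtual camera."""
    if view_z < depth_buffer[y, x]:
        depth_buffer[y, x] = view_z   # depth is written, unlike in pure color mixing
        return True                   # this fragment's color would be written
    return False                      # occluded by a nearer fragment already drawn
```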
A virtual camera presents a three-dimensional virtual world through one or more controllable lenses. In video games, the lens system aims to present the action from the best angle; broadly speaking, such lenses are used in three-dimensional virtual worlds that require a third-person perspective. Once a suitable lens is found, the position and rotation angle of the virtual camera's lens can be obtained, and these data can be passed to a graphics engine renderer to generate a view.
S103, performing semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model.
Semi-transparent rendering makes the model transparent according to the strength of the transparency information expressed in the color map; the final degree of transparency is determined by how the map is painted. In this embodiment, positions such as hair tips and stray hairs are rendered semi-transparently, so the rendered hair model is closer to the real appearance of human hair.
Optionally, during the semi-transparent rendering of the semi-transparent hair piece model with the color map, there is a correspondence between the vertices of the semi-transparent hair piece model and the pixels of the color map. According to this correspondence, besides determining information such as the color and texture of each vertex, the transparency value of each vertex of the semi-transparent hair piece model is determined from the value of the corresponding pixel of the color map in the transparency channel. The transparency value indicates how transparent the position corresponding to that pixel is; therefore, through the transparency values of the pixels in the color map, the hair corresponding to an inner semi-transparent hair piece model can show through the outer semi-transparent hair piece models, achieving the rendering effect shown in fig. 8.
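A schematic fragment-level sketch of this pass follows. It assumes the common arrangement in which the semi-transparent pass depth-tests against the opaque pre-pass but does not write depth, so inner layers can still blend through; this arrangement is an assumption on top of the text above, which only states that depth cannot be written while colors are mixed:

```python
import numpy as np

H = W = 512
depth_buffer = np.full((H, W), np.inf, dtype=np.float32)  # filled by the opaque pass
frame = np.zeros((H, W, 3), dtype=np.float32)             # colors accumulated so far

def translucent_fragment(x, y, view_z, rgb, alpha):
    """Semi-transparent pass: blend the sampled color over what is already in
    the frame, weighted by the transparency-channel value, without writing depth."""
    if view_z <= depth_buffer[y, x]:   # not hidden behind opaque geometry
        frame[y, x] = alpha * np.asarray(rgb) + (1.0 - alpha) * frame[y, x]
```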
S104, obtaining a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result.
For example, after the first rendering result and the second rendering result are obtained, the rendered image corresponding to the semi-transparent hair piece model can be derived from them. The rendered image may be captured by the virtual camera from any angle and includes the hairs drawn in the color map.
It should be noted that, because the first rendering uses an opaque rendering mode and the second rendering uses a single-layer semi-transparent rendering mode, ordering errors among multiple semi-transparent models are avoided in the rendered image obtained from the first and second rendering results, which improves the accuracy of the rendering result.
S105, fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain the rendered image corresponding to the hair model.
After the rendered image corresponding to each semi-transparent hair piece model in the hair model is obtained, the rendered images of the plurality of semi-transparent hair piece models need to be fused into the rendered image of the hair model.
Optionally, the rendered images respectively corresponding to the plurality of semi-transparent hair piece models may be fused based on the depth information of each semi-transparent hair piece model relative to the virtual camera, to obtain the rendered image corresponding to the hair model.
In this embodiment, because the depth information of a semi-transparent hair piece model relative to the virtual camera determines that model's position relative to the camera, fusing the rendered images by depth information reduces ordering errors among the semi-transparent hair piece models, so the image generated after rendering is closer to the real effect.
Specifically, a fusion order of the rendered images respectively corresponding to the semi-transparent hair piece models is determined based on the depth information of each semi-transparent hair piece model relative to the virtual camera, and the rendered images are fused in that order to obtain the rendered image corresponding to the hair model.
When fusing the rendered images, a semi-transparent hair piece model with a smaller depth value is closer to the virtual camera, i.e., toward the front of the stack, and a semi-transparent hair piece model with a larger depth value is farther from the virtual camera, i.e., toward the back of the stack.
It should be noted that the fusion order of the rendered images may sort the depth values from small to large or from large to small; this is not specifically limited. If the fusion order sorts depth values from small to large, fusion can start from the semi-transparent hair piece model with the smallest depth value, i.e., from the outermost layer of the hair model; if it sorts them from large to small, fusion can start from the semi-transparent hair piece model with the largest depth value, i.e., from the innermost layer of the hair model.
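A back-to-front variant of this fusion step might look as follows; the data layout (one RGBA image plus a scalar depth per hair piece) is a simplifying assumption for illustration:

```python
import numpy as np

def fuse_renders(renders):
    """renders: list of (depth, rgb, alpha) per hair piece, where rgb is
    (H, W, 3), alpha is (H, W, 1), and a larger depth means farther away."""
    # Sort from largest depth to smallest: innermost layer first (back to front).
    renders = sorted(renders, key=lambda r: r[0], reverse=True)
    fused = np.zeros_like(renders[0][1])
    for _, rgb, alpha in renders:
        fused = alpha * rgb + (1.0 - alpha) * fused  # blend nearer image over result
    return fused
```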
In the embodiments of the present disclosure, when the hair model comprises a plurality of stacked semi-transparent hair piece models, opaque rendering is first performed on each semi-transparent hair piece model, semi-transparent rendering is then performed on each one to obtain its corresponding rendered image, and the rendered images of the plurality of semi-transparent hair piece models are finally fused into the rendered image of the hair model. Because the opaque rendering yields each semi-transparent hair piece model's depth information relative to the virtual camera, rendering-order errors among different semi-transparent hair piece models during semi-transparent rendering are reduced, the accuracy of the rendering order of multiple semi-transparent objects is improved, and the rendering effect of multiple semi-transparent objects is improved.
In a possible implementation, the semi-transparent hair piece model includes a front face close to the virtual camera and a back face far from the virtual camera. Specifically, referring to fig. 9, a flowchart of another hair rendering method provided by an embodiment of the present disclosure includes S201 to S205:
S201, acquiring a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel.
This step is similar to step S101 in fig. 4, and is not described again here.
S202, for each semi-transparent hair piece model, performing opaque rendering on the back face of the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model.
In this embodiment, the back face of the semi-transparent hair piece model is rendered in an opaque rendering mode, so the depth information of the semi-transparent hair piece model relative to the virtual camera can be obtained without affecting the final rendering effect, reducing errors in the ordering of the models.
Specifically, the color map is used to perform color rendering on the back face of the semi-transparent hair piece model, producing the picture effect shown in fig. 7. The first rendering result therefore also contains color information, which enhances the rendering of the model's back face. In addition, because the first rendering performs only color rendering, waste of resources from unnecessary rendering is reduced.
S203, performing semi-transparent rendering on the front face of the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model.
In this embodiment, the color map is first used to render the back face of the semi-transparent hair piece model opaquely, and the front face is then rendered in a semi-transparent rendering mode. The position of the semi-transparent hair piece model can thus be determined without affecting the rendering effect, reducing rendering-order errors.
Specifically, referring to fig. 10, a flowchart of a method for performing semi-transparent rendering on the front face of a semi-transparent hair piece model according to an embodiment of the present disclosure includes S2031 to S2032:
S2031, acquiring the position information and light direction information of the semi-transparent hair piece model.
The position information of the semi-transparent hair piece model is its position relative to the simulated light source. The light direction information is the direction in which the simulated light source shines on the semi-transparent hair piece model.
S2032, performing color rendering on the front face of the semi-transparent hair piece model using the color map, and performing shadow rendering on the front face of the semi-transparent hair piece model based on the light direction information and the position information, to obtain a second rendering result.
Shadow rendering simulates the light and dark areas produced when a light source shines on the model and outputs them as grayscale or color tones, yielding tones that vary nearly continuously with luminosity. This creates a perceptible contrast between different parts of the model, gives the model a certain three-dimensionality, and intuitively expresses how the model varies from place to place.
In this embodiment, color rendering of the front face of the semi-transparent hair piece model provides its color information, and shadow rendering of the front face provides its shadow information; from the color and shadow information of the semi-transparent hair piece model, the rendered hair can look more real and natural.
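A minimal per-fragment sketch of such color-plus-shadow shading is shown below. The patent does not specify the shading model; a plain Lambert diffuse term is used here purely as a stand-in, and all names are assumptions:

```python
import numpy as np

def shade_front_fragment(albedo, normal, frag_pos, light_pos):
    """Color rendering plus a simple shadow/shading term for the front face."""
    n = normal / np.linalg.norm(normal)
    light_dir = light_pos - frag_pos             # from the model's position info
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Lambert term: darker where the surface faces away from the light.
    diffuse = max(float(np.dot(n, light_dir)), 0.0)
    return np.asarray(albedo, dtype=np.float32) * diffuse

# Example: a fragment lit obliquely from above.
print(shade_front_fragment(
    albedo=(0.35, 0.25, 0.15),                   # sampled from the color map
    normal=np.array([0.0, 0.0, 1.0]),
    frag_pos=np.array([0.0, 0.0, 0.0]),
    light_pos=np.array([0.0, 1.0, 1.0]),
))
```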
S204, obtaining a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result.
This step is similar to step S104 in fig. 4, and is not described again here.
S205, fusing the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain the rendered image corresponding to the hair model.
This step is similar to step S105 in fig. 4, and is not described again here.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or limit the implementation in any way; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a hair rendering apparatus corresponding to the hair rendering method. Because the principle by which the apparatus solves the problem is similar to that of the hair rendering method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 11, which is a schematic structural diagram of a hair rendering apparatus according to an embodiment of the present disclosure, the apparatus 1100 includes:
a model acquisition module 1101, configured to acquire a hair model and a color map corresponding to the hair model, where the hair model comprises a plurality of semi-transparent hair piece models arranged in a stack, each semi-transparent hair piece model is generated by modeling a hair piece formed by a plurality of hairs, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel;
a first rendering module 1102, configured to perform, for each semi-transparent hair piece model, opaque rendering on the semi-transparent hair piece model using the color map to obtain a first rendering result of the semi-transparent hair piece model, where the first rendering result includes depth information of the semi-transparent hair piece model relative to a virtual camera;
a second rendering module 1103, configured to perform semi-transparent rendering on the semi-transparent hair piece model using the color map to obtain a second rendering result of the semi-transparent hair piece model;
an image generation module 1104, configured to obtain a rendered image corresponding to the semi-transparent hair piece model based on the first rendering result and the second rendering result;
and an image fusion module 1105, configured to fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models to obtain a rendered image corresponding to the hair model.
In a possible embodiment, when the hair model represents the hair roots, the hair model further includes an opaque hair piece model stacked with the plurality of semi-transparent hair piece models, the opaque hair piece model being located near the scalp.
In a possible implementation, the semi-transparent hair piece model includes a front face close to the virtual camera and a back face far from the virtual camera, and the first rendering module 1102 is specifically configured to:
perform opaque rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result of the semi-transparent hair piece model;
and the second rendering module 1103 is specifically configured to:
perform semi-transparent rendering on the front face of the semi-transparent hair piece model using the color map to obtain the second rendering result of the semi-transparent hair piece model.
In a possible implementation, the first rendering module 1102 is specifically configured to:
perform color rendering on the back face of the semi-transparent hair piece model using the color map to obtain the first rendering result.
In a possible implementation, the second rendering module 1103 is specifically configured to:
acquire position information and light direction information of the semi-transparent hair piece model;
and perform color rendering on the front face of the semi-transparent hair piece model using the color map, and perform shadow rendering on the front face based on the light direction information and the position information, to obtain the second rendering result.
In a possible implementation, the image fusion module 1105 is specifically configured to:
fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera, to obtain the rendered image corresponding to the hair model.
In a possible implementation, the image fusion module 1105 is specifically configured to:
determine a fusion order of the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the depth information of each semi-transparent hair piece model relative to the virtual camera;
and fuse the rendered images respectively corresponding to the plurality of semi-transparent hair piece models based on the fusion order, to obtain the rendered image corresponding to the hair model.
Based on the same technical concept, an embodiment of the present application further provides an electronic device. Referring to fig. 12, a schematic structural diagram of an electronic device 1200 provided by an embodiment of the present application, the device includes a processor 1201, a memory 1202, and a bus 1203. The memory 1202 stores execution instructions and includes an internal memory 12021 and an external memory 12022. The internal memory 12021 temporarily stores operation data for the processor 1201 and data exchanged with external storage 12022 such as a hard disk; the processor 1201 exchanges data with the external memory 12022 through the internal memory 12021.
In this embodiment, the memory 1202 specifically stores the application program code for executing the scheme of the present application, and execution is controlled by the processor 1201. That is, when the electronic device 1200 operates, the processor 1201 and the memory 1202 communicate via the bus 1203, so that the processor 1201 executes the application program code stored in the memory 1202 and thereby performs the method disclosed in any of the foregoing embodiments.
The memory 1202 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like.
The processor 1201 may be an integrated circuit chip with signal processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; or a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 1200. In other embodiments of the present application, the electronic device 1200 may include more or fewer components than illustrated, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the hair rendering method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the hair rendering method in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A hair rendering method, comprising:
acquiring a hair model and a color map corresponding to the hair model; wherein the hair model comprises a plurality of semi-transparent hair patch models arranged in a stack, each semi-transparent hair patch model is generated by modeling a hair patch formed by a plurality of hair strands, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel;
for each semi-transparent hair patch model, performing opaque rendering on the semi-transparent hair patch model by using the color map to obtain a first rendering result of the semi-transparent hair patch model, wherein the first rendering result comprises depth information of the semi-transparent hair patch model relative to a virtual camera;
performing semi-transparent rendering on the semi-transparent hair patch model by using the color map to obtain a second rendering result of the semi-transparent hair patch model;
obtaining a rendered image corresponding to the semi-transparent hair patch model based on the first rendering result and the second rendering result; and
fusing the rendered images respectively corresponding to the plurality of semi-transparent hair patch models to obtain a rendered image corresponding to the hair model.
2. The method of claim 1, wherein, in the case where the hair model is a hair model characterizing the hair root, the hair model further comprises an opaque hair patch model; the opaque hair patch model is stacked with the plurality of semi-transparent hair patch models and is positioned near the scalp.
3. The method of claim 1 or 2, wherein the semi-transparent hair patch model comprises a front surface close to the virtual camera and a back surface far from the virtual camera, and the performing opaque rendering on the semi-transparent hair patch model by using the color map to obtain a first rendering result of the semi-transparent hair patch model comprises:
performing opaque rendering on the back surface of the semi-transparent hair patch model by using the color map to obtain the first rendering result of the semi-transparent hair patch model;
and the performing semi-transparent rendering on the semi-transparent hair patch model by using the color map to obtain a second rendering result of the semi-transparent hair patch model comprises:
performing semi-transparent rendering on the front surface of the semi-transparent hair patch model by using the color map to obtain the second rendering result of the semi-transparent hair patch model.
4. The method of claim 3, wherein the performing opaque rendering on the back surface of the semi-transparent hair patch model by using the color map to obtain the first rendering result comprises:
performing color rendering on the back surface of the semi-transparent hair patch model by using the color map to obtain the first rendering result.
5. The method of claim 3 or 4, wherein the performing semi-transparent rendering on the front surface of the semi-transparent hair patch model by using the color map to obtain the second rendering result comprises:
acquiring position information and light direction information of the semi-transparent hair patch model;
performing color rendering on the front surface of the semi-transparent hair patch model by using the color map, and performing shadow rendering on the front surface of the semi-transparent hair patch model based on the light direction information and the position information, to obtain the second rendering result.
6. The method of any one of claims 1 to 5, wherein the fusing the rendered images respectively corresponding to the plurality of semi-transparent hair patch models to obtain a rendered image corresponding to the hair model comprises:
fusing the rendered images respectively corresponding to the plurality of semi-transparent hair patch models based on the depth information of each semi-transparent hair patch model relative to the virtual camera, to obtain the rendered image corresponding to the hair model.
7. The method of claim 6, wherein the fusing the rendered images respectively corresponding to the plurality of semi-transparent hair patch models based on the depth information of each semi-transparent hair patch model relative to the virtual camera to obtain the rendered image corresponding to the hair model comprises:
determining a fusion order of the rendered images respectively corresponding to the plurality of semi-transparent hair patch models based on the depth information of each semi-transparent hair patch model relative to the virtual camera;
fusing the rendered images respectively corresponding to the plurality of semi-transparent hair patch models based on the fusion order, to obtain the rendered image corresponding to the hair model.
8. A hair rendering device, comprising:
a model acquisition module, configured to acquire a hair model and a color map corresponding to the hair model; wherein the hair model comprises a plurality of semi-transparent hair patch models arranged in a stack, each semi-transparent hair patch model is generated by modeling a hair patch formed by a plurality of hair strands, the color map comprises a plurality of color channels and a transparency channel, and the value of each pixel of the color map in the transparency channel represents the transparency information of that pixel;
a first rendering module, configured to, for each semi-transparent hair patch model, perform opaque rendering on the semi-transparent hair patch model by using the color map to obtain a first rendering result of the semi-transparent hair patch model, where the first rendering result comprises depth information of the semi-transparent hair patch model relative to a virtual camera;
a second rendering module, configured to perform semi-transparent rendering on the semi-transparent hair patch model by using the color map to obtain a second rendering result of the semi-transparent hair patch model;
an image generation module, configured to obtain a rendered image corresponding to the semi-transparent hair patch model based on the first rendering result and the second rendering result; and
an image fusion module, configured to fuse the rendered images respectively corresponding to the plurality of semi-transparent hair patch models to obtain a rendered image corresponding to the hair model.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, wherein when the electronic device operates, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the hair rendering method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the hair rendering method of any one of claims 1 to 7.
CN202210663477.2A 2022-06-13 2022-06-13 Hair rendering method and device, electronic equipment and storage medium Pending CN115063330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210663477.2A CN115063330A (en) 2022-06-13 2022-06-13 Hair rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210663477.2A CN115063330A (en) 2022-06-13 2022-06-13 Hair rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115063330A (en) 2022-09-16

Family

ID=83200695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210663477.2A Pending CN115063330A (en) 2022-06-13 2022-06-13 Hair rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115063330A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630516A (en) * 2023-06-09 2023-08-22 广州三七极耀网络科技有限公司 3D characteristic-based 2D rendering ordering method, device, equipment and medium
CN116630516B (en) * 2023-06-09 2024-01-30 广州三七极耀网络科技有限公司 3D characteristic-based 2D rendering ordering method, device, equipment and medium

Similar Documents

Publication Publication Date Title
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN112316420A (en) Model rendering method, device, equipment and storage medium
CN111369655A (en) Rendering method and device and terminal equipment
CN110689626A (en) Game model rendering method and device
WO2023098358A1 (en) Model rendering method and apparatus, computer device, and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
WO2023098344A1 (en) Graphic processing method and apparatus, computer device, and storage medium
CN109903374B (en) Eyeball simulation method and device for virtual object and storage medium
CN114375464A (en) Ray tracing dynamic cells in virtual space using bounding volume representations
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
CN114549719A (en) Rendering method, rendering device, computer equipment and storage medium
CN114529657A (en) Rendering image generation method and device, computer equipment and storage medium
CN115845369A (en) Cartoon style rendering method and device, electronic equipment and storage medium
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN114581592A (en) Highlight rendering method and device, computer equipment and storage medium
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN114078180A (en) Three-dimensional hair model generation method and device, electronic equipment and storage medium
CN115082615A (en) Rendering method, rendering device, computer equipment and storage medium
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
CN115131493A (en) Dynamic light special effect display method and device, computer equipment and storage medium
CN117911600A (en) Method and device for generating stylized hand-drawing effect, storage medium and electronic device
CN114972647A (en) Model rendering method and device, computer equipment and storage medium
CN114519760A (en) Method and device for generating map, computer equipment and storage medium
CN117671125A (en) Illumination rendering method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination