CN113888398A - Hair rendering method and device and electronic equipment - Google Patents


Info

Publication number
CN113888398A
CN113888398A (application CN202111228020.0A; granted as CN113888398B)
Authority
CN
China
Prior art keywords
information
rendering
hair
highlight
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111228020.0A
Other languages
Chinese (zh)
Other versions
CN113888398B (en)
Inventor
张岩林 (Zhang Yanlin)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111228020.0A priority Critical patent/CN113888398B/en
Publication of CN113888398A publication Critical patent/CN113888398A/en
Application granted granted Critical
Publication of CN113888398B publication Critical patent/CN113888398B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a hair rendering method, a hair rendering device, and an electronic device, relating to the technical field of artificial intelligence, and in particular to the fields of computer vision and augmented reality. The specific implementation scheme is as follows: performing first rendering processing on the hair of an avatar to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information; performing second rendering processing on the hair to obtain anisotropic highlight information of the hair; and obtaining target rendering information of the hair, wherein the target rendering information comprises first rendering information, and the first rendering information comprises rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.

Description

Hair rendering method and device and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to the fields of computer vision and augmented reality, and specifically to a hair rendering method and apparatus, and an electronic device.
Background
As the public's aesthetic standards rise, the appearance of avatars such as animated characters matters more and more, and hair, as an important component of an avatar, is a key factor in improving that appearance.
At present, the hair of an avatar is usually rendered based on a Physically Based Rendering (PBR) model; in the actual calculation, the influence of each light source on a pixel point can be calculated separately, so the mathematical model is closer to the real physical effect.
Disclosure of Invention
The disclosure provides a hair rendering method and device and electronic equipment.
According to a first aspect of the present disclosure, there is provided a hair rendering method comprising:
performing first rendering processing on hair of an avatar to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information;
performing second rendering processing on the hair to obtain anisotropic highlight information of the hair;
obtaining target rendering information of the hair, wherein the target rendering information comprises first rendering information, and the first rendering information comprises rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.
According to a second aspect of the present disclosure, there is provided a hair rendering apparatus comprising:
a first rendering processing module, configured to perform first rendering processing on the hair of an avatar to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information;
a second rendering processing module, configured to perform second rendering processing on the hair to obtain anisotropic highlight information of the hair; and
a first obtaining module, configured to obtain target rendering information of the hair, where the target rendering information includes first rendering information, and the first rendering information includes rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform any one of the methods of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any of the methods of the first aspect.
According to the technology of the present disclosure, the problem of poor hair rendering for avatars is solved, and the hair rendering effect of the avatar is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow diagram of a hair rendering method according to a first embodiment of the present disclosure;
FIG. 2 is a display schematic of anisotropic highlight rendering;
FIG. 3 is a schematic illustration of the light transmission of the hair of the avatar;
fig. 4 is a schematic structural diagram of a hair rendering apparatus according to a second embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of an example electronic device used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First embodiment
As shown in fig. 1, the present disclosure provides a hair rendering method, including the steps of:
step S101: and performing first rendering processing on the hair of the virtual image to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information.
In the embodiment, the hair rendering method relates to the technical field of artificial intelligence, in particular to the technical field of computer vision and augmented reality, and can be widely applied to a hair rendering scene of an avatar. The method may be performed by a hair rendering device of an embodiment of the present disclosure. The hair rendering device may be configured in any electronic device to execute the hair rendering method according to the embodiment of the present disclosure, and the electronic device may be a server or a terminal, which is not limited specifically herein.
The avatar may be a virtual object with hair, and the avatar may be an animated avatar, a non-animated avatar, a cartoon avatar or a non-cartoon avatar, and is not particularly limited herein.
The rendering process may refer to a process of rendering the hair of the avatar, specifically, applying light to the hair to color the hair, and in the process of coloring, generating a light effect, a shadow effect, a surface texture effect, and the like.
The first rendering process may refer to a process of applying physical basic light to the hair of the avatar to generate coloring, and the physical basic light may include at least isotropic highlight and may further include at least one of diffuse reflection light and ambient light, resulting in rendering basic information of the hair.
The hair surface effects (including color) produced by the first rendering process are a sum of illumination effects by the physical base light, each illumination effect depending on a combined effect of properties of the hair surface material (such as brightness and material color) and properties of the light source (such as color and position of light).
The hair of the avatar may be separated out as the specific object to be rendered, and the hair may be rendered as a whole in one or a few units, which avoids the high rendering difficulty caused by too many processing units.
The first rendering process may be performed on the hair of the avatar by using a rendering model; the rendering model may be a Physically Based Rendering (PBR) model, a higher-level rendering model such as subsurface scattering, or another rendering model, which is not limited herein. The first rendering process may also be performed on the hair of the avatar using a pre-baking scheme.
In an optional implementation, the PBR model may be used to perform the first rendering process on the avatar's hair to obtain diffuse reflection basic information and isotropic specular basic information. Using the PBR model for the first rendering process, the influence of each light source on hair pixel points can be calculated separately, the mathematical model is closer to the real physical effect, and through simplification and optimization the PBR model can be used on mobile terminals and can handle both dynamic and static scenes.
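The patent does not spell out the PBR specular formula; as a rough, hypothetical stand-in for the isotropic highlight term, a Blinn-Phong style sketch in Python (all names here are illustrative, not from the disclosure):

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def isotropic_specular(normal, light_dir, view_dir, shininess):
    """Blinn-Phong style isotropic highlight: it depends only on the surface
    normal N, so rotating the surface about N does not change the highlight."""
    half = _normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(_dot(_normalize(normal), half), 0.0) ** shininess
```

The highlight peaks when the half vector aligns with the normal and falls off symmetrically in every direction around it, which is exactly what distinguishes it from the tangent-based anisotropic term introduced in the next step.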
Step S102: and performing second rendering processing on the hair to obtain anisotropic highlight information of the hair.
The hair surface effects (including color) produced by the second rendering process are a result of the illumination by anisotropic highlights, which is a special highlight that is calculated using the tangent of the hair.
A special highlight rendering model may be used to perform the second rendering process on the avatar's hair to obtain its anisotropic highlight information. The anisotropic highlight information may comprise one or more layers, and each layer may comprise: the effect color of the anisotropic highlight, represented by hairColor; a parameter controlling the size of the anisotropic highlight area, represented by hairExponent; the intensity of the anisotropic highlight effect, represented by hairStrength; and the offset of the anisotropic highlight along the hair, represented by shiftDistance.
The value of hairColor is normalized; with the default value (1.0, 1.0, 1.0), the effect color is white light. The value of hairExponent is greater than 0, and the larger it is, the smaller the highlight area; its default value is 100.0f, which is configurable. The value of hairStrength is greater than 0, usually in the range 0 to 1 but possibly exceeding 1; its default value is 1.0f, which is configurable. shiftDistance is based on the hair vertices; 0 means no offset, and when multiple layers are used this variable is adjusted per layer according to each layer's hairExponent, with a default value of 0.2.
Since the anisotropic highlight is a highlight operation performed by using a tangent line of the hair, the highlight can be shifted along the hair, and the multilayer anisotropic highlight effect can be generated on the hair when the anisotropic highlight information includes the multilayer anisotropic highlight information.
The special highlight rendering model may be a Kajiya highlight model, or may be another highlight model, which is not specifically limited herein. The Kajiya highlight model can be used for simulating special highlights on the hair, and the realistic rendering level of the hair is improved.
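The patent names the Kajiya highlight model but gives no formula; a minimal Python rendition of the standard Kajiya-Kay specular term, offered only as an illustrative sketch of a tangent-based highlight:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def kajiya_specular(tangent, light_dir, view_dir, hair_exponent):
    """Anisotropic highlight computed from the hair tangent T instead of the
    normal: sin(T, H) = sqrt(1 - (T.H)^2), raised to hairExponent (a larger
    exponent yields a smaller highlight area, as the text describes)."""
    half = _normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    t_dot_h = _dot(_normalize(tangent), half)
    return math.sqrt(max(1.0 - t_dot_h * t_dot_h, 0.0)) ** hair_exponent
```

Because the term depends on the tangent rather than the normal, the highlight forms a band running across the strands and can be slid along the hair, which is the behavior the following paragraphs exploit.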
In a specific implementation, the anisotropic highlight parameters can be configured in the original configuration script of the Kajiya highlight model, with configuration loading and release interfaces designed accordingly; the second rendering process is then performed on the avatar's hair based on the configured Kajiya highlight model to obtain the anisotropic highlight information of the avatar's hair.
Step S103: obtaining target rendering information of the hair, wherein the target rendering information comprises first rendering information, and the first rendering information comprises: and rendering information is obtained by superposing the anisotropic highlight information and the isotropic highlight information.
In this step, the target rendering information may include only the first rendering information, in which case obtaining the target rendering information of the hair means obtaining the first rendering information. The target rendering information may also include at least one of second rendering information and third rendering information in addition to the first rendering information, in which case obtaining the target rendering information includes obtaining the first rendering information as well as at least one of the second and third rendering information. In addition, the target rendering information may further include other rendering information, for example, the rendering information other than the isotropic highlight information in the rendering basic information (e.g., the diffuse reflection basic information), which is not specifically limited herein.
The first rendering information may be rendering information obtained by superimposing the isotropic and anisotropic highlight information; the second rendering information may be rendering information obtained by superimposing edge light; and the third rendering information may be rendering information obtained by superimposing a hair light-transmission effect.
The first rendering information may be obtained by superimposing the anisotropic and isotropic highlight information based on first offset information of the hair vertex tangent in the normal direction; it may also be obtained by superimposing them based on offset information of the hair vertex tangent in other directions, which is not specifically limited herein.
The first offset information includes an offset of the tangent of a hair vertex in the normal direction, which refers to an offset of the hair vertex along the hair's extension direction. The offset may be the shiftDistance in the anisotropic highlight information, that is, the offset of the hair vertex tangent in the normal direction configured in the special highlight rendering model; it may also be an offset with superimposed noise, that is, noise added on top of shiftDistance to create a jagged feel at the anisotropic highlight edge and improve realism, which is not specifically limited herein.
The first offset information may include one offset, a plurality of same offsets, or a plurality of different offsets, and is used for making a layering effect of the hair highlight, and is not specifically limited herein.
The anisotropic highlight information and the isotropic highlight information may be superimposed as follows: for each offset, the operation result obtained by a special highlight rendering model such as the Kajiya highlight model (i.e., the anisotropic highlight information) is superimposed onto the isotropic highlight information. Each operation result contributes to the specular basic information, and multi-layer accumulation (e.g., three layers) is supported, which may be calculated by the formula specAdd += calcKajiyaSpec(surf, i), where specAdd may be the isotropic highlight information, calcKajiyaSpec may be the Kajiya highlight model, surf may be the parameters configured in the Kajiya highlight model, and i represents the layer index.
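A minimal Python sketch of this multi-layer accumulation with a per-layer tangent shift along the normal; the helper names and the dict-based layer parameters are assumptions, and only the accumulation pattern specAdd += calcKajiyaSpec(surf, i) comes from the text:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def _kajiya_spec(tangent, light_dir, view_dir, exponent):
    # Kajiya-Kay term: sin(T, H)^exponent.
    half = _normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    t_dot_h = _dot(_normalize(tangent), half)
    return math.sqrt(max(1.0 - t_dot_h * t_dot_h, 0.0)) ** exponent

def shift_tangent(tangent, normal, shift_distance):
    """Offset the tangent along the normal and renormalize; each layer has
    its own shiftDistance so the highlight bands separate along the strand."""
    return _normalize(tuple(t + shift_distance * n
                            for t, n in zip(tangent, normal)))

def accumulate_layers(spec_add, layers, tangent, normal, light_dir, view_dir):
    """spec_add starts as the isotropic term; each layer's Kajiya result is
    added on top, mirroring specAdd += calcKajiyaSpec(surf, i)."""
    for layer in layers:
        t = shift_tangent(tangent, normal, layer["shiftDistance"])
        spec_add += layer["hairStrength"] * _kajiya_spec(
            t, light_dir, view_dir, layer["hairExponent"])
    return spec_add
```

With two or three layers of differing shiftDistance and hairExponent, the loop produces the layered multi-band highlight the text describes.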
When the first rendering information is obtained by superimposing the anisotropic and isotropic highlight information based on the first offset information of the hair vertex tangent in the normal direction, in a specific implementation the vertices used by the Kajiya highlight model need tangent data and a corresponding normal map; the highlight can then be offset along the hair tangent, and the tangent's offset in the normal direction can be configured to create a layered effect. As shown in fig. 2, highlight 201 is the anisotropic highlight rendered on the avatar's hair.
In addition, in a specific implementation, the calculation can be carried out in tangent space, and after rendering is completed the result is converted to world space. During normal-map processing the hair can be rendered double-sided, and backface culling can be forcibly disabled once double-sided rendering is enabled. Moreover, the camera parameters can be adjusted to avoid clipping between the upper and lower layers of the hair model.
The processes for acquiring the second rendering information and the third rendering information will be described in detail in the following embodiments.
In this embodiment, the first rendering process is performed on the avatar's hair to obtain the rendering basic information, which comprises isotropic highlight information; the second rendering process is performed on the hair to obtain its anisotropic highlight information; and the target rendering information of the hair is obtained, comprising first rendering information, which is rendering information obtained by superimposing the anisotropic and isotropic highlight information. In this way, anisotropic highlights can be simulated on top of the isotropic highlight rendering, improving the avatar's hair rendering effect.
Optionally, the first rendering information includes: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information based on first offset information of the hair vertex tangent in the normal direction.
In this embodiment, the first rendering information is obtained by superimposing the anisotropic highlight information and the isotropic highlight information based on the first offset information of the tangent line of the vertex of the hair in the normal direction, so that the anisotropic highlight extending along with the hair can be simulated on the basis of the rendering of the isotropic highlight, and the hair rendering effect of the avatar is further improved.
Optionally, before the step S103, the method further includes:
acquiring offset noise information and second offset information of the hair vertex tangent in the normal direction; and
superimposing the second offset information and the offset noise information to obtain the first offset information.
In this embodiment, the first offset information may be offset information superimposed with offset noise information, that is, the first offset information includes an offset superimposed with noise.
The second offset information may include shiftDistance in the anisotropic highlight information, that is, an offset of a tangent line of a vertex of the hair set in the special highlight rendering model in a normal direction, where the offset may be a fixed value for each layer of the anisotropic highlight.
The noise offset information may be represented by noise such as white noise, and the noise may be the same or different for each layer of anisotropic highlight, which is not specifically limited herein.
Superimposing the second offset information and the offset noise information means that noise is added on top of shiftDistance; the anisotropic and isotropic highlight information is then superimposed based on the resulting first offset information, which creates a jagged feel at the anisotropic highlight edge of the hair and improves realism, as shown in fig. 2.
In a specific implementation, a dedicated noise map can be configured in advance to vary the offset of the hair vertex tangent in the normal direction, creating the jagged feel at the anisotropic highlight edge and improving realism.
In this embodiment, the first offset information is obtained by superimposing noise onto the shiftDistance in the anisotropic highlight information, and the anisotropic and isotropic highlight information is superimposed based on that first offset information. This creates a jagged feel at the anisotropic highlight edge of the hair, improves realism, and further improves the avatar's hair rendering effect.
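A hypothetical Python sketch of the noise-jittered offset; the hash function stands in for the pre-authored noise map mentioned above, and noise_amplitude is an assumed tuning parameter not named in the patent:

```python
import math

def noise_from_uv(u, v):
    """Cheap hash noise in [0, 1), standing in for a pre-configured noise
    map (the classic fract(sin(dot(uv, k)) * c) shader hash)."""
    x = math.sin(u * 12.9898 + v * 78.233) * 43758.5453
    return x - math.floor(x)

def first_offset(shift_distance, noise_amplitude, u, v):
    """First offset information = configured shiftDistance (the second
    offset information) plus signed noise; the per-pixel jitter produces
    the jagged highlight edge described in the text."""
    return shift_distance + noise_amplitude * (noise_from_uv(u, v) - 0.5)
```

Setting noise_amplitude to 0 recovers the plain shiftDistance, so the jagged edge can be dialed in gradually.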
Optionally, the target rendering information further includes second rendering information of the hair, where the second rendering information is used to characterize the edge light of the hair, and step S103 specifically includes:
carrying out vector point multiplication on the normal direction and the visual angle direction of pixel points on the hair to obtain a point multiplication result;
and under the condition that the point multiplication result represents that the pixel point is positioned at the hair edge, performing third rendering processing on the pixel point to obtain second rendering information.
In this embodiment, edge light can be rendered on the avatar's hair to obtain the second rendering information, which is used to simulate an overall light-transmission effect at the outer edge of the hair and can noticeably increase the overall three-dimensional feel of the hair material.
The edge light may also be referred to as rim light, which refers to the effect of adding an extra light where an object's edge lies in the current viewing direction. Specifically, it may be determined whether a pixel point on a hair strand is at the hair edge, and edge light may be rendered when it is.
Specifically, whether a pixel point is at the hair edge can be judged by the vector dot product of the view direction and the normal direction of the pixel point on the hair. When the view-direction vector V is perpendicular to the normal-direction vector N, the dot product dot(N, V) is close to 0, i.e., the surface corresponding to the normal is parallel to the view direction, and edge light can be rendered there.
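A minimal sketch of this edge test in Python; the threshold value is an assumption, since the text only says dot(N, V) should be close to 0:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def is_hair_edge(normal, view_dir, threshold=0.2):
    """A pixel counts as lying on the hair edge when N.V is near zero,
    i.e. the surface is nearly parallel to the view direction.
    threshold is an assumed cutoff, not a value from the patent."""
    return abs(_dot(_normalize(normal), _normalize(view_dir))) < threshold
```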
When the dot product result indicates that the pixel point is at the hair edge, a third rendering process is performed on the pixel point using a rendering model to obtain the second rendering information. The rendering model may be a RimLight rendering model, or a higher-level rendering model such as subsurface scattering, which is not specifically limited herein.
In an optional embodiment, the PBR model may be extended to simulate the overall light-transmission effect of the hair. Specifically, a RimLight rendering model may be adopted to render the edge light, and the edge light is superimposed onto the self-luminescence to improve the overall light-transmission effect of the hair. Self-luminescence refers to the light emitted by the hair material, whose color and brightness are determined by the material.
In this embodiment, a dot product result is obtained by taking the vector dot product of the normal direction and the view direction of a pixel point on the hair; when the dot product result indicates that the pixel point is at the hair edge, a third rendering process is performed on the pixel point to obtain the second rendering information. In this way, edge light can be rendered at the hair edge, improving the overall light-transmission effect of the hair and further improving the avatar's hair rendering effect.
Optionally, the performing a third rendering process on the pixel point to obtain the second rendering information includes:
acquiring self-luminous information of the pixel points;
performing a power operation on the first range control information to obtain edge light range information of the pixel points;
multiplying the edge light range information by edge light color information to obtain first edge light information of the pixel points;
and superposing the first edge light information and the self-luminous information to obtain the second rendering information.
In this embodiment, the self-luminescence refers to light emitted by the hair material, and the color and brightness of the emitted light are determined by the hair material, so that pre-stored self-luminescence information of the pixel points in the hair of the avatar can be obtained, and the self-luminescence information of the pixel points in the hair of the avatar can be generated based on the hair material. The self-luminous information may include color information of light, among others.
The edge light may be rendered using a RimLight rendering model, which may define the edge light color, represented by rimLightColor; the edge light range, represented by rimLightPower; and the edge light intensity, represented by rimLightIntensity.
The default value of rimLightColor may be (255, 255, 255), i.e., white light; the default value of rimLightPower may be 1; and the default value of rimLightIntensity may be 1. In a specific implementation, the edge light parameters can be configured in the original configuration script of the RimLight rendering model, with configuration loading and release interfaces designed accordingly. Two configuration methods can also be provided: a global configuration that takes effect for all objects in a scene, such as the avatar as a whole, and a local configuration that takes effect only for the configured object, such as the avatar's hair.
A power operation may be performed on the first range control information, that is, the parameter representing the edge light range, to obtain the edge light range information of the pixel point, which may be calculated by the formula rimLightFactor = 0.02 + a * pow(1.0 - dot(N, V), rimLightPower).
In an optional embodiment, a is (1.0 - 0.02), and rimLightFactor is the edge light range information. The larger the parameter representing the edge light range, the smaller the overall light-transmission range of the hair; that is, the larger rimLightPower is, the smaller the calculated rimLightFactor.
The obtained edge light range information is multiplied by the edge light color information to obtain the first edge light information of the pixel point, which may be calculated by the formula emission = rimLightFactor * rimLightColor, where emission is the first edge light information.
Then, the first edge light information may be superimposed with the self-luminous information, so that colors in the first edge light information may be contributed to the self-luminous.
In this embodiment, the self-luminous information of the pixel point is obtained; a power operation is performed on the first range control information to obtain the edge light range information of the pixel point; the edge light range information is multiplied by the edge light color information to obtain the first edge light information of the pixel point; and the first edge light information is superimposed with the self-luminous information to obtain the second rendering information. In this way, edge light can be rendered at the hair edge with self-luminescence superimposed, improving the overall light-transmission effect of the hair and further improving the avatar's hair rendering effect.
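Putting these steps together, a hypothetical Python sketch of the second rendering information for one pixel; the 0.02 floor and the factor a = 1.0 - 0.02 follow the formula given earlier, while the function and parameter names are illustrative:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def rim_light(normal, view_dir, rim_power, rim_color, self_emission):
    """Second rendering information for one pixel:
    rimLightFactor = 0.02 + (1 - 0.02) * pow(1 - dot(N, V), rimLightPower),
    multiplied by rimLightColor and added onto the material's self-emission."""
    n_dot_v = max(_dot(_normalize(normal), _normalize(view_dir)), 0.0)
    rim_factor = 0.02 + (1.0 - 0.02) * ((1.0 - n_dot_v) ** rim_power)
    return tuple(e + rim_factor * c for e, c in zip(self_emission, rim_color))
```

At a grazing angle (N perpendicular to V) the factor reaches 1.0 and the full rim color is added; facing the camera it falls to the 0.02 floor, so only a faint rim remains.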
Moreover, compared with complex rendering implementations such as subsurface scattering, using a RimLight rendering model to render edge light on top of PBR involves less computation, making it suitable for real-time mobile environments, and it can reduce stutter while shrinking the package size.
Optionally, the target rendering information further includes third rendering information of the hair, and the step S103 specifically includes:
acquiring second edge light information of a first target area and third edge light information of a second target area, wherein the first target area and the second target area are two different areas at the edge of hair;
and based on the light transmission intensity of the hair, performing superposition processing on the second edge light information and the third edge light information to obtain third rendering information.
In this embodiment, the third rendering information may be used to further simulate the light-transmission effect at the hair edge, compensating for insufficient brightness at local positions when the edge light simulates light transmission; such a local position may be a thinner part of the avatar's hair.
In an alternative embodiment, since the outermost hairs of the avatar are the thinnest, they naturally transmit the most light and appear brightest. A hair light-transmission effect can therefore be added specifically at the outermost hair position of the avatar, to compensate for insufficient brightness there when the edge light alone simulates light transmission. As shown in fig. 3, the hair outer side 301 refers to the side opposite the face: when the face faces the screen, the outer side of the hair is the side facing the screen, the inner side is the side facing away from the screen, and the outermost hair 302 is the boundary between the outer and inner sides.
The light-transmission effect at the hair edge can be further simulated by rendering two edge lights and superimposing them. Specifically, the second edge light information of the first target area, denoted scatterFresnel1, can be acquired. The first target area can be a local hair position to be rendered, such as the outermost hair position. The second edge light information can be acquired in a manner similar to the first edge light information, except that the power in the power operation can differ: the power used for scatterFresnel1 can be set very large, so that the light-transmission effect is strongest exactly under back lighting.
Meanwhile, the third edge light information of the second target area, denoted scatterFresnel2, may be acquired. The second target area may be a local hair position to be rendered, such as the outer side of the hair. The third edge light information may be acquired in a manner similar to the first edge light information, except that the power in the power operation may differ, for example a power of 1.
Then, the second edge light information and the third edge light information may be superimposed based on the light-transmission intensity of the hair to obtain the third rendering information, which may be calculated by the formula transmission = scatterFresnel1 + lightIntensity × scatterFresnel2, where transmission is the third rendering information and lightIntensity is the light-transmission intensity of the hair. In this superposition, the strength of scatterFresnel1 is not separately controlled, while the strength of scatterFresnel2 is controlled by lightIntensity. This further simulates the light-transmission effect at the hair edge, remedies the insufficient local brightness of the edge light when simulating light transmission, and reproduces the natural effect that hair of different thicknesses contributes differently to the illumination.
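The two-edge-light superposition just described can be sketched as a scalar function. The Fresnel-style falloff `1 - dot(N, V)` and the default powers below are illustrative assumptions; the patent only fixes the superposition formula itself.

```python
def transmission_term(ndotv, light_intensity, power1=32.0, power2=1.0):
    """transmission = scatterFresnel1 + lightIntensity * scatterFresnel2.

    ndotv: dot product of normal and view direction for the pixel.
    power1 (assumed) is set very large, so scatterFresnel1 is concentrated
    at the outermost hairs and strongest exactly under back lighting;
    power2 = 1 gives scatterFresnel2 a broad falloff over the outer side.
    """
    fresnel = 1.0 - max(0.0, min(1.0, ndotv))
    scatter_fresnel1 = fresnel ** power1   # strength not separately controlled
    scatter_fresnel2 = fresnel ** power2   # strength scaled by light_intensity
    return scatter_fresnel1 + light_intensity * scatter_fresnel2
```

Only the second term is scaled by the configurable light-transmission intensity, matching the description that scatterFresnel1's strength is fixed while scatterFresnel2's strength is controlled.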
In a specific implementation, the edge light parameters, such as LightIntensity (default value 1), may be configured in the model's original configuration script, and configuration loading and releasing interfaces may be designed.
Second embodiment
As shown in fig. 4, the present disclosure provides a hair rendering apparatus 400 including:
a first rendering module 401, configured to perform a first rendering process on hair of an avatar to obtain rendering basic information of the hair, where the rendering basic information includes isotropic highlight information;
a second rendering module 402, configured to perform a second rendering process on the hair to obtain anisotropic highlight information of the hair;
a first obtaining module 403, configured to obtain target rendering information of the hair, where the target rendering information includes first rendering information, and the first rendering information includes: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.
Optionally, the first rendering information includes: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information based on first offset information of the tangent of a hair vertex in the normal direction.
Optionally, the method further includes:
the second acquisition module is used for acquiring offset noise information and second offset information of a tangent line of the top point of the hair line in the normal direction;
and the noise superposition module is used for superposing the second offset information and the offset noise information to obtain the first offset information.
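A minimal sketch of what these two modules compute, assuming the standard shifted-tangent construction for anisotropic hair highlights (the patent does not name this technique; the function and variable names are hypothetical):

```python
import math

def shifted_tangent(tangent, normal, second_offset, offset_noise):
    # First offset information = second offset superimposed with offset noise.
    first_offset = second_offset + offset_noise
    # Shift the vertex tangent along the normal direction by the first offset;
    # the anisotropic highlight is then evaluated against this shifted tangent.
    shifted = [t + first_offset * n for t, n in zip(tangent, normal)]
    length = math.sqrt(sum(c * c for c in shifted))
    return [c / length for c in shifted]
```

Sampling offset_noise per strand (for example from a noise texture) jitters the highlight along the hair, breaking up the uniform band that a constant offset alone would produce.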
Optionally, the target rendering information further includes second rendering information of the hair, where the second rendering information is used for characterizing edge light of the hair, and the first obtaining module 403 includes:
the vector dot multiplication unit is used for performing vector dot multiplication on the normal direction and the visual angle direction of the pixel points on the hair to obtain a dot multiplication result;
and the rendering processing unit is used for performing third rendering processing on the pixel point to obtain second rendering information under the condition that the point multiplication result represents that the pixel point is positioned at the hair edge.
Optionally, the rendering processing unit is specifically configured to:
acquiring self-luminous information of the pixel points;
performing a power operation on the first range control information to obtain edge light range information of the pixel point;
multiplying the edge light range information by edge light color information to obtain first edge light information of the pixel point;
and superposing the first edge light information and the self-luminous information to obtain the second rendering information.
Optionally, the target rendering information further includes third rendering information of the hair, and the first obtaining module 403 includes:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring second edge light information of a first target area and third edge light information of a second target area, and the first target area and the second target area are two different areas at the edge of hair;
and the superposition processing unit is used for carrying out superposition processing on the second edge light information and the third edge light information based on the light transmission intensity of the hair to obtain third rendering information.
The hair rendering device 400 provided by the present disclosure can implement each process of the hair rendering method embodiments and achieve the same beneficial effects; to avoid repetition, the details are not repeated here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information involved comply with relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 501 performs the respective methods and processes described above, such as a hair rendering method. For example, in some embodiments, the hair rendering method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the hair rendering method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the hair rendering method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A hair rendering method, comprising:
performing first rendering processing on hair of an avatar to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information;
performing second rendering processing on the hair to obtain anisotropic highlight information of the hair;
obtaining target rendering information of the hair, wherein the target rendering information comprises first rendering information, and the first rendering information comprises: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.
2. The method of claim 1, wherein the first rendering information comprises: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information based on first offset information of the tangent of a hair vertex in the normal direction.
3. The method of claim 2, prior to obtaining the target rendering information for the hair, further comprising:
acquiring offset noise information and second offset information of the tangent of a hair-strand vertex in the normal direction;
and superposing the second offset information and the offset noise information to obtain the first offset information.
4. The method of claim 1, wherein the target rendering information further comprises second rendering information of the hair, the second rendering information being used to characterize edge light of the hair, the obtaining the target rendering information of the hair comprising:
carrying out vector point multiplication on the normal direction and the visual angle direction of pixel points on the hair to obtain a point multiplication result;
and under the condition that the point multiplication result represents that the pixel point is positioned at the hair edge, performing third rendering processing on the pixel point to obtain second rendering information.
5. The method according to claim 4, wherein the performing a third rendering process on the pixel point to obtain the second rendering information includes:
acquiring self-luminous information of the pixel points;
performing a power operation on the first range control information to obtain edge light range information of the pixel point;
multiplying the edge light range information by edge light color information to obtain first edge light information of the pixel point;
and superposing the first edge light information and the self-luminous information to obtain the second rendering information.
6. The method of claim 1, wherein the target rendering information further comprises third rendering information of the hair, and the obtaining the target rendering information of the hair comprises:
acquiring second edge light information of a first target area and third edge light information of a second target area, wherein the first target area and the second target area are two different areas at the edge of hair;
and based on the light transmission intensity of the hair, performing superposition processing on the second edge light information and the third edge light information to obtain third rendering information.
7. A hair rendering device comprising:
the first rendering processing module is used for performing first rendering processing on the hair of the virtual image to obtain rendering basic information of the hair, wherein the rendering basic information comprises isotropic highlight information;
the second rendering processing module is used for performing second rendering processing on the hair to obtain anisotropic highlight information of the hair;
a first obtaining module, configured to obtain target rendering information of the hair, where the target rendering information includes first rendering information, and the first rendering information includes: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information.
8. The apparatus of claim 7, wherein the first rendering information comprises: rendering information obtained by superimposing the anisotropic highlight information and the isotropic highlight information based on first offset information of the tangent of a hair vertex in the normal direction.
9. The apparatus of claim 8, further comprising:
the second acquisition module is used for acquiring offset noise information and second offset information of a tangent line of the top point of the hair line in the normal direction;
and the noise superposition module is used for superposing the second offset information and the offset noise information to obtain the first offset information.
10. The apparatus of claim 7, wherein the target rendering information further comprises second rendering information of the hair, the second rendering information being used to characterize edge light of the hair, the first obtaining module comprising:
the vector dot multiplication unit is used for performing vector dot multiplication on the normal direction and the visual angle direction of the pixel points on the hair to obtain a dot multiplication result;
and the rendering processing unit is used for performing third rendering processing on the pixel point to obtain second rendering information under the condition that the point multiplication result represents that the pixel point is positioned at the hair edge.
11. The apparatus according to claim 10, wherein the rendering processing unit is specifically configured to:
acquiring self-luminous information of the pixel points;
performing a power operation on the first range control information to obtain edge light range information of the pixel point;
multiplying the edge light range information by edge light color information to obtain first edge light information of the pixel point;
and superposing the first edge light information and the self-luminous information to obtain the second rendering information.
12. The apparatus of claim 7, wherein the target rendering information further comprises third rendering information of the hair, the first obtaining module comprising:
an acquisition unit, configured to acquire second edge light information of a first target area and third edge light information of a second target area, wherein the first target area and the second target area are two different areas at the hair edge;
and the superposition processing unit is used for carrying out superposition processing on the second edge light information and the third edge light information based on the light transmission intensity of the hair to obtain third rendering information.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202111228020.0A 2021-10-21 2021-10-21 Hair rendering method and device and electronic equipment Active CN113888398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111228020.0A CN113888398B (en) 2021-10-21 2021-10-21 Hair rendering method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113888398A true CN113888398A (en) 2022-01-04
CN113888398B CN113888398B (en) 2022-06-07

Family

ID=79004248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111228020.0A Active CN113888398B (en) 2021-10-21 2021-10-21 Hair rendering method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113888398B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023193613A1 (en) * 2022-04-08 2023-10-12 北京字跳网络技术有限公司 Highlight shading method and apparatus, and medium and electronic device
WO2024082927A1 (en) * 2022-10-18 2024-04-25 腾讯科技(深圳)有限公司 Hair rendering method and apparatus, device, storage medium and computer program product

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120212484A1 (en) * 2010-02-28 2012-08-23 Osterhout Group, Inc. System and method for display content placement using distance and location information
CN103109152A (en) * 2010-08-12 2013-05-15 光子动力学公司 High speed acquisition vision system and method for selectively viewing object features
CN105023240A (en) * 2015-07-08 2015-11-04 北京大学深圳研究生院 Dictionary-type image super-resolution system and method based on iteration projection reconstruction
CN105354872A (en) * 2015-11-04 2016-02-24 深圳墨麟科技股份有限公司 Rendering engine, implementation method and producing tools for 3D web game
CN106815881A (en) * 2017-04-13 2017-06-09 腾讯科技(深圳)有限公司 The color control method and device of a kind of actor model
CN106815883A (en) * 2016-12-07 2017-06-09 珠海金山网络游戏科技有限公司 The hair treating method and system of a kind of game role
CN107204036A (en) * 2016-03-16 2017-09-26 腾讯科技(深圳)有限公司 The method and apparatus for generating hair image
CN108335351A (en) * 2018-01-26 2018-07-27 南京大学 A kind of BRDF method of color gamut mapping of color based on orientation statistical analysis
CN108961373A (en) * 2018-05-23 2018-12-07 福建天晴在线互动科技有限公司 A kind of method and terminal of fur rendering
CN109035381A (en) * 2017-06-08 2018-12-18 福建天晴数码有限公司 Cartoon hair rendering method based on UE4 platform, storage medium
CN111145330A (en) * 2019-12-31 2020-05-12 广州华多网络科技有限公司 Human body model rendering method and device, electronic equipment and storage medium
WO2020119444A1 (en) * 2018-12-13 2020-06-18 腾讯科技(深圳)有限公司 Game image rendering method and device, terminal, and storage medium
CN111369655A (en) * 2020-03-02 2020-07-03 网易(杭州)网络有限公司 Rendering method and device and terminal equipment
CN111369658A (en) * 2020-03-24 2020-07-03 北京畅游天下网络技术有限公司 Rendering method and device
CN111462269A (en) * 2020-03-31 2020-07-28 网易(杭州)网络有限公司 Image processing method and device, storage medium and electronic equipment
CN111462293A (en) * 2020-04-02 2020-07-28 网易(杭州)网络有限公司 Special effect processing method, device and equipment for three-dimensional character model and storage medium
CN111899325A (en) * 2020-08-13 2020-11-06 网易(杭州)网络有限公司 Rendering method and device of crystal stone model, electronic equipment and storage medium
CN112154443A (en) * 2018-07-25 2020-12-29 深圳市汇顶科技股份有限公司 Optical fingerprint sensor with folded optical path
CN112200896A (en) * 2020-11-27 2021-01-08 成都完美时空网络技术有限公司 Virtual object wind animation rendering method and device, storage medium and electronic device
CN112316420A (en) * 2020-11-05 2021-02-05 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN112700541A (en) * 2021-01-13 2021-04-23 腾讯科技(深圳)有限公司 Model updating method, device, equipment and computer readable storage medium
CN112884874A (en) * 2021-03-18 2021-06-01 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for applying decals on virtual model
CN113034662A (en) * 2021-03-29 2021-06-25 网易(杭州)网络有限公司 Virtual scene rendering method and device, storage medium and electronic equipment
CN113345063A (en) * 2021-08-05 2021-09-03 南京万生华态科技有限公司 PBR three-dimensional reconstruction method, system and computer storage medium based on deep learning
CN113379885A (en) * 2021-06-22 2021-09-10 网易(杭州)网络有限公司 Virtual hair processing method and device, readable storage medium and electronic equipment

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ZHI QIAO et al.: "A GAN-based temporally stable shading model for fast animation of photorealistic hair", Computational Visual Media, vol. 7, no. 1, 31 March 2021 (2021-03-31), pages 127-138 *
WU Dedao et al.: "Research and Implementation of Highly Realistic Real-time Liver Rendering Based on Subsurface Scattering", Journal of Nanchang University (Natural Science), vol. 44, no. 5, 31 October 2020 (2020-10-31), pages 482-491 *
SUN Zhengzhong: "A Hair Modeling and Rendering Scheme for Mobile Platforms", Electronic Design Engineering (《电子设计工程》), vol. 26, no. 23, 31 December 2018 (2018-12-31), pages 16-20 *
ZHANG Cheng: "Cartoon Non-Photorealistic Rendering Based on Semantic Information Extraction", Graphics and Image (《图形图像》), 31 December 2019 (2019-12-31), pages 82-86 *
ZHANG Pian et al.: "Real-time Hair Simulation and Rendering Using Fast Gaussian Kernel Interpolation", Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》), vol. 29, no. 2, 28 February 2017 (2017-02-28), pages 320-327 *
TAN Jie et al.: "Research on Dynamic Simulation of Realistic Virtual Human Hair", Application Research of Computers (《计算机应用研究》), vol. 34, no. 4, 30 April 2017 (2017-04-30), pages 1226-1230 *
CHEN Yunan et al.: "Research on Image Feature Extraction and Matching Based on Face Recognition", Artificial Intelligence and Identification Technology (《人工智能与识别技术》), no. 13, 31 December 2016 (2016-12-31), pages 160-161 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023193613A1 (en) * 2022-04-08 2023-10-12 北京字跳网络技术有限公司 Highlight shading method and apparatus, and medium and electronic device
WO2024082927A1 (en) * 2022-10-18 2024-04-25 腾讯科技(深圳)有限公司 Hair rendering method and apparatus, device, storage medium and computer program product

Also Published As

Publication number Publication date
CN113888398B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
WO2021129044A1 (en) Object rendering method and apparatus, and storage medium and electronic device
CN108876931B (en) Three-dimensional object color adjustment method and device, computer equipment and computer readable storage medium
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
CN113888398B (en) Hair rendering method and device and electronic equipment
RU2427918C2 (en) Metaphor of 2D editing for 3D graphics
US20070139408A1 (en) Reflective image objects
Li et al. Physically-based editing of indoor scene lighting from a single image
CN114820906B (en) Image rendering method and device, electronic equipment and storage medium
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN111462293B (en) Special effect processing method, device, equipment and storage medium for three-dimensional character model
CN113052947B (en) Rendering method, rendering device, electronic equipment and storage medium
WO2023066121A1 (en) Rendering of three-dimensional model
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
CN112053423A (en) Model rendering method and device, storage medium and computer equipment
CN116363288A (en) Rendering method and device of target object, storage medium and computer equipment
JP2009508234A (en) 2D / 3D combined rendering
CN104657097A (en) Method and device for displaying wave-shaped dynamic image
CN113610955A (en) Object rendering method and device and shader
US20180005432A1 (en) Shading Using Multiple Texture Maps
WO2015052514A2 (en) Rendering composites/layers for video animations
US10754498B2 (en) Hybrid image rendering system
CN112465941B (en) Volume cloud processing method and device, electronic equipment and storage medium
CN117078838B (en) Object rendering method and device, storage medium and electronic equipment
CN116206046B (en) Rendering processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant