CN115841536A - Hair rendering method and device, electronic equipment and readable storage medium - Google Patents

Hair rendering method and device, electronic equipment and readable storage medium

Info

Publication number
CN115841536A
Authority
CN
China
Prior art keywords
hair
map
rendering
model
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211580931.4A
Other languages
Chinese (zh)
Inventor
邵佳仪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211580931.4A priority Critical patent/CN115841536A/en
Publication of CN115841536A publication Critical patent/CN115841536A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide a hair rendering method, a hair rendering apparatus, an electronic device and a medium, the method comprising: acquiring a hair model of a virtual character, and acquiring map data corresponding to the hair model; performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model; and rendering the hair model with the target map to obtain a corresponding hair rendering result. According to the embodiments of the present disclosure, a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency are selected for hair rendering, which addresses the weak realism of existing hair rendering.

Description

Hair rendering method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a hair rendering method, a hair rendering apparatus, an electronic device, and a computer-readable storage medium.
Background
Most people have hair on their heads, and hair rendering is highly visible and therefore very important. It is also difficult: a single head carries roughly 100K-150K hair strands, there are a large number of different hairstyles, and achieving high-quality hair rendering takes a great deal of time.
Hair rendering matters most in third-person games. In a third-person game, the virtual character on screen spends a large amount of time facing away from the player; hair, as an important part of the character's head located at the visual center, therefore dominates the view, and improving its rendering quality can greatly enhance the player's game experience.
Currently, due to the performance limitations of terminal devices, especially mobile terminals, in-game hair rendering lacks a realistic orientation and high-quality hair representation.
Disclosure of Invention
In view of the above, embodiments of the present disclosure are proposed in order to provide a hair rendering method and a corresponding hair rendering apparatus, an electronic device, and a computer readable storage medium that overcome or at least partially address the above-mentioned problems.
The embodiment of the disclosure discloses a hair rendering method, which comprises the following steps:
acquiring a hair model of a virtual character, and acquiring map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model;
and rendering the hair model with the target map to obtain a corresponding hair rendering result.
Embodiments of the present disclosure also disclose a hair rendering apparatus, comprising:
an obtaining module, configured to obtain a hair model of a virtual character and obtain map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
a merging module, configured to perform image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model;
and a rendering module, configured to render the hair model with the target map to obtain a corresponding hair rendering result.
The embodiment of the present disclosure also discloses an electronic device, including: a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing a hair rendering method as described above.
An embodiment of the present disclosure also discloses a computer-readable storage medium, on which a computer program is stored, and when executed by a processor, the computer program implements the hair rendering method as described above.
The disclosed embodiments include the following advantages:
in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. In this way, a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency are selected for hair rendering, which addresses the weak realism of existing hair rendering.
Drawings
FIG. 1 is a schematic diagram of hair maps produced for a game in the prior art;
FIG. 2 is a schematic diagram of prior art hair cards;
FIG. 3 is a schematic diagram of a prior art strand lighting model;
FIG. 4 is a flow chart illustrating steps of a method for hair rendering according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of steps of another hair rendering method provided by embodiments of the present disclosure;
FIG. 6 is a schematic diagram of hair rendering effects of an embodiment of the present disclosure;
fig. 7 is a block diagram illustrating a hair rendering apparatus according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
So that the above objects, features and advantages of the present disclosure can be more clearly understood, the present disclosure is described in further detail below with reference to the accompanying drawings and specific embodiments. The described embodiments are only a part, not all, of the embodiments of the disclosure. All other embodiments that can be derived by one of ordinary skill in the art based on the embodiments in the disclosure fall within the scope of the disclosure.
Referring to FIG. 1, a schematic diagram of hair maps produced for a game in the prior art, the hair maps may include a Color Map, a transparent map (Alpha Map), a Roughness Map, a Metallic Map and a Shift Map, and may additionally include a Normal Map and an Ambient Occlusion Map (AO Map).
Hair models in games are usually built from flat cards. Referring to FIG. 2, a schematic diagram of prior art hair cards is shown.
Most hair rendering in terminal games, especially mobile games, adopts the Kajiya-Kay anisotropic illumination model (hereinafter, the Kajiya illumination model), an anisotropic line/fiber illumination model. The model was proposed by Kajiya and Kay in 1989 and is an anisotropic model based on the Blinn-Phong illumination model. The biggest advantage of the Blinn-Phong highlight is its fast processing speed: the Phong model must compute V · R, where R is the unit vector of the reflected ray and V is the unit vector of the view direction, whereas the Blinn-Phong model replaces V · R (view direction · reflection direction) with N · H (normal direction · half-angle vector), where H is the half-angle vector. The Blinn-Phong illumination model can produce a circular or elliptical highlight, but it cannot simulate the anisotropic highlight reflected by hair. The model therefore cannot be used directly for hair rendering; the solution proposed by Kajiya and Kay improves on this basis by adding anisotropy.
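As an illustration, the two Kajiya-Kay terms can be sketched in HLSL-style shader code as follows. This is a minimal sketch under the definitions above, not this disclosure's exact shader; all function and variable names are illustrative.
// Minimal Kajiya-Kay sketch. T: hair tangent, L: light direction,
// V: view direction; all vectors normalized.
float KajiyaDiffuseTerm(float3 T, float3 L)
{
    float TdotL = dot(T, L);
    return sqrt(saturate(1.0 - TdotL * TdotL)); // sin(T, L) replaces N · L for a thin cylinder
}
float KajiyaSpecularTerm(float3 T, float3 L, float3 V, float exponent)
{
    float3 H = normalize(L + V); // half-angle vector
    float TdotH = dot(T, H);
    return pow(sqrt(saturate(1.0 - TdotH * TdotH)), exponent); // sin(T, H) replaces N · H
}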
The anisotropic highlight can be simplified to a ring-shaped highlight that appears at the top of the head. It is produced by light reflecting off a large number of fine filaments on the surface of an object, such as hair or a brushed-metal disk. Since hair consists of countless strands, each of which is in effect a slender cylinder, the overall highlight of the hair is the aggregate of the highlights of a large number of cylindrical strands.
Scheuermann presented a strand lighting model in 2004, with several important vectors as shown in FIG. 3: the normal vector N (Normal), perpendicular to the hair; the world tangent vector T (Tangent), along the hair direction; the view direction vector V (View); the incident light vector L (Light); and the half-angle vector H between the view direction vector V and the incident light vector L.
Without any shift processing, the Kajiya illumination model produces a very regular ring-shaped highlight, which is rarely seen in real life because hair strands are not packed tightly on the same plane. To solve this problem, Marschner proposed in 2003 a W-shaped highlight computed with a shift, and computing it requires a shift map (Shift Map).
To shift the highlight along the hair direction, it suffices to offset the bitangent along the normal. If the offset is along the positive normal direction, a new bitangent T' is obtained, and the new normal N' determined by T' and H leans toward the root direction, i.e. the highlight moves closer to the root; offsetting in the opposite direction moves the highlight closer to the tip. Both the Kajiya illumination model and the Marschner model are empirical models that fit the hair highlight at very low computational cost.
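In shader code, this offset is commonly realized by nudging the tangent along the normal before evaluating the highlight. The following is a hedged sketch: the shift value would be sampled from the Shift Map, and the names are illustrative.
// Shift the tangent along the normal; per the description above, a positive
// shift moves the highlight toward the root, a negative shift toward the tip.
float3 ShiftTangent(float3 T, float3 N, float shift)
{
    return normalize(T + shift * N);
}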
In addition to the Kajiya illumination model mentioned above, the current console game market typically uses the Hair Shading Model in the Unreal Engine for hair rendering, bringing the result closer to reality. The diffuse model used by the Unreal Engine hair rendering scheme is a model with anisotropic Subsurface Scattering built on the Kajiya illumination model. Highlight rendering uses a multi-bounce model split into R reflection, TT transmission and TRT secondary reflection: R is parallel light reflected off the hair surface, TT is light transmitted through the strand, and TRT is the portion of the transmitted light that is reflected again when it meets the far wall of the strand and exits at a certain refraction angle.
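A common real-time approximation stacks shifted anisotropic lobes to stand in for R, TT and TRT. The sketch below reuses the KajiyaSpecularTerm and ShiftTangent sketches from above; it is an assumption about one plausible realization, not the Unreal Engine's actual code, and the shift values, exponents and lobe colors are illustrative.
// Three shifted lobes approximating R (surface reflection),
// TT (transmission) and TRT (secondary internal reflection).
float3 ThreeLobeSpecular(float3 T, float3 N, float3 L, float3 V,
                         float3 colorR, float3 colorTT, float3 colorTRT)
{
    float r   = KajiyaSpecularTerm(ShiftTangent(T, N, -0.10), L, V, 100.0);
    float tt  = KajiyaSpecularTerm(ShiftTangent(T, N,  0.00), L, V,  20.0);
    float trt = KajiyaSpecularTerm(ShiftTangent(T, N,  0.10), L, V,  60.0);
    return r * colorR + tt * colorTT + trt * colorTRT;
}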
The two hair rendering methods described above have the following technical drawbacks:
1. The Kajiya illumination model is an eyeballed, empirical model that is not energy-conserving; it can represent only a single layer of specular reflection and cannot represent the scattering, transmission and secondary reflection of real conditions.
On the map side, the Kajiya illumination model uses the traditional PBR (Physically Based Rendering) metallic/roughness workflow. From the standpoint of physically realistic rendering, however, hair is non-metallic (its metallic value is 0), and its roughness is a uniform value, so neither needs to be represented by an additional map.
The gradient from root to tip and the strand texture of the hair are difficult to express with the existing maps; the variation between strands and the hair depth are lost, the overall realism of the hair is weak, and the result tends toward a flat, anime-like style.
2. If the hair illumination model in the Unreal Engine is used for hair rendering, note that the model was designed to run on console platforms. When a large number of light sources need to be rendered, such as parallel light, point lights at night, VLM (Volumetric Lightmap) ambient light and camera virtual light, applying that illumination model algorithm to every light source incurs a huge performance overhead. If the illumination model is not used, the hair cannot show its anisotropic texture in scenes with weak or no parallel light, such as at night or indoors.
3. If the hair quality of the console-platform Unreal Engine is to be reproduced on mobile, the complex computation sharply degrades terminal performance; adopting the Unreal Engine hair illumination model scheme causes a large frame-rate drop, which is even more pronounced in frames where hair occupies a large share of the screen.
4. The original hair rendering scheme does not account for projects with complex, changing environments: the highlight source depends mainly on parallel light, i.e. sunlight, so in places lacking a parallel light source, such as at night or indoors, the rendered hair lacks tonal gradation and specular reflection.
Based on this, the present disclosure enhances the realism of hair rendering with reference to the console-side hair lighting model of the Unreal Engine (UE), using the hair lighting model proposed in Pekelis (2015) instead of the earlier models proposed in Kajiya and Kay (1989) and Marschner (2003). However, because the console-side illumination model involves a large amount of computation, directly adopting it on the mobile terminal to draw hair would cause an obvious frame-rate drop. The disclosure therefore optimizes the map sampling count and applies fitting and simplification schemes, so that a rendering quality close to the console can be achieved on the mobile terminal. The present disclosure aims to provide an efficient and realistic hair rendering method for games on all kinds of terminals.
According to the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model can be obtained, where the map data can include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map can then be used to render the hair model. In this way, a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency are selected for hair rendering, which addresses the weak realism of existing hair rendering.
Referring to fig. 4, a flowchart illustrating steps of a hair rendering method provided by an embodiment of the present disclosure is shown, which may specifically include the following steps:
step 401, obtaining a hair model of a virtual character, and obtaining map data corresponding to the hair model.
The map data comprises a hair root map, a depth map, a specific map and a transparent map.
The hair rendering method provided by the embodiments of the present disclosure may be applied to a terminal device, for example, but not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a game console, a smart speaker, a smart watch, and the like. Preferably, the method is applied to a mobile terminal.
The terminal device can be provided with a game application program, and a game interface of the game application program can display one or more virtual characters.
In the embodiments of the present disclosure, to render hair for a virtual character in a game, the hair model of the virtual character and the map data corresponding to the hair model can be obtained.
In this embodiment, the acquired map data may include a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency.
Step 402, performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model.
In the embodiment of the present disclosure, the root map, the depth map, the specific map, and the transparent map may be used as channel information of the image channel to perform image merging, so as to obtain a target map for the hair model.
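For illustration, sampling the merged map in a pixel shader might look as follows. The channel assignment (R = hair root, G = depth, B = specific, A = transparency) is an assumption suggested by the RDIA naming used later in this disclosure, and _RDIAMap and uv are illustrative names.
float4 rdia = tex2D(_RDIAMap, uv);   // one sample replaces the Color + Alpha maps
float rootToTip = rdia.r;            // gradient from hair root to tip
float hairDepth = rdia.g;            // hair depth / layering
float strandId  = rdia.b;            // specific map: per-strand variation
float hairAlpha = rdia.a;            // transparency of the hair card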
Step 403, rendering the hair model with the target map to obtain a corresponding hair rendering result.
In the embodiment of the present disclosure, after the target map is obtained, the target map may be adopted to render the hair model.
In summary, in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. By selecting a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency for hair rendering, the weak realism of existing hair rendering is addressed.
Referring to fig. 5, a flowchart illustrating steps of another hair rendering method provided in the embodiment of the present disclosure is shown, which may specifically include the following steps:
step 501, obtaining a hair model of a virtual character, and obtaining map data corresponding to the hair model.
The map data comprises a hair root map, a depth map, a specific map and a transparent map.
In the embodiments of the present disclosure, the device responsible for executing the steps of the hair rendering method may be a mobile terminal; of course, the method may also be executed by other terminal devices.
In the prior art, hair maps comprise a color map, a transparent map, a tangent map, an ambient occlusion map, a roughness map, a metallic map and the like. The color map expresses the hair color; the transparent map expresses the hair transparency; the tangent map expresses the strand direction and the highlight direction.
The present disclosure provides a new map production method: a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency are selected for hair rendering.
In an optional embodiment of the present disclosure, the obtaining of the hair model of the virtual character and the map data corresponding to the hair model in step 501 may specifically include the following sub-steps:
Sub-step S11, determining whether the virtual character is a protagonist character.
Sub-step S12, if the virtual character is a protagonist character, obtaining the hair model of the virtual character and obtaining the map data corresponding to the hair model.
The hair rendering method provided by the embodiments of the present disclosure may be applied only to the protagonist in a game: when the virtual character to be rendered is determined to be the protagonist, the method is executed to obtain the hair model of the virtual character and the map data corresponding to the hair model. In a specific implementation, whether a character is the protagonist can be determined through the PLAYERS_SELFS protagonist macro switch.
In an alternative embodiment of the present disclosure, so that the hair rendering method of the present disclosure runs smoothly on low-end devices without affecting the player experience, the method may be executed only for certain character types, depending on the performance tier of the terminal device. For example, on a terminal device with a lower performance tier, only virtual characters of the protagonist type execute the hair rendering method provided by the embodiments of the present disclosure, so that, while terminal performance is preserved, protagonist players get an experience that is as consistent as possible across device models.
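A minimal sketch of such a gate in shader code, assuming the PLAYERS_SELFS protagonist macro switch mentioned above; the two shading paths are hypothetical placeholders.
// Hedged sketch: gate the expensive hair path on the protagonist macro.
float3 ShadeHairForCharacter(float4 rdia, float3 N, float3 T, float3 V)
{
#if defined(PLAYERS_SELFS)
    return ShadeHairRDIA(rdia, N, T, V); // hypothetical full-quality RDIA path
#else
    return ShadeHairSimple(rdia.a);      // hypothetical cheap fallback path
#endif
}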
In an embodiment of the disclosure, the target map includes at least one transparent channel and a plurality of color channels.
Step 502, respectively using the hair root map, the depth map and the specific map as channel information of the plurality of color channels, and using the transparent map as channel information of the transparent channel to perform image merging to obtain the target map for the hair model.
The image channels of the target map may include at least one transparent channel and a plurality of color channels.
In the embodiments of the present disclosure, the hair root map, the depth map, the specific map and the transparent map may be combined into a single map (the target map); this new map production scheme may be referred to as the RDIA map scheme. The four channels of the hair root map, the depth map, the specific map and the transparent map are merged into one map, replacing the originally produced Color map (3 channels) plus Alpha map.
Compared with a single Color map scheme, the RDIA map scheme can represent richer, hair-accurate color variations.
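For example, richer color variation than a single Color map can be derived in the shader from the channels unpacked earlier. This is a hedged sketch; the parameters _RootColor, _TipColor, _IdVariation and _DepthDarken are illustrative assumptions.
float3 hairColor = lerp(_RootColor.rgb, _TipColor.rgb, rootToTip);   // root-to-tip gradient
hairColor *= lerp(1.0 - _IdVariation, 1.0 + _IdVariation, strandId); // per-strand brightness variation
hairColor *= lerp(_DepthDarken, 1.0, hairDepth);                     // darken inner, occluded layers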
On the art-resource side, the RDIA maps can be produced with the HairTG-HairClumpTool plug-in. With the RDIA map scheme, only one set of map resources needs to be produced per type of hair card: an artist produces a single map containing various curls and long and short strands, which can then be used for all hair cards, while a modeler only places, scales and adjusts the existing cards. This production pipeline can cut art production time by about 80% and the map resource package size by about 90%.
At the same time, the scheme exposes the artistic controls to artists in parameter form, letting them dial in the desired effect more intuitively, reducing the time spent repeatedly revising maps when the result is unsatisfactory, and improving production efficiency by about 60%.
In an alternative embodiment of the present disclosure, the map data used for hair rendering in the present disclosure does not include the traditional PBR maps (including the roughness map, the metallic map and the ambient occlusion map).
At the physical level, hair is a non-metallic object (its metallic value is 0), so the metallic map can simply be deleted. Roughness can be adjusted with a parameter combined with the specific map, so no standalone roughness map needs to be made. The ambient occlusion map can be baked into the vertex color to reduce map sampling.
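In shader terms, the deleted maps collapse to constants and vertex data. A minimal sketch under the assumptions above; the vertex-color channel used for the baked ambient occlusion is illustrative.
const float hairMetallic = 0.0;        // hair is a dielectric, so metallic is fixed at 0
float hairRoughness = _RoughnessScale; // one uniform parameter, modulated with the specific map if desired
float hairAO = input.vertexColor.r;    // ambient occlusion baked into the vertex color (assumed channel)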
Furthermore, when the RDIA map scheme is selected, the specific (per-strand variation) map serves much the same purpose as the shift map, so the shift map can be deleted. For the tangent map, a switch may be provided so that it replaces the normal map of the original scheme, or both may be used.
Removing these unnecessary maps reduces the sampling count and the package size, thereby reducing the performance cost on the terminal device.
Step 503, rendering the hair model with the target map to obtain a corresponding hair rendering result.
In the embodiments of the present disclosure, the hair model may be rendered with reference to the hair illumination model of the Unreal Engine. However, since the hair illumination model in the Unreal Engine does not compute IBL (Image-Based Lighting), ambient light is introduced so that the hair changes as the environment light changes.
In an optional embodiment of the present disclosure, the rendering of the hair model with the target map to obtain a corresponding hair rendering result in step 503 may specifically include the following sub-steps:
Sub-step S21, determining the incident direction and the illumination parameters of the ambient light for the hair model;
Sub-step S22, rendering the hair model with the target map, generating an anisotropic highlight with the ambient light, and performing highlight rendering on the hair model based on the anisotropic highlight to obtain a corresponding hair rendering result.
In the embodiments of the disclosure, the incident direction and the illumination parameters of the ambient light for the hair model may be determined, an anisotropic highlight may be generated with the ambient light, and highlight rendering may be performed on the hair model based on the anisotropic highlight.
In an optional embodiment of the present disclosure, the determining of the incident direction and the illumination parameters of the ambient light for the hair model in sub-step S21 may specifically include the following sub-steps:
determining a target vector perpendicular to the normal through the world normal vector and the camera vector, and taking the direction of the target vector as the incident direction of the ambient light; obtaining the color parameter of the ambient light, and calculating the illumination parameter of the diffuse reflection light and the illumination parameter of the highlight; and computing the illumination parameter of the ambient light from the illumination parameter of the diffuse reflection light, the illumination parameter of the highlight and the color parameter of the ambient light.
In an optional embodiment of the present disclosure, the calculating of the illumination parameter of the diffuse reflection light and the illumination parameter of the highlight may specifically include the following sub-step:
using the Kajiya illumination model to calculate the illumination parameter of the diffuse reflection light, and taking the illumination parameter produced by the R reflection as the illumination parameter of the highlight.
To make the hair respond to ambient light, a target vector L perpendicular to the normal (i.e. lying in the surface plane) is obtained from the world normal vector N and the camera vector V, and L is taken as the incident direction of the ambient light.
With L as the incident direction of the ambient light, the illumination parameter AmbientDiffuse of the diffuse reflection light of the ambient light and the illumination parameter HairShadingSpecular of the highlight are calculated. AmbientDiffuse can be computed with the same Kajiya illumination model algorithm used to compute the diffuse term Diffuse under parallel light; for the IBL highlight, only the R reflection is computed, out of consideration for terminal performance; finally the sum is multiplied by the color parameter AmbientLightColor of the ambient light to obtain the illumination parameter AmbientLight of the ambient light.
The calculation may be as follows:
L = normalize(V - N * dot(V, N));
AmbientDiffuse = KajiyaDiffuse;
HairShadingSpecular = R;
AmbientLight = AmbientLightColor * (AmbientDiffuse + HairShadingSpecular);
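Put together as a self-contained function, the ambient-light step might read as follows. This is a hedged restatement of the calculation above that reuses the earlier Kajiya sketches; the specular exponent is illustrative, and the single lobe stands in for computing only the R reflection.
float3 EvalAmbientLight(float3 N, float3 T, float3 V, float3 ambientLightColor)
{
    // Project V onto the plane perpendicular to N to get a pseudo light
    // direction, so the hair reacts to the environment light.
    float3 L = normalize(V - N * dot(V, N));
    float ambientDiffuse = KajiyaDiffuseTerm(T, L);
    float hairShadingSpecular = KajiyaSpecularTerm(T, L, V, 80.0); // R lobe only
    return ambientLightColor * (ambientDiffuse + hairShadingSpecular);
}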
in an optional embodiment of the present disclosure, the step 503 of rendering the hair model by using the target map to obtain a corresponding hair rendering result may specifically include the following sub-steps:
and a substep S31, determining the environment type of the virtual character.
And S32, rendering the hair model by adopting the target map, and performing highlight rendering on the hair model by utilizing a virtual light source corresponding to the environment type to obtain a corresponding hair rendering result.
In the embodiment of the disclosure, for different environment types in a game, corresponding virtual light sources can be introduced to perform highlight rendering of hairs, so as to improve the expression effect of the hairs in different environments in the game.
The environment types may include an indoor type, a cave type, an outdoor type and a night type, and the environment types in the game may be set according to actual needs, and the embodiment of the present disclosure is not particularly limited.
For the indoor type, existing solutions light the scene with the VLM (Volumetric Lightmap), but the VLM carries no highlight, so the hair appears flat and lacks a sense of volume. A virtual light source therefore needs to be introduced: a camera virtual light that follows the camera ray and lights the whole virtual character indoors. The light intensity of the indoor camera virtual light may be set as follows:
IndoorIntensity=max(0.1,VLMColor);
where IndoorIntensity is the light intensity of the indoor camera virtual light and VLMColor is the color of the VLM, floored at 0.1 so that the hair never goes completely dark.
For the outdoor type, the outdoor camera virtual light may act only on the dark side of the virtual character, just slightly brightening it and expressing the highlight texture of the dark side. The light intensity of the outdoor camera virtual light may be set as follows:
OutdoorIntensity=1.0-dot(N,L);
where OutdoorIntensity is the light intensity of the outdoor camera virtual light. A Lambert illumination model is used here: the dot product of the normal N and the incident parallel light L gives the lit side of the character, and subtracting it from 1.0 gives the dark side.
For indoor, outdoor, and night types, the intensity and color of the virtual light source may vary following the intensity and color variations of the VLM.
Mask=lerp(0.0,1.0,VLMIntensity);
where Mask is the indoor/outdoor mask and VLMIntensity is the intensity of the VLM; the environment is indoor or night when VLMIntensity is 1, and outdoor when VLMIntensity is 0.
Finally, Mask selects the color and intensity of the camera virtual light between indoor and outdoor:
VirtualLightColor=lerp(IndoorIntensity,OutdoorIntensity,Mask)。
The illumination parameters of the camera virtual light reuse the IBL result computed in the steps above to avoid repeated computation; the specific calculation may be as follows:
NoV=saturate(dot(N,V));
VirtualDiffuse=NoV*BaseColor;
VirtualSpecular=HairShadingSpecular*NoV;
VirtualLight=(VirtualDiffuse+VirtualSpecular)*VirtualLightColor.
where V is the camera vector, N is the world normal vector, and BaseColor is the hair's intrinsic color. VirtualDiffuse is the camera-light diffuse term, again computed with the Lambert illumination model: the L vector in the Lambert dot(N, L) is replaced by the camera vector V, giving NoV, i.e. light arriving from the camera direction. VirtualSpecular is the camera-light highlight, where HairShadingSpecular is the highlight computed for the vector L perpendicular to the normal with only the R term evaluated, and NoV also serves as the range mask for the IBL ambient highlight of the camera virtual light. Finally, VirtualDiffuse and VirtualSpecular are added and multiplied by the light intensity and color to obtain the final camera virtual light VirtualLight.
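As a self-contained, hedged restatement of the camera virtual light above: VirtualLightColor is treated here as the scalar intensity produced by the Mask lerp, and hairShadingSpecular is the R-lobe value already computed for the ambient light.
float3 EvalCameraVirtualLight(float3 N, float3 V, float3 baseColor,
                              float hairShadingSpecular, float virtualLightColor)
{
    float NoV = saturate(dot(N, V));                   // Lambert with L replaced by the camera vector V
    float3 virtualDiffuse = NoV * baseColor;           // camera-light diffuse
    float virtualSpecular = hairShadingSpecular * NoV; // NoV also masks the IBL highlight
    return (virtualDiffuse + virtualSpecular) * virtualLightColor;
}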
In an alternative embodiment of the present disclosure, dynamic lights such as glow sticks and searchlights may also be provided. Dynamic lighting can likewise be made as a low-cost adaptation of the Unreal Engine hair illumination model, and its illumination parameters can be calculated in the same way as the camera virtual light.
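A dynamic light could then reuse the same formulation with the light's own direction and color. The following is an assumption based on "calculated in the same way as the camera virtual light", not this disclosure's exact code.
float3 EvalDynamicLight(float3 N, float3 Ld, float3 baseColor,
                        float hairShadingSpecular, float3 dynamicLightColor)
{
    float NoL = saturate(dot(N, Ld)); // Ld: direction toward the dynamic light (e.g. a glow stick)
    return (NoL * baseColor + hairShadingSpecular * NoL) * dynamicLightColor;
}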
Referring to FIG. 6, schematic diagrams of hair rendering effects according to an embodiment of the present disclosure are shown: in the first row, the left image is the rendering effect under parallel light and the right image under night + starlight; in the second row, the left image is the rendering effect at night and the right image inside a lava cave; in the third row, the left image is the rendering effect by the sea at night and the right image in a warehouse environment.
In summary, in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. By selecting a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency for hair rendering, the weak realism of existing hair rendering is addressed.
The new hair material obtained with the rendering method provided by the present disclosure fills the gap in realistic hair representation in the mobile game market, brings mobile hair rendering closer to console-level rendering, and improves the player's sense of immersion.
Because the hair uses a card-based model, it can show an obvious card-like appearance. Through the new map production scheme, the present disclosure expresses the gradient from root to tip, the strand texture and the hair depth, addressing the weak overall realism of the hair. Merging the map channels and deleting unnecessary maps also reduces the package size and the sampling count.
The embodiments of the present disclosure adapt the Unreal Engine hair illumination model to the mobile terminal, adopting fitted functions and precomputed schemes to reduce the amount of computation and improve rendering performance and frame rate. The new scheme retains the energy conservation of the Unreal Engine hair illumination model and represents the scattering, transmission and secondary reflection of more realistic conditions.
The rendering method of the present disclosure adapts to the complex lighting environments of the game world, such as day-night changes, indoor and outdoor scenes, forests, oceans and caves.
At the rendering level, the scheme tends toward energy conservation, can reflect more realistic lighting conditions, and represents hair transmission in backlight as well as complex specular reflection. It copes with complex environmental changes and expresses the texture and solidity of hair in complex environments such as night, indoor and cave scenes.
At the performance level, the scheme adds some computation to obtain a better rendering result and carries a certain performance overhead; however, performance tiering keeps it running smoothly on low-end devices without affecting the player experience. The hair rendering method provided by the present disclosure can be applied to games on all kinds of terminals.
It is noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the disclosed embodiments are not limited by the described order of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the disclosed embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the disclosed embodiments.
Referring to fig. 7, a structural block diagram of a hair rendering device provided in the embodiment of the present disclosure is shown, which may specifically include the following modules:
an obtaining module 701, configured to obtain a hair model of a virtual character and obtain map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
a merging module 702, configured to perform image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model;
and a rendering module 703, configured to render the hair model with the target map to obtain a corresponding hair rendering result.
In an embodiment of the present disclosure, the target map includes at least one transparent channel and a plurality of color channels, and the merging module includes:
a merging submodule, configured to use the hair root map, the depth map and the specific map as the channel information of the color channels and the transparent map as the channel information of the transparent channel, and perform image merging to obtain the target map for the hair model.
In an embodiment of the present disclosure, the rendering module includes:
the ambient light determining submodule is used for determining the incident direction and the illumination parameters of the ambient light of the hair model;
and the first rendering submodule is used for rendering the hair model by adopting the target map, generating anisotropic highlight by adopting the environment light, and performing highlight rendering on the hair model based on the anisotropic highlight to obtain a corresponding hair rendering result.
In an embodiment of the disclosure, the ambient light determination submodule includes:
an incident direction determining unit, configured to determine a target vector perpendicular to the normal through the world normal vector and the camera vector, and take the direction of the target vector as the incident direction of the ambient light;
the acquisition and calculation unit is used for acquiring color parameters of ambient light and calculating and determining illumination parameters of diffuse reflection light and illumination parameters of highlight;
and the illumination parameter determining unit is used for calculating by adopting the illumination parameter of the diffuse reflection light, the illumination parameter of the highlight and the color parameter of the environment light to obtain the illumination parameter of the environment light.
In an embodiment of the present disclosure, the obtaining and calculating unit includes:
and the determining subunit is used for calculating and determining the illumination parameters of the diffuse reflection light by adopting a Kajiya illumination model, and determining the illumination parameters generated by R reflection as the illumination parameters of the highlight.
In an embodiment of the present disclosure, the rendering module includes:
an environment type determining submodule, configured to determine the environment type in which the virtual character is located;
and a second rendering submodule, configured to render the hair model with the target map and perform highlight rendering on the hair model with the virtual light source corresponding to the environment type, to obtain a corresponding hair rendering result.
In an embodiment of the present disclosure, the obtaining module includes:
a protagonist determining submodule, configured to determine whether the virtual character is a protagonist character;
and an obtaining submodule, configured to, if the virtual character is a protagonist character, obtain the hair model of the virtual character and obtain the map data corresponding to the hair model.
In summary, in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. By selecting a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency for hair rendering, the weak realism of existing hair rendering is addressed.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 8, including: a processor 801, a memory 802 and a computer program stored on the memory and capable of running on the processor, wherein the computer program when executed by the processor implements the processes of the above-mentioned hair rendering method embodiment, and can achieve the same technical effects, for example:
the processor 801, when executing the computer program stored in the memory 802, implements the following steps:
acquiring a hair model of a virtual character, and acquiring map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model;
and rendering the hair model with the target map to obtain a corresponding hair rendering result.
In an optional embodiment, the target map includes at least one transparent channel and a plurality of color channels, and performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain the target map for the hair model includes:
using the hair root map, the depth map and the specific map as the channel information of the plurality of color channels and the transparent map as the channel information of the transparent channel, and performing image merging to obtain the target map for the hair model.
In an optional embodiment, the rendering the hair model by using the target map to obtain a corresponding hair rendering result includes:
determining the incident direction and the illumination parameters of the ambient light for the hair model;
and rendering the hair model by adopting the target map, generating anisotropic highlight by adopting the environment light, and performing highlight rendering on the hair model based on the anisotropic highlight to obtain a corresponding hair rendering result.
In an alternative embodiment, the determining of the incident direction and the illumination parameters of the ambient light for the hair model includes:
determining a target vector perpendicular to the normal through the world normal vector and the camera vector, and taking the direction of the target vector as the incident direction of the ambient light;
acquiring color parameters of ambient light, and calculating and determining illumination parameters of diffuse reflection light and illumination parameters of highlight;
and calculating by adopting the illumination parameter of the diffuse reflection light, the illumination parameter of the highlight and the color parameter of the environment light to obtain the illumination parameter of the environment light.
In an alternative embodiment, the calculating of the illumination parameter of the diffuse reflection light and the illumination parameter of the highlight includes:
using the Kajiya illumination model to calculate the illumination parameter of the diffuse reflection light, and taking the illumination parameter produced by the R reflection as the illumination parameter of the highlight.
In an optional embodiment, the rendering the hair model by using the target map to obtain a corresponding hair rendering result includes:
determining the type of environment in which the virtual character is located;
and rendering the hair model by adopting the target map, and performing highlight rendering on the hair model by utilizing a virtual light source corresponding to the environment type to obtain a corresponding hair rendering result.
In an optional embodiment, the obtaining of a hair model of the virtual character and the map data corresponding to the hair model includes:
determining whether the virtual character is a protagonist character;
and if the virtual character is a protagonist character, acquiring the hair model of the virtual character and acquiring the map data corresponding to the hair model.
In summary, in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. By selecting a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency for hair rendering, the weak realism of existing hair rendering is addressed.
The embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when executed by a processor, the computer program implements the processes of the above-mentioned hair rendering method embodiment, and can achieve the same technical effects, for example:
the processor, when executing the computer program stored on the computer readable storage medium, implements the steps of:
acquiring a hair model of a virtual character, and acquiring map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain a target map for the hair model;
and rendering the hair model with the target map to obtain a corresponding hair rendering result.
In an optional embodiment, the target map includes at least one transparent channel and a plurality of color channels, and performing image merging with the hair root map, the depth map, the specific map and the transparent map as channel information to obtain the target map for the hair model includes:
using the hair root map, the depth map and the specific map as the channel information of the plurality of color channels and the transparent map as the channel information of the transparent channel, and performing image merging to obtain the target map for the hair model.
In an optional embodiment, the rendering the hair model by using the target map to obtain a corresponding hair rendering result includes:
determining the incident direction and the illumination parameters of the ambient light for the hair model;
and rendering the hair model by adopting the target map, generating anisotropic highlight by adopting the environment light, and performing highlight rendering on the hair model based on the anisotropic highlight to obtain a corresponding hair rendering result.
In an alternative embodiment, the determining of the incident direction and the illumination parameters of the ambient light for the hair model includes:
determining a target vector perpendicular to the normal through the world normal vector and the camera vector, and taking the direction of the target vector as the incident direction of the ambient light;
acquiring color parameters of ambient light, and calculating and determining illumination parameters of diffuse reflection light and illumination parameters of highlight;
and calculating by adopting the illumination parameter of the diffuse reflection light, the illumination parameter of the highlight and the color parameter of the environment light to obtain the illumination parameter of the environment light.
In an alternative embodiment, the calculating of the illumination parameter of the diffuse reflection light and the illumination parameter of the highlight includes:
using the Kajiya illumination model to calculate the illumination parameter of the diffuse reflection light, and taking the illumination parameter produced by the R reflection as the illumination parameter of the highlight.
In an optional embodiment, the rendering the hair model by using the target map to obtain a corresponding hair rendering result includes:
determining the type of environment in which the virtual character is located;
and rendering the hair model by adopting the target map, and performing highlight rendering on the hair model by utilizing a virtual light source corresponding to the environment type to obtain a corresponding hair rendering result.
In an optional embodiment, the obtaining of a hair model of the virtual character and the map data corresponding to the hair model includes:
determining whether the virtual character is a protagonist character;
and if the virtual character is a protagonist character, acquiring the hair model of the virtual character and acquiring the map data corresponding to the hair model.
In summary, in the embodiments of the present disclosure, a hair model of a virtual character and map data corresponding to the hair model may be obtained, where the map data may include a hair root map, a depth map, a specific map and a transparent map; the map data is used as the channel information of a composite image and merged to obtain a corresponding target map, and the target map may then be used to render the hair model. By selecting a hair root map representing the gradient from root to tip, a depth map representing hair depth, a specific map representing strand texture and the subtle differences between strands, and a transparent map representing hair transparency for hair rendering, the weak realism of existing hair rendering is addressed.
The embodiments in the present specification are all described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same and similar between the embodiments may be referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the disclosed embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the disclosed embodiments may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Embodiments of the present disclosure are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as covering the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the disclosure.
Finally, it should also be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or terminal device that comprises the element.
The hair rendering method, hair rendering device, electronic device, and computer-readable storage medium provided by the present disclosure have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core ideas. Meanwhile, for a person skilled in the art, there may be variations in the specific implementations and the scope of application based on the ideas of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. A method of hair rendering, the method comprising:
acquiring a hair model of a virtual character, and acquiring map data corresponding to the hair model; the map data comprises a hair root map, a depth map, a specific map and a transparent map;
taking the hair root map, the depth map, the specific map and the transparent map as channel information to carry out image merging to obtain a target map for the hair model;
and rendering the hair model by adopting the target map to obtain a corresponding hair rendering result.
2. The method according to claim 1, wherein the target map comprises at least one transparent channel and a plurality of color channels, and the taking the hair root map, the depth map, the specific map and the transparent map as channel information to carry out image merging to obtain the target map for the hair model comprises:
and respectively taking the hair root map, the depth map and the specific map as channel information of the plurality of color channels, and taking the transparent map as the channel information of the transparent channel, to carry out image merging to obtain the target map for the hair model.
3. The method of claim 1, wherein the rendering the hair model using the target map to obtain a corresponding hair rendering result comprises:
determining the incident direction and the illumination parameters of the ambient light of the hair model;
and rendering the hair model using the target map, generating an anisotropic highlight with the ambient light, and performing highlight rendering on the hair model based on the anisotropic highlight, to obtain a corresponding hair rendering result.
4. The method according to claim 3, wherein the determining the incident direction and the illumination parameters of the ambient light of the hair model comprises:
determining a target vector perpendicular to the plane defined by a world tangent vector and a camera vector, and taking the direction of the target vector as the incident direction of the ambient light;
acquiring a color parameter of the ambient light, and calculating the illumination parameter of the diffuse reflection light and the illumination parameter of the highlight;
and calculating the illumination parameter of the ambient light using the illumination parameter of the diffuse reflection light, the illumination parameter of the highlight, and the color parameter of the ambient light.
5. The method of claim 4, wherein calculating the illumination parameters of the diffuse reflection light and the highlight comprises:
calculating the illumination parameter of the diffuse reflection light using a Kajiya illumination model, and determining the illumination parameter generated by R reflection as the illumination parameter of the highlight.
6. The method of claim 1, wherein the rendering the hair model using the target map to obtain a corresponding hair rendering result comprises:
determining the type of environment in which the virtual character is located;
and rendering the hair model using the target map, and performing highlight rendering on the hair model with a virtual light source corresponding to the environment type, to obtain a corresponding hair rendering result.
7. The method of claim 1, wherein obtaining a hair model of the virtual character and obtaining map data corresponding to the hair model comprises:
determining whether the virtual character is a main character;
and if the virtual character is a main character, acquiring the hair model of the virtual character and acquiring the map data corresponding to the hair model.
8. A hair rendering device, characterized in that the device comprises:
an acquisition module, configured to acquire a hair model of a virtual character and acquire map data corresponding to the hair model, wherein the map data comprises a hair root map, a depth map, a specific map and a transparent map;
a merging module, configured to perform image merging using the hair root map, the depth map, the specific map and the transparent map as channel information, to obtain a target map for the hair model;
and a rendering module, configured to render the hair model using the target map to obtain a corresponding hair rendering result.
9. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing a hair rendering method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a hair rendering method according to any one of claims 1 to 7.
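Claims 3 to 5 describe ambient-light hair shading built on a Kajiya illumination model with an R-reflection highlight, and claim 4 derives the ambient incident direction from a cross product of the world tangent and camera vectors. The following is a minimal sketch of that kind of computation, assuming the widely used Kajiya-Kay tangent-based formulation; the exact terms, the shininess exponent, and the final combination step are illustrative assumptions rather than the patent's formulation:

```python
# A sketch of Kajiya-style strand shading over unit vectors in plain Python.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ambient_incident_dir(tangent, camera):
    # A vector perpendicular to the plane spanned by the world tangent vector
    # and the camera vector; the cross product yields such a vector, which is
    # then used as the ambient incident direction.
    return normalize(np.cross(tangent, camera))

def kajiya_shading(tangent, light, view, shininess=64.0):
    t, l, v = normalize(tangent), normalize(light), normalize(view)
    t_dot_l = float(np.dot(t, l))
    t_dot_v = float(np.dot(t, v))
    sin_tl = np.sqrt(max(0.0, 1.0 - t_dot_l * t_dot_l))
    sin_tv = np.sqrt(max(0.0, 1.0 - t_dot_v * t_dot_v))
    diffuse = sin_tl  # Kajiya diffuse: strongest when light is perpendicular to the strand
    # Primary (R-style) highlight shifted along the strand tangent.
    specular = max(0.0, t_dot_l * t_dot_v + sin_tl * sin_tv) ** shininess
    return diffuse, specular

# Combine diffuse and highlight with an ambient color parameter; the vectors
# and colors below are arbitrary example values.
tangent = normalize(np.array([1.0, 0.0, 0.0]))
camera = normalize(np.array([0.0, 0.2, 1.0]))
light = ambient_incident_dir(tangent, camera)
ambient_rgb = np.array([0.9, 0.85, 0.8])
diffuse, specular = kajiya_shading(tangent, light, camera)
ambient_lighting = ambient_rgb * (diffuse + specular)
```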
CN202211580931.4A 2022-12-09 2022-12-09 Hair rendering method and device, electronic equipment and readable storage medium Pending CN115841536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211580931.4A CN115841536A (en) 2022-12-09 2022-12-09 Hair rendering method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211580931.4A CN115841536A (en) 2022-12-09 2022-12-09 Hair rendering method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115841536A true CN115841536A (en) 2023-03-24

Family

ID=85578389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211580931.4A Pending CN115841536A (en) 2022-12-09 2022-12-09 Hair rendering method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115841536A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258801A (en) * 2023-05-16 2023-06-13 海马云(天津)信息技术有限公司 Hair processing method and device for digital virtual object and storage medium
CN116258801B (en) * 2023-05-16 2023-07-07 海马云(天津)信息技术有限公司 Hair processing method and device for digital virtual object and storage medium

Similar Documents

Publication Publication Date Title
CN111009026B (en) Object rendering method and device, storage medium and electronic device
CN110599574B (en) Game scene rendering method and device and electronic equipment
CN112116692B (en) Model rendering method, device and equipment
CN109035381B (en) Cartoon picture hair rendering method and storage medium based on UE4 platform
US10846919B2 (en) Eye image generation method and apparatus
CN112215934A (en) Rendering method and device of game model, storage medium and electronic device
CN110124318B (en) Method and device for making virtual vegetation, electronic equipment and storage medium
CN106898040A (en) Virtual resource object rendering intent and device
CN111862285A (en) Method and device for rendering figure skin, storage medium and electronic device
CN115841536A (en) Hair rendering method and device, electronic equipment and readable storage medium
CN112819941A (en) Method, device, equipment and computer-readable storage medium for rendering water surface
CN115100337A (en) Whole body portrait video relighting method and device based on convolutional neural network
CN113409465B (en) Hair model generation method and device, storage medium and electronic equipment
CN111784817A (en) Shadow display method and device, storage medium and electronic device
CN113888398A (en) Hair rendering method and device and electronic equipment
CN113610955A (en) Object rendering method and device and shader
Häggström Real-time rendering of volumetric clouds
CN116363288A (en) Rendering method and device of target object, storage medium and computer equipment
CN110136238A (en) A kind of AR drawing method of combination physical light according to model
CN112446944B (en) Method and system for simulating real environment light in AR scene
CN110136239B (en) Method for enhancing illumination and reflection reality degree of virtual reality scene
CN113470160A (en) Image processing method, image processing device, electronic equipment and storage medium
CN117078838B (en) Object rendering method and device, storage medium and electronic equipment
CN114998505A (en) Model rendering method and device, computer equipment and storage medium
CN116524102A (en) Cartoon second-order direct illumination rendering method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination