CN114119848B - Model rendering method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114119848B
CN114119848B
Authority
CN
China
Prior art keywords
target
dimensional model
map
fusion layer
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111471457.7A
Other languages
Chinese (zh)
Other versions
CN114119848A
Inventor
宋田骥
刘欢
陈烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202111471457.7A priority Critical patent/CN114119848B/en
Publication of CN114119848A publication Critical patent/CN114119848A/en
Priority to PCT/CN2022/128075 priority patent/WO2023098358A1/en
Application granted granted Critical
Publication of CN114119848B publication Critical patent/CN114119848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
        • G06T 15/10 Geometric effects
            • G06T 15/20 Perspective computation
                • G06T 15/205 Image-based rendering
        • G06T 15/50 Lighting effects
            • G06T 15/60 Shadow generation
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a model rendering method, apparatus, computer device, and storage medium, wherein the method includes: obtaining a color map corresponding to a target three-dimensional model; generating, based on a preset illumination direction, a material capture map representing material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map representing photosensitive information of different three-dimensional position areas; performing first fusion processing on the color map and the material capture map to obtain a first fusion layer; performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer; and rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model. According to the embodiments of the disclosure, a stereoscopic visual effect of the material texture can be achieved, so that the rendering effect of the target three-dimensional model is more realistic and vivid.

Description

Model rendering method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the technical field of computer graphics, and in particular to a model rendering method and apparatus, a computer device, and a storage medium.
Background
With the development of computer graphics technology, three-dimensional models which can be displayed to users in three-dimensional scenes are becoming more and more abundant.
Users have increasingly demanding style requirements for three-dimensional models in three-dimensional scenes, such as diversified styles including realistic, cartoon, and hand-drawn styles. In a cartoon-style rendered three-dimensional scene, some three-dimensional models lack vividness and a realistic feel when rendered.
Disclosure of Invention
The embodiment of the disclosure at least provides a model rendering method, a model rendering device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a model rendering method, including:
Obtaining a color map corresponding to the target three-dimensional model;
Generating a material capture map for representing material photosensitive information of the target three-dimensional model in a preset illumination direction and a cube map for representing photosensitive information of different three-dimensional position areas based on the preset illumination direction;
Performing first fusion processing on the color map and the material capturing map to obtain a first fusion layer;
Performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer;
and rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
In an alternative embodiment, the performing a first fusion process on the color map and the material capture map to obtain a first fusion layer includes:
Respectively carrying out the first fusion processing on sub-color maps corresponding to each patch of the target three-dimensional model in the color maps and the material capturing maps to obtain sub-fusion layers corresponding to each patch;
And integrating the sub-fusion layers corresponding to the patches respectively to obtain the first fusion layer.
In an optional implementation manner, the rendering processing is performed on the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model, which includes:
Responding to the input photosensitive parameter information, and carrying out photosensitive processing on the second fusion layer based on the photosensitive parameter information to obtain a second fusion layer after photosensitive processing;
and rendering the target three-dimensional model based on the second fused layer after the photosensitive processing to obtain a rendered target three-dimensional model.
In an alternative embodiment, the photosensitive parameter information includes at least one of metal shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to a reflection area on the target three-dimensional model.
In an alternative embodiment, the method further comprises:
acquiring a tone scale map representing the color change characteristics of a light receiving area in the target three-dimensional model;
And rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model, wherein the rendering comprises the following steps:
Performing tone scale processing on the second fusion layer based on the tone scale information in the tone scale map to obtain a second fusion layer after the tone scale processing; the color gradation information comprises color values corresponding to the color gradations respectively, duty ratio information of a light receiving area of the color gradations in the target three-dimensional model and color fusion information of adjacent color gradations in the color gradations;
and rendering the target three-dimensional model based on the second fusion layer after the color gradation processing to obtain a rendered target three-dimensional model.
In an alternative embodiment, the method further comprises:
Obtaining a channel map representing a region to be rendered of a material in the target three-dimensional model;
After obtaining the first fusion layer, and before performing the second fusion processing on the cube map and the first fusion layer, the method further includes:
Determining a first local fusion layer corresponding to the region to be subjected to material rendering in the first fusion layer based on the channel map;
and performing a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer, including:
Performing second fusion processing on the cube map and the first local fusion layer to obtain a second fusion layer;
After the second fusion layer is obtained, before rendering the target three-dimensional model, the method further comprises:
determining a second local fusion layer corresponding to the region to be subjected to material rendering in the second fusion layer based on the channel map;
And rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model, wherein the rendering comprises the following steps:
And rendering the target three-dimensional model based on the second local fusion layer to obtain a rendered target three-dimensional model.
In an alternative embodiment, the method further comprises:
acquiring an environment map representing the shadow shape of the target three-dimensional model in the preset illumination direction;
And rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model, wherein the rendering comprises the following steps:
Performing light and shadow shape processing on the second fusion layer based on the light and shadow shape indicated by the environment map to obtain a second fusion layer after the light and shadow shape processing;
and rendering the target three-dimensional model based on the second fusion layer after the light and shadow shape processing to obtain a rendered target three-dimensional model.
In a second aspect, an embodiment of the present disclosure further provides a model rendering apparatus, including:
the acquisition module is used for acquiring the color map corresponding to the target three-dimensional model;
The generation module is used for generating a material capture map used for representing material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map of photosensitive information in different three-dimensional position areas based on the preset illumination direction;
the first processing module is used for carrying out first fusion processing on the color map and the material capturing map to obtain a first fusion map layer;
the second processing module is used for carrying out second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer;
and the third processing module is used for rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
According to the model rendering method provided by the embodiment of the disclosure, the color mapping for rendering the target three-dimensional model and the material capturing mapping representing the material photosensitive information are subjected to the first fusion processing, so that the target three-dimensional model can realize basic material texture (such as metal texture); and then, carrying out second fusion processing on the cube map representing the photosensitive information of different three-dimensional position areas of the target three-dimensional model and the first fusion layer, so that the stereoscopic visual display effect of the target three-dimensional model can be realized, and further, the stereoscopic visual effect of texture of materials can be realized when the target three-dimensional model is rendered, so that the rendering effect of the target three-dimensional model is more real and vivid.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may obtain other related drawings from these drawings without inventive effort.
FIG. 1 illustrates a flow chart of a model rendering method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram showing the effect of a material capture map of a metal ball according to an embodiment of the disclosure;
FIG. 3 illustrates an effect schematic of a cube map of a metal sphere provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram showing the effect of a tone scale map according to an embodiment of the present disclosure;
FIG. 5 is a schematic view showing the effect of the metal ball after rendering is completed according to the embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a model rendering apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
With the development of computer graphics technology, the three-dimensional models that can be displayed to users in three-dimensional scenes are becoming more and more abundant. Users have increasingly demanding style requirements for three-dimensional models in three-dimensional scenes, such as diversified styles including realistic, cartoon, and hand-drawn styles. In a cartoon-style rendered three-dimensional scene, some three-dimensional models lack vividness and a realistic feel when rendered.
Based on this, the present disclosure provides a model rendering method, which performs a first fusion process on a color map for rendering a target three-dimensional model and a material capture map representing material sensitization information, so that the target three-dimensional model can achieve basic material texture (e.g., metal texture); and then, carrying out second fusion processing on the cube map representing the photosensitive information of different three-dimensional position areas of the target three-dimensional model and the first fusion layer, so that the stereoscopic visual display effect of the target three-dimensional model can be realized, and further, when the target three-dimensional model is rendered, the stereoscopic visual effect of texture of materials can be realized, and the rendering effect of the target three-dimensional model is more real.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
For the sake of understanding the present embodiment, first, a model rendering method disclosed in the embodiments of the present disclosure will be described in detail, and an execution body of the model rendering method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability.
The model rendering method provided by the embodiment of the present disclosure is described below by taking an execution body as a server as an example.
Referring to fig. 1, a flowchart of a model rendering method according to an embodiment of the disclosure is shown, where the method includes S101 to S105, where:
s101: and obtaining a color map corresponding to the target three-dimensional model.
In the embodiment of the present disclosure, the target three-dimensional model may be a three-dimensional model to be rendered, such as a virtual character model or a virtual object model, applied in a target game scene in a virtual space. The target three-dimensional model may be created using three-dimensional animation rendering and production software running on a personal computer (Personal Computer, PC), such as 3D Studio Max (abbreviated as 3DS Max or 3D Max), Maya, or other three-dimensional model production software.
After the target three-dimensional model is produced, it can be unwrapped to obtain a two-dimensional image in a UV coordinate system (in UV coordinates, U represents the horizontal coordinate axis and V represents the vertical coordinate axis). Each UV coordinate value in the unwrapped two-dimensional image may correspond to a point on the surface of the target three-dimensional model.
A color map (Color Map) may contain the color information of the virtual character corresponding to the target three-dimensional model. Specifically, the color information contained in the color map corresponds to the UV coordinate values, and the color map may contain the color information at each UV coordinate value in the two-dimensional image obtained after unwrapping the target three-dimensional model. The color map may be drawn with image processing software, such as Photoshop or other drawing software.
S102: based on a preset illumination direction, generating a material capture map for representing material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map for representing photosensitive information of different three-dimensional position areas.
The Material Capture (Matcap) map is a two-dimensional planar map under a predetermined illumination direction. Based on the preset illumination direction, the generated material capturing map can comprise two-dimensional material photosensitive information acquired by the camera from the front of the target three-dimensional model under the condition that the pose of the target three-dimensional model is unchanged in the preset illumination direction. Specifically, the texture capture map may include information such as illumination and reflection of a surface of a target texture (e.g., metal) in the rendered target three-dimensional model. When the information contained in the material capturing map is utilized to render the target three-dimensional model, illumination information of the material used for rendering the target three-dimensional model under illumination can be displayed. As shown in fig. 2, the effect of the material capturing map of the metal ball is shown, and the light receiving area, the light reflecting area and the shadow area included in the material capturing map can be seen from fig. 2.
A cube map (Cube Map) may be obtained by baking. Based on the preset illumination direction, the generated cube map may contain photosensitive information acquired by the camera from six directions, with the preset illumination direction fixed and the pose of the target three-dimensional model unchanged. The cube map may include photosensitive information of different three-dimensional position areas of the target three-dimensional model in the preset illumination direction, and may specifically include information such as the shadow areas and reflective areas to be rendered. Fig. 3 schematically shows the effect of the cube map of a metal ball; the reflective areas and shadow areas contained in the cube map can be seen in Fig. 3.
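The embodiments above do not spell out how these two maps are sampled at render time. Purely as an illustrative sketch (not part of the disclosed method), the following Python snippet shows one common convention: a material capture map is looked up with the view-space normal, while a cube map is looked up with a direction vector such as the reflection direction; the data layout and the nearest-face lookup are simplifying assumptions.

```python
import numpy as np

def sample_matcap(matcap, normal_vs):
    # Assumed convention: map the view-space normal's x/y from [-1, 1] to [0, 1]
    # and use it as a texture coordinate into the 2D material capture map.
    u = float(normal_vs[0]) * 0.5 + 0.5
    v = float(normal_vs[1]) * 0.5 + 0.5
    h, w, _ = matcap.shape
    return matcap[int(v * (h - 1)), int(u * (w - 1))]

def sample_cubemap(faces, direction):
    # Simplified lookup: pick the baked face whose axis dominates the direction vector.
    # `faces` is assumed to be a dict keyed by (axis, sign) -> baked face image.
    axis = int(np.argmax(np.abs(direction)))
    sign = 1 if direction[axis] >= 0 else -1
    return faces[(axis, sign)]
```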
The obtained color map, and the generated material capture map and cube map may be added to a game engine, and information contained in the color map, the material capture map and the cube map may be processed by the game engine to obtain rendering information for rendering the target three-dimensional model.
The steps of processing the above-described map to obtain rendering information for rendering the target three-dimensional model will be described in detail below.
S103: and performing first fusion processing on the color mapping and the material capturing mapping to obtain a first fusion layer.
In this step, the process of performing the first fusion processing on the color map and the material capturing map may be: and fusing the color information of the virtual character corresponding to the target three-dimensional model contained in the color map with the photosensitive information of the target three-dimensional model contained in the material capturing map. Here, the first fusion processing procedure is a procedure of sampling color information contained in the color map and photosensitive information of the target three-dimensional model contained in the material capturing map. The obtained first fusion layer contains first rendering information after the color information and the shadow information are fused.
In general, the three-dimensional model is formed by patches, where when performing the first fusion processing on the color map and the material capturing map, in one manner, the sub-color maps corresponding to each patch of the target three-dimensional model in the color map may be respectively subjected to the first fusion processing with the material capturing map, so as to obtain sub-fusion layers respectively corresponding to each patch. And then integrating the sub-fusion layers corresponding to the patches respectively to obtain a first fusion layer.
The sub-color map corresponding to each patch contains color information of the corresponding position of the virtual character corresponding to the patch. The sub-fusion layer corresponding to each patch may include first rendering information after fusion of color information of the patch and light and shadow information included in the material capturing map.
And then integrating the sub-fusion layers corresponding to the patches according to the positions of the patches in the target three-dimensional model respectively to obtain a first fusion layer.
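As a minimal sketch of the per-patch first fusion described above, assuming a multiplicative blend clamped to [0, 1] and a known UV rectangle for each patch (both are assumptions for illustration, not the disclosed formula):

```python
import numpy as np

def build_first_fusion_layer(sub_color_maps, patch_uv_rects, matcap_value,
                             layer_size=(512, 512)):
    first_layer = np.zeros(layer_size + (3,), dtype=np.float32)
    for patch_id, sub_color in sub_color_maps.items():
        # Per-patch sub-fusion layer: the patch's color fused with the matcap light/shadow value.
        sub_layer = np.clip(sub_color * matcap_value, 0.0, 1.0)
        # Integrate the sub-fusion layer at the patch's position (its UV rectangle) in the full
        # layer; sub_color is assumed to already match the rectangle's size.
        (u0, v0), (u1, v1) = patch_uv_rects[patch_id]
        first_layer[v0:v1, u0:u1] = sub_layer
    return first_layer
```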
S104: and carrying out second fusion processing on the cube map and the first fusion map layer to obtain a second fusion map layer.
In this step, the process of performing the second fusion processing on the cube map and the first fusion layer may be: fusing the shadow information of different three-dimensional position areas in the target three-dimensional model contained in the cube map with the rendering information contained in the first fusion layer. Here, the second fusion processing procedure is a procedure of sampling the shadow information of the different three-dimensional position areas in the target three-dimensional model contained in the cube map and the rendering information contained in the aforementioned first fusion layer. The obtained second fusion layer contains second rendering information that can be used for rendering the target three-dimensional model.
S105: and rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
When the target three-dimensional model is rendered, rendering may be performed directly using the second rendering information contained in the second fusion layer. The rendered target three-dimensional model fuses the color information corresponding to each point of the target three-dimensional model stored in the color map, the planar light and shadow information stored in the material capture map that represents the material photosensitive information of the target three-dimensional model in the preset illumination direction, and the three-dimensional light and shadow information stored in the cube map that represents the photosensitive information of different three-dimensional position areas of the target three-dimensional model in the preset illumination direction. As a result, the rendered target three-dimensional model can present the stereoscopic visual effect of the target material in the preset illumination direction; Fig. 5 shows the effect of the metal ball after rendering is completed.
In order to make the stereoscopic effect of the target three-dimensional model more realistic, in one embodiment, the second fusion layer may be subjected to photosensitive processing based on the photosensitive parameter information in response to the input photosensitive parameter information, so as to obtain the second fusion layer after the photosensitive processing. And then rendering the target three-dimensional model based on the second fused layer after the photosensitive treatment to obtain the rendered target three-dimensional model.
Here, the method may respond to photosensitive parameter information input on an operation interface of the game engine. The input photosensitive parameter information can enhance the photosensitive effect of the target material (such as metal) in the target three-dimensional model, for example the light and shadow intensity effect and the reflection color. Thus, in yet another embodiment, the input photosensitive parameter information may include at least one of metal light and shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to the reflective area on the target three-dimensional model.
Therefore, in the implementation process, the intensity of the light shadow in the target three-dimensional model can be adjusted based on the metal light shadow intensity information. And carrying out photosensitive treatment on the second fusion layer by utilizing the metal shadow intensity information, wherein the metal shadow intensity information is fused in the obtained second fusion layer after the photosensitive treatment. After the target three-dimensional model is rendered by using the second fusion layer after the photosensitive treatment, the light and shadow intensity of metal on the target three-dimensional model is larger, namely the target three-dimensional model presents a visual effect that the reflection area is brighter and the shadow area is darker.
In the implementation process, the reflection intensity of the reflection area in the target three-dimensional model can be adjusted based on the metal reflection intensity information. And carrying out photosensitive treatment on the second fusion layer by utilizing the metal reflection intensity information, wherein the obtained second fusion layer subjected to the photosensitive treatment fuses the metal reflection intensity information. After the target three-dimensional model is rendered by using the second fusion layer after the photosensitive treatment, the reflection intensity of metal on the target three-dimensional model is larger, and the visual effects that the reflection area is brighter and the shadow area is darker can be displayed on the target three-dimensional model.
In the implementation process, the reflection color of the reflective area in the target three-dimensional model can also be adjusted based on the environmental reflection color information corresponding to the reflective area on the target three-dimensional model. The second fusion layer is subjected to photosensitive processing using the environmental reflection color information, and the resulting photosensitive-processed second fusion layer fuses the environmental reflection color information. After the target three-dimensional model is rendered using the photosensitive-processed second fusion layer, the reflection color of the metal on the target three-dimensional model can better match the environmental color in the target game scene.
In the implementation process, the second fusion layer can be subjected to photosensitive treatment by utilizing a plurality of photosensitive parameter information, so that the authenticity of the stereoscopic vision effect can be further improved.
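Purely as an illustration of how such photosensitive parameter information might be applied to a second fusion layer, the following Python sketch shows one possible formulation; the masks, scaling rules, and parameter names are assumptions, not the disclosed processing.

```python
import numpy as np

def apply_photosensitive_params(second_layer, reflect_mask, shadow_mask,
                                shadow_intensity=0.0, reflect_intensity=0.0,
                                env_reflect_color=(1.0, 1.0, 1.0)):
    layer = second_layer.astype(np.float32).copy()
    reflect = reflect_mask[..., None]
    shadow = shadow_mask[..., None]
    # Metal light/shadow intensity: brighten reflective areas and darken shadow areas.
    layer = layer * (1.0 + shadow_intensity * (reflect - shadow))
    # Metal reflection intensity: further scale the shading inside reflective areas.
    layer = layer * (1.0 + reflect_intensity * reflect)
    # Environmental reflection color: tint reflective areas toward the scene's ambient color.
    tint = np.asarray(env_reflect_color, dtype=np.float32)
    layer = layer * (1.0 - reflect) + layer * tint * reflect
    return np.clip(layer, 0.0, 1.0)
```

With the default parameter values the function leaves the layer unchanged, which makes it easy to enable each adjustment independently.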
In order to make the stereoscopic effect of the target three-dimensional model more realistic, in an embodiment, the color of the light-receiving area in the target three-dimensional model may be further subjected to a color gradation process, so that the light-receiving area in the target three-dimensional model has a visual effect of color change. Specifically, the method can include performing a tone scale process on the second fusion layer based on tone scale information in the tone scale map to obtain a tone scale processed second fusion layer; the color gradation information comprises color values corresponding to the color gradations respectively, duty ratio information of the light receiving area of the color gradations in the target three-dimensional model and color fusion information of adjacent color gradations in the color gradations. And then, rendering the target three-dimensional model based on the second fusion layer after the color gradation processing to obtain a rendered target three-dimensional model.
Here, the tone scale map may be a map, created by drawing software, that represents the color change characteristics of the light-receiving area in the target three-dimensional model. The tone scale map may include a plurality of tone scales. The tone scales in the tone scale map may be arranged in order of changing color shade. The effect diagram of a tone scale map shown in Fig. 4 may include 4 tone scales, arranged from left to right in order from dark to light. The duty ratio of each tone scale in the whole tone scale map may be different. The color value of two adjacent tone scales at their boundary may be the color fusion value obtained after fusing the color values of the two tone scales.
And performing tone scale processing on the second fusion layer by using the tone scale information in the tone scale map, wherein the obtained tone scale information in the tone scale map is fused in the second fusion layer after the tone scale processing. After the target three-dimensional model is rendered by using the second fusion layer after the color gradation processing, the color of the light receiving area on the target three-dimensional model can show a gradual change effect, so that the situation that the color jumps from the shadow area to the light receiving area can be avoided, and the authenticity of the stereoscopic vision effect is increased.
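As an illustrative Python sketch of such tone scale processing, assuming per-level colors ordered from dark to light, per-level duty ratios that sum to 1, and a simple linear blend near level boundaries (the blend rule and the `blend_width` parameter are assumptions):

```python
import numpy as np

def apply_tone_scale(luminance, level_colors, level_ratios, blend_width=0.05):
    boundaries = np.cumsum(level_ratios)          # upper luminance bound of each tone level
    for i, upper in enumerate(boundaries):
        if luminance <= upper or i == len(boundaries) - 1:
            color = np.asarray(level_colors[i], dtype=np.float32)
            # Color fusion of adjacent tone levels near the boundary avoids a hard color jump.
            if i + 1 < len(level_colors) and 0.0 <= (upper - luminance) < blend_width:
                t = 0.5 * (1.0 - (upper - luminance) / blend_width)
                color = (1.0 - t) * color + t * np.asarray(level_colors[i + 1], dtype=np.float32)
            return color
```

For example, `apply_tone_scale(0.7, [(0.2, 0.1, 0.1), (0.5, 0.3, 0.3), (0.8, 0.6, 0.6), (1.0, 0.9, 0.9)], [0.25, 0.25, 0.25, 0.25])` falls into the third tone level, matching the four-level arrangement in Fig. 4.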
In order to make the stereoscopic effect of the target three-dimensional model more real, in an embodiment, the light and shadow shapes in the target three-dimensional model can be further processed, so that the light and shadow shapes in the target three-dimensional model more conform to the light and shadow shapes in the real environment. Specifically, the second fused layer may be subjected to a light and shadow shape processing based on the light and shadow shape indicated by the environmental map, to obtain a second fused layer after the light and shadow shape processing. And then, rendering the target three-dimensional model based on the second fusion layer after the shadow shape processing to obtain the rendered target three-dimensional model.
Here, the environment map may be a map representing a light and shadow shape of the target three-dimensional model in a preset illumination direction, which is made by drawing software. The environment map includes shape information of the reflective area and shape information of the shadow area.
And performing light and shadow shape processing on the second fusion layer by utilizing the light and shadow shape information in the environment map, wherein the light and shadow shape information in the environment map is fused in the obtained second fusion layer after the light and shadow shape processing. After the target three-dimensional model is rendered by the second fusion layer after the light and shadow shape processing, the shape of the light reflecting area and the shape of the shadow area on the target three-dimensional model can be more in line with the light and shadow shape in the real environment, so that the authenticity of the stereoscopic vision effect is improved.
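A minimal Python sketch of such light and shadow shape processing, assuming the environment map is supplied as two shape masks (a highlight shape and a shadow shape) and that a simple additive adjustment is used (both are assumptions for illustration):

```python
import numpy as np

def apply_light_shadow_shape(second_layer, highlight_shape, shadow_shape,
                             highlight_boost=0.3, shadow_drop=0.3):
    layer = second_layer.astype(np.float32).copy()
    # Brighten pixels inside the authored highlight shape and darken pixels inside the
    # authored shadow shape, so the rendered silhouettes follow the environment map.
    layer = layer + highlight_boost * highlight_shape[..., None]
    layer = layer - shadow_drop * shadow_shape[..., None]
    return np.clip(layer, 0.0, 1.0)
```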
It is contemplated that in some cases only the target area in the target three-dimensional model may need to be material rendered, such as metal rendering of a metal paillette on a virtual character garment. Therefore, the channel map representing the region to be rendered in the target three-dimensional model can be obtained, and the region indicated by the region information to be rendered is rendered through the region information stored in the channel map.
In a specific implementation, after the first fusion layer is obtained and before the second fusion processing is performed on the cube map and the first fusion layer, a first local fusion layer corresponding to the region to be subjected to material rendering in the first fusion layer is determined based on the channel map. The obtained first local fusion layer may include the region information to be rendered. After the first local fusion layer is obtained, the second fusion processing may be performed on the cube map and the first local fusion layer that includes the region information to be rendered.
After the second local fusion layer is obtained and before rendering the target three-dimensional model, the second local fusion layer corresponding to the region to be rendered can be determined in the second fusion layer based on the channel map. And then rendering the target three-dimensional model based on the second local fusion layer to obtain a rendered target three-dimensional model.
By using the channel map, the region to be subjected to material rendering can be subjected to material rendering, so that the region to be subjected to material rendering presents an effect of special material texture.
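A minimal sketch of region selection with a channel map, assuming the mask is a single channel with values in [0, 1] (illustrative only):

```python
import numpy as np

def select_material_region(base_layer, material_layer, channel_mask):
    # Keep the base shading outside the mask and the material-rendered shading inside it,
    # e.g. restricting the metal effect to sequins on a garment.
    mask = channel_mask[..., None]
    return base_layer * (1.0 - mask) + material_layer * mask
```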
In an embodiment of the present disclosure, various texture maps may be processed by algorithms for material rendering to obtain rendering information for rendering the target three-dimensional model. The algorithms used here mainly include an illumination coloring algorithm and a light and shadow algorithm. Below, the process of performing algorithm processing on the various texture maps will be described in detail by taking a metal material as an example.
In an implementation, a color map and a material capture map corresponding to the target three-dimensional model may be obtained.
Here, it may be assumed that the color information of the virtual character corresponding to the target three-dimensional model contained in the color map is a, the two-dimensional light and shadow information of the target three-dimensional model in the preset illumination direction contained in the material capture map is b, and the rendering information contained in the first fusion layer is c. The rendering information c contained in the first fusion layer may then be obtained from a and b using a first formula included in the illumination coloring algorithm.
The role of the saturate(i) function may be to clamp i to the range [0, 1]: if i is greater than 1, i is set to 1, and if i is less than 0, i is set to 0, where i is any variable. The saturate(i) function thus constrains i to a value between 0 and 1. That is, through the first formula, the rendering information contained in the first fusion layer takes values between 0 and 1.
The basic metallic texture of the target three-dimensional model can be expressed through the illumination coloring algorithm.
Next, the cube map and the channel map corresponding to the target three-dimensional model are acquired. Here, it is assumed that the light and shadow information, contained in the cube map, of the different three-dimensional position areas of the target three-dimensional model in the preset illumination direction is d, and that the region information to be material-rendered in the target three-dimensional model contained in the channel map is y. The region information value in the channel map may be stored in any channel. In addition, the input metal light and shadow intensity information x may also be acquired.
Here, a second formula in the light and shadow algorithm is first used to obtain the light and shadow information d1 after smoothing the light and shadow information in the cube map. The smoothstep function may return a smooth interpolation between 0 and 1.
Then, according to a third formula in the light and shadow algorithm, the metal light and shadow intensity information x is used to perform light and shadow intensity fusion processing on the light and shadow information d1, the rendering information c contained in the first fusion layer, and the two-dimensional light and shadow information b of the target three-dimensional model, so as to obtain the light and shadow information d2 after the light and shadow intensity fusion processing.
Finally, according to a fourth formula in the light and shadow algorithm, region selection processing is performed, using the region information y of the region to be material-rendered, on the rendering information c contained in the first fusion layer and the light and shadow information d2 after the light and shadow intensity fusion processing, so as to obtain the light and shadow rendering information f after the region selection processing.
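The four formulas themselves are not reproduced in the text above. Purely as an illustrative reading of the described steps, the following Python sketch shows one plausible set of forms; the multiplicative first fusion, the smoothstep edges, and the lerp-based third and fourth formulas are assumptions rather than the patented formulas.

```python
import numpy as np

def saturate(i):
    # Clamp i to [0, 1], as described for the first formula.
    return np.clip(i, 0.0, 1.0)

def smoothstep(edge0, edge1, t):
    # Smooth interpolation between 0 and 1, as described for the second formula.
    t = np.clip((t - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def lerp(p, q, w):
    return p + (q - p) * w

def metal_shading(a, b, d, x, y, edge0=0.0, edge1=1.0):
    # a: color-map color, b: matcap light/shadow, d: cube-map light/shadow,
    # x: metal light/shadow intensity, y: channel-map mask of the region to material-render.
    c = saturate(a * b)               # assumed form of the first formula (first fusion)
    d1 = smoothstep(edge0, edge1, d)  # assumed form of the second formula (smoothing)
    d2 = lerp(c, c * d1 * b, x)       # assumed form of the third formula (intensity fusion)
    f = lerp(c, d2, y)                # assumed form of the fourth formula (region selection)
    return f
```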
In the process of rendering the target three-dimensional model by utilizing the shadow rendering information f of the region selection processing, the input metal reflection intensity information, the input environment reflection color information corresponding to the reflection region on the target three-dimensional model, the tone scale map storing the color change characteristics of the light receiving region in the target three-dimensional model and the environment map containing the shadow shape information of the target three-dimensional model in the preset illumination direction can be obtained.
The metal reflection intensity information is fused with the shadow rendering information f of the area selection processing, so that the reflection intensity of the metal on the rendered target three-dimensional model is larger; the environment reflection color information corresponding to the reflection area on the target three-dimensional model is fused with the shadow rendering information f of the area selection processing, so that the reflection color of the metal on the rendered target three-dimensional model can be more in line with the environment color in the target game scene; the color level information of the light receiving area in the target three-dimensional model stored in the color level map is fused with the shadow rendering information f of the area selection processing, so that the color of the light receiving area of the rendered target three-dimensional model can show a gradual change effect; the light shadow shape information of the target three-dimensional model in the preset illumination direction and the light shadow rendering information f of the region selection processing are fused, so that the shape of the light reflecting region and the shape of the shadow region on the rendered target three-dimensional model can be more in line with the light shadow shape in the real environment.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiment of the present disclosure further provides a model rendering device corresponding to the model rendering method, and since the principle of solving the problem by the device in the embodiment of the present disclosure is similar to that of the model rendering method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 6, an architecture diagram of a model rendering apparatus according to an embodiment of the disclosure is shown, where the apparatus includes: an acquisition module 601, a generation module 602, a first processing module 603, a second processing module 604, and a third processing module 605; wherein,
The obtaining module 601 is configured to obtain a color map corresponding to the target three-dimensional model;
A generating module 602, configured to generate, based on a preset illumination direction, a material capture map for representing material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map for representing photosensitive information of the target three-dimensional model in different three-dimensional position areas;
a first processing module 603, configured to perform a first fusion process on the color map and the material capture map, to obtain a first fusion layer;
A second processing module 604, configured to perform a second fusion process on the cube map and the first fusion layer, to obtain a second fusion layer;
And a third processing module 605, configured to perform rendering processing on the target three-dimensional model based on the second fusion layer, so as to obtain a rendered target three-dimensional model.
In an alternative embodiment, the first processing module 603 is specifically configured to:
Respectively carrying out the first fusion processing on sub-color maps corresponding to each patch of the target three-dimensional model in the color maps and the material capturing maps to obtain sub-fusion layers corresponding to each patch;
And integrating the sub-fusion layers corresponding to the patches respectively to obtain the first fusion layer.
In an alternative embodiment, the third processing module 605 is specifically configured to:
Responding to the input photosensitive parameter information, and carrying out photosensitive processing on the second fusion layer based on the photosensitive parameter information to obtain a second fusion layer after photosensitive processing;
and rendering the target three-dimensional model based on the second fused layer after the photosensitive processing to obtain a rendered target three-dimensional model.
In an alternative embodiment, the sensitization parameter information includes at least one of metal shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to a reflection area on the target three-dimensional model.
In an alternative embodiment of the present invention,
The obtaining module 601 is further configured to obtain a tone map that represents a color change characteristic of a light receiving area in the target three-dimensional model;
The third processing module 605 is specifically configured to:
Performing tone scale processing on the second fusion layer based on the tone scale information in the tone scale map to obtain a second fusion layer after the tone scale processing; the color gradation information comprises color values corresponding to the color gradations respectively, duty ratio information of a light receiving area of the color gradations in the target three-dimensional model and color fusion information of adjacent color gradations in the color gradations;
and rendering the target three-dimensional model based on the second fusion layer after the color gradation processing to obtain a rendered target three-dimensional model.
In an alternative embodiment, the obtaining module 601 is further configured to obtain a channel map that represents an area to be rendered in the target three-dimensional model;
the apparatus further comprises:
The first determining module is used for determining a first local fusion layer corresponding to the area to be subjected to material rendering in the first fusion layer based on the channel map;
the second processing module 604 is specifically configured to: performing second fusion processing on the cube map and the first local fusion layer to obtain a second fusion layer;
the apparatus further comprises:
The second determining module is used for determining a second local fusion layer corresponding to the region to be subjected to material rendering in the second fusion layer based on the channel map;
the third processing module 605 is specifically configured to: and rendering the target three-dimensional model based on the second local fusion layer to obtain a rendered target three-dimensional model.
In an alternative embodiment, the obtaining module 601 is further configured to obtain an environmental map that represents a shadow shape of the target three-dimensional model in the preset illumination direction;
The third processing module 605 is specifically configured to:
Performing light and shadow shape processing on the second fusion layer based on the light and shadow shape indicated by the environment map to obtain a second fusion layer after the light and shadow shape processing;
and rendering the target three-dimensional model based on the second fusion layer after the light and shadow shape processing to obtain a rendered target three-dimensional model.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, the embodiments of the disclosure also provide a computer device. Referring to fig. 7, a schematic diagram of a computer device 700 according to an embodiment of the disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is configured to store execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021, also referred to as main memory, is used for temporarily storing operation data in the processor 701 and data exchanged with the external memory 7022 such as a hard disk. The processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the computer device 700 operates, the processor 701 and the memory 702 communicate through the bus 703, so that the processor 701 executes the following instructions:
Obtaining a color map corresponding to the target three-dimensional model;
Generating a material capture map for representing material photosensitive information of the target three-dimensional model in a preset illumination direction and a cube map for representing photosensitive information of different three-dimensional position areas based on the preset illumination direction;
Performing first fusion processing on the color map and the material capturing map to obtain a first fusion layer;
Performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer;
and rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the model rendering method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, where instructions included in the program code may be used to perform the steps of the model rendering method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A model rendering method, comprising:
Obtaining a color map corresponding to the target three-dimensional model;
Generating a material capture map for representing material photosensitive information of the target three-dimensional model in a preset illumination direction and a cube map for representing photosensitive information of different three-dimensional position areas based on the preset illumination direction; the photosensitive information of the different three-dimensional position areas comprises shadow area information to be rendered and reflection area information;
Performing first fusion processing on the color map and the material capturing map to obtain a first fusion layer;
Performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer;
Rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model;
wherein the method further comprises:
Obtaining a channel map representing a region to be rendered of a material in the target three-dimensional model;
After the first fusion layer is obtained, and before the second fusion processing is performed on the cube map and the first fusion layer, the method further includes:
Determining a first local fusion layer corresponding to the region to be subjected to material rendering in the first fusion layer based on the channel map;
After obtaining the second fusion layer, before rendering the target three-dimensional model, the method further includes:
and determining a second local fusion layer corresponding to the region to be subjected to material rendering in the second fusion layer based on the channel map.
2. The method according to claim 1, wherein the performing the first fusion processing on the color map and the material capture map to obtain the first fusion layer comprises:
Performing the first fusion processing on the material capture map and sub-color maps in the color map that respectively correspond to patches of the target three-dimensional model, to obtain a sub-fusion layer corresponding to each patch;
And integrating the sub-fusion layers respectively corresponding to the patches to obtain the first fusion layer.
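A minimal sketch of the per-patch variant in claim 2, under the same blend assumptions as the sketch above; representing the patches as (mask, sub-color-map) pairs is an assumption introduced here only for illustration.

```python
import numpy as np

def first_fusion_per_patch(patches, matcap_map):
    """patches: list of (mask, sub_color_map) pairs, one per patch of the model,
    where mask is HxW in {0, 1} and sub_color_map is HxWx3 in [0, 1]."""
    first_fusion = np.zeros_like(matcap_map)
    for mask, sub_color in patches:
        # Fuse this patch's sub-color map with the material capture map
        # (multiply blend assumed, as above).
        sub_fusion_layer = sub_color * matcap_map
        # Integrate the sub-fusion layer into the combined first fusion layer.
        first_fusion += sub_fusion_layer * mask[..., None]
    return first_fusion
```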
3. The method according to claim 1, wherein the rendering the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model comprises:
In response to input photosensitive parameter information, performing photosensitive processing on the second fusion layer based on the photosensitive parameter information to obtain a second fusion layer after the photosensitive processing;
and rendering the target three-dimensional model based on the second fusion layer after the photosensitive processing to obtain the rendered target three-dimensional model.
4. The method according to claim 3, wherein the photosensitive parameter information comprises at least one of metal shadow intensity information, metal reflection intensity information, and ambient reflection color information corresponding to a light reflection area on the target three-dimensional model.
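Claims 3 and 4 describe photosensitive processing driven by input parameters. The sketch below assumes one simple way of applying the three parameter types named in claim 4 (shadow deepening, a pull toward the ambient reflection color, and a final clamp); the parameter names and blend formulas are illustrative, not the claimed ones.

```python
import numpy as np

def apply_photosensitive_params(second_fusion, shadow_mask, reflection_mask,
                                metal_shadow_intensity=0.5,
                                metal_reflection_intensity=0.5,
                                ambient_reflection_color=(1.0, 1.0, 1.0)):
    """second_fusion: HxWx3 in [0, 1]; shadow_mask / reflection_mask: HxW in [0, 1]
    marking the shadow and light reflection areas on the model."""
    color = np.asarray(ambient_reflection_color, dtype=np.float32)
    out = second_fusion.astype(np.float32)
    # Deepen shadow areas in proportion to the metal shadow intensity.
    out *= 1.0 - metal_shadow_intensity * shadow_mask[..., None]
    # Pull reflective areas toward the ambient reflection color,
    # weighted by the metal reflection intensity.
    blend = metal_reflection_intensity * reflection_mask[..., None]
    out = out * (1.0 - blend) + color * blend
    return np.clip(out, 0.0, 1.0)
```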
5. The method according to claim 1, wherein the method further comprises:
acquiring a tone scale map representing color change characteristics of a light receiving area in the target three-dimensional model;
And the rendering the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model comprises:
Performing tone scale processing on the second fusion layer based on tone scale information in the tone scale map to obtain a second fusion layer after the tone scale processing; wherein the tone scale information comprises a color value corresponding to each tone scale, proportion information of each tone scale within the light receiving area of the target three-dimensional model, and color fusion information of adjacent tone scales;
and rendering the target three-dimensional model based on the second fusion layer after the tone scale processing to obtain the rendered target three-dimensional model.
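The tone scale processing of claim 5 can be pictured as quantizing the lit intensity of the second fusion layer into a small number of bands, each with its own color value and share of the light receiving area, with soft transitions between adjacent bands standing in for the color fusion information. The banding scheme and the smooth blending below are assumptions made for illustration only.

```python
import numpy as np

def apply_tone_scale(second_fusion, tone_colors, tone_ratios, softness=0.05):
    """second_fusion: HxWx3 in [0, 1].
    tone_colors: (N, 3) array, one color value per tone scale band.
    tone_ratios: (N,) array, the share of the light receiving area each band covers.
    softness: width of the blend between adjacent bands (the color fusion information)."""
    tone_colors = np.asarray(tone_colors, dtype=np.float32)
    luminance = second_fusion.mean(axis=-1)   # proxy for how strongly lit a texel is
    edges = np.cumsum(tone_ratios)[:-1]       # band boundaries in [0, 1]
    out = np.broadcast_to(tone_colors[0], second_fusion.shape).copy()
    for edge, color in zip(edges, tone_colors[1:]):
        # Fade smoothly into the next band's color around each boundary.
        t = np.clip((luminance - (edge - softness)) / (2.0 * softness), 0.0, 1.0)
        out = out * (1.0 - t[..., None]) + color * t[..., None]
    return out
```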
6. The method according to claim 1, wherein:
The performing the second fusion processing on the cube map and the first fusion layer to obtain the second fusion layer comprises:
Performing the second fusion processing on the cube map and the first local fusion layer to obtain the second fusion layer;
And the rendering the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model comprises:
Rendering the target three-dimensional model based on the second local fusion layer to obtain the rendered target three-dimensional model.
7. The method according to claim 1, wherein the method further comprises:
acquiring an environment map representing a light and shadow shape of the target three-dimensional model in the preset illumination direction;
And the rendering the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model comprises:
Performing light and shadow shape processing on the second fusion layer based on the light and shadow shape indicated by the environment map to obtain a second fusion layer after the light and shadow shape processing;
and rendering the target three-dimensional model based on the second fusion layer after the light and shadow shape processing to obtain the rendered target three-dimensional model.
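Claim 7's light and shadow shape processing can be read as modulating the second fusion layer by a mask derived from the environment map. Treating the environment map as a grayscale multiplicative mask with a strength control, as in the sketch below, is an assumption for illustration only.

```python
import numpy as np

def apply_light_shadow_shape(second_fusion, environment_map, strength=1.0):
    """second_fusion: HxWx3 in [0, 1]; environment_map: HxW grayscale in [0, 1]
    describing the light and shadow shape under the preset illumination direction."""
    shaped = second_fusion * environment_map[..., None]
    # Interpolate between the untouched layer and the fully shaped layer.
    return second_fusion * (1.0 - strength) + shaped * strength
```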
8. A model rendering apparatus, characterized by comprising:
An acquisition module configured to acquire a color map corresponding to a target three-dimensional model;
A generation module configured to generate, based on a preset illumination direction, a material capture map for representing material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map for representing photosensitive information of different three-dimensional position areas; wherein the photosensitive information of the different three-dimensional position areas comprises shadow area information to be rendered and reflection area information;
A first processing module configured to perform first fusion processing on the color map and the material capture map to obtain a first fusion layer;
A second processing module configured to perform second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer;
A third processing module configured to render the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model;
wherein the acquisition module is further configured to acquire a channel map representing a region to be subjected to material rendering in the target three-dimensional model, and the apparatus further comprises:
A first determining module configured to determine a first local fusion layer corresponding to the region to be subjected to material rendering in the first fusion layer based on the channel map;
And a second determining module configured to determine a second local fusion layer corresponding to the region to be subjected to material rendering in the second fusion layer based on the channel map.
9. A computer device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the model rendering method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the model rendering method according to any one of claims 1 to 7.
CN202111471457.7A 2021-12-05 2021-12-05 Model rendering method and device, computer equipment and storage medium Active CN114119848B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111471457.7A CN114119848B (en) 2021-12-05 2021-12-05 Model rendering method and device, computer equipment and storage medium
PCT/CN2022/128075 WO2023098358A1 (en) 2021-12-05 2022-10-27 Model rendering method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111471457.7A CN114119848B (en) 2021-12-05 2021-12-05 Model rendering method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114119848A (en) 2022-03-01
CN114119848B (en) 2024-05-14

Family

ID=80366486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111471457.7A Active CN114119848B (en) 2021-12-05 2021-12-05 Model rendering method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114119848B (en)
WO (1) WO2023098358A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119848B (en) * 2021-12-05 2024-05-14 北京字跳网络技术有限公司 Model rendering method and device, computer equipment and storage medium
CN117495995A (en) * 2023-10-26 2024-02-02 神力视界(深圳)文化科技有限公司 Method, device, equipment and medium for generating texture map and model training method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229905A (en) * 2017-05-05 2017-10-03 广州视源电子科技股份有限公司 Method, device and the electronic equipment of lip rendered color
CN108734754A (en) * 2018-05-28 2018-11-02 北京小米移动软件有限公司 Image processing method and device
CN110193193A (en) * 2019-06-10 2019-09-03 网易(杭州)网络有限公司 The rendering method and device of scene of game
CN111627119A (en) * 2020-05-22 2020-09-04 Oppo广东移动通信有限公司 Texture mapping method, device, equipment and storage medium
CN112116692A (en) * 2020-08-28 2020-12-22 北京完美赤金科技有限公司 Model rendering method, device and equipment
CN112489179A (en) * 2020-12-15 2021-03-12 网易(杭州)网络有限公司 Target model processing method and device, storage medium and computer equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170287196A1 (en) * 2016-04-01 2017-10-05 Microsoft Technology Licensing, Llc Generating photorealistic sky in computer generated animation
CN112316420B (en) * 2020-11-05 2024-03-22 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN113034661B (en) * 2021-03-24 2023-05-23 网易(杭州)网络有限公司 MatCap map generation method and device
CN114119848B (en) * 2021-12-05 2024-05-14 北京字跳网络技术有限公司 Model rendering method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN114119848A (en) 2022-03-01
WO2023098358A1 (en) 2023-06-08

Similar Documents

Publication Publication Date Title
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN110517355B (en) Ambient composition for illuminating mixed reality objects
CN111369655B (en) Rendering method, rendering device and terminal equipment
CN114119848B (en) Model rendering method and device, computer equipment and storage medium
US7583264B2 (en) Apparatus and program for image generation
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN113658316B (en) Rendering method and device of three-dimensional model, storage medium and computer equipment
CN112784621B (en) Image display method and device
CN108043027B (en) Storage medium, electronic device, game screen display method and device
WO2023098344A1 (en) Graphic processing method and apparatus, computer device, and storage medium
CN116228943B (en) Virtual object face reconstruction method, face reconstruction network training method and device
KR100828935B1 (en) Method of Image-based Virtual Draping Simulation for Digital Fashion Design
CN114529657A (en) Rendering image generation method and device, computer equipment and storage medium
CN108230430B (en) Cloud layer mask image processing method and device
WO2019042028A1 (en) All-around spherical light field rendering method
Law et al. Projector placement planning for high quality visualizations on real-world colored objects
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN115131493A (en) Dynamic light special effect display method and device, computer equipment and storage medium
CN117649477B (en) Image processing method, device, equipment and storage medium
CN114519760A (en) Method and device for generating map, computer equipment and storage medium
CN115063522A (en) Model rendering method and device, electronic equipment and storage medium
Xie et al. A Data-Driven Method for Intrinsic Decomposition of 3D City Reconstruction Scene
CN115082615A (en) Rendering method, rendering device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant