CN112116692A - Model rendering method, device and equipment - Google Patents
- Publication number: CN112116692A
- Application number: CN202010888002.4A
- Authority: CN (China)
- Prior art keywords: map, dimensional, information, target, target object
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T15/005 — 3D [Three Dimensional] image rendering: general purpose rendering architectures
- G06T15/50 — 3D [Three Dimensional] image rendering: lighting effects
- G06T19/20 — Manipulating 3D models or images for computer graphics: editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
An embodiment of the invention provides a model rendering method, device and equipment, where the method includes: acquiring a hand-drawn solid color map, a normal map and a two-dimensional shadow image of a target object, the two-dimensional shadow image carrying the light-and-shadow information of the target object; merging the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object; and outputting the target solid color map and the normal map to render the three-dimensional model of the target object. By merging the two-dimensional shadow image with the hand-drawn solid color map, the light-and-shadow information is added directly into the target solid color map, so the target solid color map can replace the Metallic map or Specular map in the current PBR production workflow. This improves the efficiency of obtaining the maps, reduces the performance requirements that the rendering process places on the device, improves the rendering efficiency of the three-dimensional model, and broadens the range of devices the maps can serve. The target solid color map also keeps the displayed picture of the three-dimensional model from being affected by toggling the PBR effect.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a model rendering method, a model rendering device and model rendering equipment.
Background
Physically Based Rendering (PBR) is a rendering technique based on microfacet surface theory. Because it excels at simulating the reflection of illumination, PBR is widely used in high-end games and in film production.
At present, PBR map production workflows fall into two main types according to the kind of maps produced: the Metalness (Metallic) workflow and the Specular workflow. Either workflow requires modeling, outputting a normal map and a solid (inherent) color map, and then drawing a Roughness map plus a Metallic map or a Specular map based on the solid color map. Shadow, highlight, reflection and Fresnel effects can then be computed in real time from the physical information these maps provide, simulating plausible illumination for the three-dimensional scene under different lighting conditions. However, producing a three-dimensional scene with current PBR technology involves a long workflow, consumes substantial resources, and yields low scene-production efficiency.
In addition, the maps of a three-dimensional scene produced with PBR technology cannot adapt to all devices. On mid- and low-end hardware, a large number of PBR maps slows down the display of the three-dimensional scene and may even prevent rendering from loading at all. If the PBR effect is disabled instead, the displayed scene looks flat, and the visual gap from the scene rendered with the PBR effect is too large.
In summary, improving the efficiency of map production and adapting the maps to a wide variety of devices is a technical problem in urgent need of a solution.
Disclosure of Invention
Embodiments of the invention provide a model rendering method, device and equipment that improve model rendering efficiency and allow the maps to adapt to a variety of devices.
In a first aspect, an embodiment of the present invention provides a model rendering method, where the model rendering method includes:
acquiring a hand-drawn solid color map, a normal map and a two-dimensional shadow image of a target object, where the two-dimensional shadow image carries the light-and-shadow information of the target object;
merging the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object;
and outputting the target solid color map and the normal map to render the three-dimensional model of the target object.
In a possible embodiment, merging the two-dimensional shadow image with the hand-drawn solid color map to obtain the target solid color map of the target object includes:
writing the two-dimensional shadow image and the hand-drawn solid color map into the RGBA channels of a single texture to obtain the target solid color map, where the RGBA channels include an Alpha channel, serving as the light-and-shadow component, that carries the two-dimensional shadow image.
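As a concrete illustration (not part of the claimed method), the channel packing described above can be sketched in a few lines of NumPy; the function name and array layout are assumptions made for the example:

```python
import numpy as np

def merge_shadow_into_alpha(solid_color_rgb: np.ndarray,
                            shadow_gray: np.ndarray) -> np.ndarray:
    """Pack a hand-drawn solid color map (H, W, 3) and a two-dimensional
    shadow grayscale image (H, W) into one RGBA target solid color map,
    with the shadow image carried in the Alpha channel."""
    if solid_color_rgb.shape[:2] != shadow_gray.shape:
        raise ValueError("solid color map and shadow image must match in size")
    h, w, _ = solid_color_rgb.shape
    rgba = np.empty((h, w, 4), dtype=solid_color_rgb.dtype)
    rgba[..., :3] = solid_color_rgb   # RGB: inherent (solid) color
    rgba[..., 3] = shadow_gray        # A: light-and-shadow component
    return rgba
```

Because the shadow image rides in the Alpha channel, the target map remains a single standard RGBA texture that any engine can sample.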
In a possible embodiment, after outputting the target solid color map and the normal map, the method further includes: rendering the three-dimensional model through the target solid color map and the normal map.
In one possible embodiment, rendering the three-dimensional model through the target solid color map and the normal map includes: processing the normal map to obtain the normal information of the target object; processing the target solid color map to obtain the ambient reflected-light information and the color information of the target object; and rendering the three-dimensional model using the normal information, the ambient reflected-light information and the color information.
In one possible embodiment, the RGBA channels of the target solid color map include an Alpha channel for carrying the two-dimensional shadow image.

Processing the normal map to obtain the normal information of the target object includes: sampling the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model.

Processing the target solid color map to obtain the ambient reflected-light information and the color information of the target object includes: sampling the target solid color map to import the ambient reflected-light information carried in the Alpha channel and the color information carried in the RGB channels into the UV expansion layer of the three-dimensional model.

Rendering the three-dimensional model using the normal information, the ambient reflected-light information and the color information includes: generating the three-dimensional model with the color effect and the light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
In one possible embodiment, sampling the target solid color map to import the ambient reflected-light information carried in the Alpha channel into the UV expansion layer of the three-dimensional model includes: sampling the Alpha channel to obtain the indirect diffuse illumination and the specular illumination of the target object; processing the indirect diffuse illumination and the specular illumination through the ambient occlusion map of the material; and importing the processed result into the UV expansion layer of the three-dimensional model as the ambient reflected-light information.
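One plausible reading of this step can be sketched as follows; the split between indirect diffuse and specular parts, the weights, and the function names are assumptions for illustration, not the patent's formula:

```python
import numpy as np

def ambient_reflection(alpha_shadow: np.ndarray, ao: np.ndarray,
                       diffuse_weight: float = 0.7,
                       specular_weight: float = 0.3) -> np.ndarray:
    """Hypothetical sketch: split the Alpha-channel intensity (0..1) into
    indirect diffuse and specular parts, then attenuate both by the
    material's ambient occlusion (AO) map before the result enters the
    UV expansion layer as ambient reflected light."""
    indirect_diffuse = alpha_shadow * diffuse_weight
    specular = alpha_shadow * specular_weight
    return (indirect_diffuse + specular) * ao
```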
In a possible embodiment, processing the target solid color map to obtain the color information of the target object includes: fitting the color information carried in the RGB channels with a function, and importing the fitted function value into the UV expansion layer of the three-dimensional model.
In one possible embodiment, the target solid color map includes a two-dimensional map carrying the ambient reflected-light information. Rendering the three-dimensional model through the target solid color map and the normal map includes: if the physical rendering effect is disabled in the running environment of the target object, sampling the target solid color map to import the color information carried in the RGB channels and the ambient reflected-light information carried in the two-dimensional map into the UV expansion layer of the three-dimensional model; sampling the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model; and generating the three-dimensional model with the color effect and the light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
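A minimal sketch of this fallback path, assuming the baked light in the Alpha channel simply modulates the base color (the modulation rule is an assumption for illustration):

```python
import numpy as np

def shade_without_pbr(rgba_map: np.ndarray) -> np.ndarray:
    """Hypothetical fallback: with the physical rendering effect disabled,
    the baked light-and-shadow value in the Alpha channel still darkens or
    lifts the base color, so the picture keeps a shaded look."""
    base = rgba_map[..., :3].astype(np.float32) / 255.0
    baked_light = rgba_map[..., 3:4].astype(np.float32) / 255.0
    return np.clip(base * baked_light, 0.0, 1.0)
```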
In one possible embodiment, the two-dimensional shadow image is a two-dimensional grayscale map that includes the highlight-region information of the target object.
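For illustration, a painted highlight layer can be reduced to such a grayscale map with standard luma weights; the Rec. 601 weights below are an assumption, not something the patent specifies:

```python
import numpy as np

def shadow_grayscale(highlight_rgb: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: collapse a painted (H, W, 3) highlight/shadow
    layer into the two-dimensional grayscale shadow image using the
    Rec. 601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    luma = highlight_rgb.astype(np.float32) @ weights
    return np.round(luma).astype(np.uint8)
```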
In one possible embodiment, the two-dimensional shadow image is stored in a reflection source of the reflection probe.
In a second aspect, an embodiment of the present invention provides a model rendering apparatus, where the model rendering apparatus includes:
the acquisition module is used for acquiring a hand-drawn solid color map, a normal map and a two-dimensional shadow image of the target object; the two-dimensional shadow image carries shadow information of a target object;
the merging module is used for merging the two-dimensional light and shadow image and the hand-drawn solid color mapping to obtain a target solid color mapping of the target object;
and the output module is used for outputting the target solid color map and the normal map so as to render the three-dimensional model of the target object.
In a possible embodiment, the merging module is specifically configured to: write the two-dimensional shadow image and the hand-drawn solid color map into the RGBA channels of a single texture to obtain the target solid color map, where the RGBA channels include an Alpha channel, serving as the light-and-shadow component, that carries the two-dimensional shadow image.
In a possible embodiment, the model rendering apparatus further includes a rendering module, configured to render the three-dimensional model through the target solid color map and the normal map after the output module outputs them.
In a possible embodiment, the rendering module is specifically configured to: process the normal map to obtain the normal information of the target object; process the target solid color map to obtain the ambient reflected-light information and the color information of the target object; and render the three-dimensional model using the normal information, the ambient reflected-light information and the color information.
In one possible embodiment, the RGBA channels of the target solid color map include an Alpha channel for carrying the two-dimensional shadow image.
The rendering module, when processing the normal map to obtain the normal information of the target object, is specifically configured to: sample the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model.
The rendering module, when processing the target solid color map to obtain the ambient reflected-light information and the color information of the target object, is specifically configured to: sample the target solid color map to import the ambient reflected-light information carried in the Alpha channel and the color information carried in the RGB channels into the UV expansion layer of the three-dimensional model.
The rendering module, when rendering the three-dimensional model using the normal information, the ambient reflected-light information and the color information, is specifically configured to: generate the three-dimensional model with the color effect and the light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
In a possible embodiment, the rendering module, when sampling the target solid color map to import the ambient reflected-light information carried in the Alpha channel into the UV expansion layer of the three-dimensional model, is specifically configured to: sample the Alpha channel to obtain the indirect diffuse illumination and the specular illumination of the target object; process the indirect diffuse illumination and the specular illumination through the ambient occlusion map of the material; and import the processed result into the UV expansion layer of the three-dimensional model as the ambient reflected-light information.
In a possible embodiment, the rendering module, when processing the target solid color map to obtain the color information of the target object, is specifically configured to: fit the color information carried in the RGB channels with a function, and import the fitted function value into the UV expansion layer of the three-dimensional model.
In one possible embodiment, the target solid color map includes a two-dimensional map carrying the ambient reflected-light information. The rendering module, when rendering the three-dimensional model through the target solid color map and the normal map, is specifically configured to: if the physical rendering effect is disabled in the running environment of the target object, sample the target solid color map to import the color information carried in the RGB channels and the ambient reflected-light information carried in the two-dimensional map into the UV expansion layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model; and generate the three-dimensional model with the color effect and the light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
In one possible embodiment, the two-dimensional shadow image is a two-dimensional grayscale map that includes the highlight-region information of the target object.
In one possible embodiment, the two-dimensional shadow image is stored in a reflection source of the reflection probe.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores executable code that, when executed by the processor, causes the processor to implement at least the model rendering method described above.
An embodiment of the present invention further provides a system, including a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, at least one program, a set of codes, or a set of instructions is loaded and executed by the processor to implement the model rendering method described above.
An embodiment of the present invention provides a computer-readable medium having stored thereon at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the model rendering method described above.
An embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the model rendering method of the first aspect.
In the technical scheme provided by the embodiment of the invention, for a target object to be rendered, a hand-drawn solid color map, a normal map and a two-dimensional shadow image of the target object are obtained. Because light-and-shadow information such as highlight and shadow detail can be added directly to the hand-drawn solid color map, the two-dimensional shadow image carrying the light-and-shadow information of the target object can be merged directly with the hand-drawn solid color map to obtain the target solid color map of the target object. Since the target solid color map holds both the solid color information and the light-and-shadow information of the target object, it can replace the solid color map in the current PBR production workflow and can also serve the subsequent light-and-shadow computation in place of the Metallic map or Specular map in that workflow. This avoids the complex production of Metallic and Specular maps, greatly simplifies the map production process, shortens the time to obtain the maps, and improves map-production efficiency. Moreover, rendering the three-dimensional model of the target object requires only the target solid color map and the normal map output at the end, with no need for the many PBR maps of the current workflow. This avoids the slow loading and rendering, or even the failure to load, caused by an excessive number of PBR maps, greatly lowers the performance requirements that rendering places on the device, speeds up the display of the three-dimensional model, improves its rendering efficiency, and broadens the range of devices the maps can serve.
By merging the two-dimensional shadow image with the hand-drawn solid color map, the displayed picture of the three-dimensional model does not differ too much before and after the PBR effect is turned off, which effectively keeps the display of the three-dimensional model from being affected by the PBR effect.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a model rendering method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a three-dimensional model rendering process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another three-dimensional model provided in accordance with an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a model rendering apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device corresponding to the model rendering apparatus provided in the embodiment shown in fig. 5.
Detailed Description
The content of the invention will now be discussed with reference to a number of exemplary embodiments. It is to be understood that these examples are discussed only to enable those of ordinary skill in the art to better understand and thus implement the teachings of the present invention, and are not meant to imply any limitations on the scope of the invention.
As used herein, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment".
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The model rendering scheme provided by the embodiment of the invention can be executed by an electronic device, and the electronic device can be a server. The server may be a physical server including an independent host, or may also be a virtual server carried by a host cluster, or may also be a cloud server. The electronic device may also be a terminal device such as a tablet computer, PC, notebook computer, etc.
The model rendering scheme provided by the embodiment of the invention is applicable to maps of various objects, for example texture materials, solid objects and virtual scenes. Solid objects include, for example, characters and props in a game. The maps of these objects are mainly used in the rendering of the respective objects.
In fact, the conventional map-production process requires three-dimensional modeling (including base mesh, high-poly model, low-poly model and normals), baking the AO and normal maps, exporting a conventional solid color map, and adding various details. To give the three-dimensional model a light-and-shadow effect, a highlight map can additionally be produced by desaturating the conventional solid color map. However, the conventional maps involve a complex production process, perform poorly during rendering, and give a poor visual experience.
At present, to improve users' visual experience, a PBR map workflow based on PBR technology is also available. According to the type of the final output maps, there are two main categories: the Metallic-based workflow and the Specular-based workflow. Either workflow requires three-dimensional modeling, normal map production, solid color map drawing, Roughness map drawing, and Metallic map or Specular map drawing. The Metallic-based workflow generally outputs a conventional solid color map, a normal map, a Roughness map and a Metallic map, while the Specular-based workflow generally outputs a conventional solid color map, a normal map, a Roughness map and a Specular map. During rendering, shadow, highlight, reflection and Fresnel effects can be computed in real time from the physical information these maps provide, simulating plausible illumination for the three-dimensional scene under different lighting conditions. However, producing a three-dimensional scene with PBR technology still suffers from a long workflow, heavy resource consumption and low scene-production efficiency.
In addition, because the number of PBR maps is large, rendering with PBR technology places performance demands on the device. On mid- and low-end hardware, a large number of PBR maps slows down the display of the three-dimensional scene and may even prevent rendering from loading at all. If the PBR effect is disabled instead, the displayed scene looks flat, and the visual gap from the scene rendered with the PBR effect is too large. The three-dimensional scenes produced by existing PBR technology therefore cannot adapt to all devices.
In summary, improving the efficiency of map production and adapting the maps to a wide variety of devices is a technical problem in urgent need of a solution.
In order to solve at least one technical problem, a core idea of a model rendering scheme provided by an embodiment of the present invention is:
and for a target object to be rendered, acquiring a hand-drawn fixed color map, a normal map and a two-dimensional shadow image of the target object. Because the light and shadow information such as highlight information, shadow information and the like can be added in the hand-drawn solid color map, the two-dimensional light and shadow image carrying the light and shadow information of the target object can be directly merged with the hand-drawn solid color map to obtain the target solid color map of the target object. Because the target solid color map has the solid color information and the light and shadow information of the target object, the target solid color map can replace the solid color map in the current PRB manufacturing process, can also be used for realizing the calculation of subsequent light and shadow information and replacing a Metallic map or a Specular map in the current PRB manufacturing process, thereby effectively avoiding the complex manufacturing process of the Metallic map and the Specular map in the current PRB manufacturing process, greatly simplifying the map manufacturing process, reducing the map obtaining time and improving the map obtaining efficiency. Moreover, the rendering of the three-dimensional model of the target object can be realized by finally outputting the target inherent color map and the normal map, a plurality of PRB maps in the current PRB manufacturing flow are not needed, the phenomenon that the loading and rendering speed of the three-dimensional model is slow and even the rendering cannot be loaded due to the excessive number of the PRB maps is avoided, the performance requirement of the rendering process on equipment is greatly reduced, the display speed of the three-dimensional model is improved, the rendering efficiency of the three-dimensional model is improved, and the application range of the maps is expanded.
In addition, by merging the two-dimensional shadow image carrying the light-and-shadow information of the target object with the hand-drawn solid color map, the target solid color map holds both the solid color information and the light-and-shadow information of the target object, so the displayed picture of the three-dimensional model does not differ too much before and after the PBR effect is turned off, effectively keeping the display of the three-dimensional model from being affected by the PBR effect.
Having described the basic concepts of the model rendering scheme, various non-limiting embodiments of the present invention are described in detail below.
The following describes the implementation of the model rendering method with reference to the following embodiments.
Fig. 1 is a flowchart of a model rendering method according to an embodiment of the present invention. As shown in fig. 1, the model rendering method includes the following steps:
101. Acquire a hand-drawn solid color map, a normal map and a two-dimensional shadow image of a target object, where the two-dimensional shadow image carries the light-and-shadow information of the target object.
102. Merge the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object.
103. Output the target solid color map and the normal map to render the three-dimensional model of the target object.
In the model rendering method shown in fig. 1, the target solid color map can replace not only the solid color map in the current PBR production workflow but also the Metallic map or Specular map in that workflow, avoiding their complex production processes. The target solid color map and normal map that are finally output suffice to render the three-dimensional model of the target object, with no need for the many PBR maps of the current workflow, which reduces the performance requirements that rendering places on the device, speeds up the display of the three-dimensional model, and broadens the range of devices the maps can serve.
To render the three-dimensional model of the target object, a map (texture) for rendering the three-dimensional model needs to be acquired first. In practice, the target object may be a virtual scene, or a character or object within a virtual scene. A map can be understood as a two-dimensional image that mainly presents model surface information, so a device can render the appearance of the model through the map. In practice there are various types of map, each presenting a different kind of surface information; the maps involved here are described below.
To obtain the maps for rendering the three-dimensional model, in 101 the hand-drawn solid color map, the normal map, and the two-dimensional light and shadow image of the target object need to be acquired first.
The inherent (solid) color is the color a target object shows under a normal light source, so the solid color map in the conventional map production flow and in the PBR production flow usually contains only color information. Unlike those solid color maps, the hand-drawn solid color map of the target object can additionally carry light and shadow information such as highlight information and shadow information. In practical applications, the hand-drawn solid color map may be implemented, for example, as a hand-drawn diffuse reflection map. Optionally, to add the light and shadow information, a light and shadow component channel (i.e., an Alpha channel) carrying the light and shadow information is also created in the hand-drawn solid color map. In this way, a target solid color map containing both solid color information and light and shadow information can be produced from the hand-drawn solid color map, replacing the Metallic map or Specular map in the current PBR production flow and simplifying the map production process.
The normal map (Normal Mapping) is mainly used to describe the concave-convex (bump) information of the object surface. Baking a normal map amounts to taking the surface normal at each point of the target object's bumpy surface and encoding its direction in the color channels: each texel in the normal map represents the direction of a surface normal vector. With the normal map, the bump detail of the object surface can be rendered finely, giving the rendered surface a more accurate illumination direction and reflection effect.
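The color-channel encoding described above conventionally remaps each normal component from [-1, 1] to [0, 1]; a minimal sketch of decoding one texel back into a unit normal vector follows (an illustration under that assumed convention, not part of the patent text):

```python
import numpy as np

def decode_normal(texel_rgb):
    """Map an RGB texel with components in [0, 1] back to a unit
    surface normal with components in [-1, 1]."""
    n = np.asarray(texel_rgb, dtype=np.float64) * 2.0 - 1.0
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n

# The common "flat surface" texel (0.5, 0.5, 1.0) decodes to a normal
# pointing straight out of the surface, (0, 0, 1).
n = decode_normal([0.5, 0.5, 1.0])
```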
The two-dimensional light and shadow image of the target object mainly carries the light and shadow information of the target object. In practical applications, the two-dimensional light and shadow image may be a two-dimensional grayscale map produced from the target object; the grayscale map includes highlight region information of the target object. To simplify production, a hand-drawn grayscale map may be chosen as the two-dimensional light and shadow image.
Alternatively, the two-dimensional light and shadow image may be stored in the reflection source (Cube Map) of a Reflection Probe. The reflection probe mainly controls the reflection information of light in a scene, and a cube map is commonly used as the reflection source for objects with reflective properties. A cube map is a set of six independent square textures that are combined and mapped onto a single cube-shaped texture.
Further, after the hand-drawn solid color map, the normal map and the two-dimensional light and shadow image of the target object are obtained, the two-dimensional light and shadow image and the hand-drawn solid color map are merged at 102 to obtain a target solid color map of the target object, and finally the target solid color map and the normal map are output at 103 to render a three-dimensional model of the target object.
As can be seen from the above description, the light and shadow information can be directly added to the hand-drawn solid color map of the target object, and the two-dimensional light and shadow image contains the light and shadow information of the target object. Thus, the two-dimensional light and shadow image may be merged with the hand-drawn solid color map to obtain a target solid color map having light and shadow information of the target object.
Specifically, merging the two-dimensional light and shadow image with the hand-drawn solid color map to obtain the target solid color map of the target object can be implemented as follows: merge the two-dimensional light and shadow image and the hand-drawn solid color map into the color channels (RGBA channels) to obtain the target solid color map.
Specifically, on the basis of the three color component channels contained in the hand-drawn solid color map, a light and shadow component channel (i.e., an Alpha channel) carrying the light and shadow information is created. In the Alpha channel, the light and shadow information of the target object is created by drawing a two-dimensional light and shadow image (e.g., a two-dimensional grayscale map); for example, a two-dimensional grayscale map is drawn and stored into the Alpha channel of the RGBA channels. In another example, this process may also be implemented with a color Look-Up Table (LUT): after the two-dimensional grayscale map is drawn, it is stored into the color look-up table through gamut conversion, and the color look-up table is then stored into the Alpha channel of the RGBA channels.
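The channel packing described above can be sketched as follows (a hedged illustration; the array shapes, dtype, and sample values are assumptions, not from the patent):

```python
import numpy as np

def merge_to_rgba(solid_color_rgb, shadow_gray):
    """Pack a hand-drawn solid color map (H x W x 3) and a two-dimensional
    grayscale light-and-shadow image (H x W) into one RGBA texture."""
    h, w, _ = solid_color_rgb.shape
    assert shadow_gray.shape == (h, w), "maps must share one resolution"
    rgba = np.empty((h, w, 4), dtype=solid_color_rgb.dtype)
    rgba[..., :3] = solid_color_rgb   # R, G, B: inherent (solid) color
    rgba[..., 3] = shadow_gray        # Alpha: light and shadow information
    return rgba

color = np.full((2, 2, 3), 200, dtype=np.uint8)           # illustrative color map
shadow = np.array([[0, 64], [128, 255]], dtype=np.uint8)  # illustrative grayscale map
target = merge_to_rgba(color, shadow)
```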
Optionally, after the two-dimensional light and shadow image is merged into the Alpha channel of the hand-drawn solid color map, a calculation may be performed on the light and shadow information of the target object carried in the Alpha channel to obtain environment diffuse reflection information of the target object, and that information may be merged into the target solid color map. The environment diffuse reflection information may be implemented as an environment diffuse reflection map. To adapt to devices of various capabilities, the environment diffuse reflection map may be implemented as a two-dimensional map, e.g., a two-dimensional map carrying environment reflected light information.
As can be seen from the above description, in practical applications the two-dimensional grayscale map may be hand-drawn. Because a hand-drawn grayscale map carries light and shadow information expressed as grayscale values, merging it with the hand-drawn solid color map allows devices of different performance levels to render target objects with light and shadow effects, and keeps the display of the three-dimensional model from depending on the PBR effect.
In practice, the RGBA channels include three color component channels and an Alpha channel. The three color component channels are the red (R) channel, the green (G) channel, and the blue (B) channel, and mainly carry the color information of the hand-drawn solid color map.
Optionally, the color information carried by the RGB channels is converted into a two-dimensional map, so that the color information of the target object can be read by sampling that map. Alternatively, the color information carried by the RGB channels may be converted into a color look-up table, so that the color information of the target object can be obtained by querying the table. In fact, converting the color information carried by the RGB channels into a color look-up table maps the color information from the color space of the RGB channels to the color space of the look-up table. These conversion and storage schemes adapt to the performance of various devices, improve rendering efficiency, and save storage space.
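A minimal sketch of the look-up-table reading scheme, assuming a per-channel 256-entry table; a real table would encode the gamut conversion described above rather than the identity mapping used here for illustration:

```python
import numpy as np

def apply_lut(image_rgb, lut):
    """Map every 8-bit channel value through a 256-entry look-up table,
    standing in for the color-space conversion step."""
    assert lut.shape == (256,)
    return lut[image_rgb]  # NumPy fancy indexing preserves the image shape

# An identity LUT leaves the image unchanged; a production LUT would hold
# the converted color values instead.
lut = np.arange(256, dtype=np.uint8)
img = np.array([[[10, 20, 30]]], dtype=np.uint8)
out = apply_lut(img, lut)
```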
Of course, the color channels may also be referred to as a color space or as composite color channels; the designation is not limited here. Whatever the designation, what matters is the type of information carried in the RGBA channels.
The RGBA channels include a shadow component channel, i.e., an Alpha channel, for carrying a two-dimensional shadow image. Optionally, a two-dimensional gray scale map for describing light and shadow information of the target object is stored in the Alpha channel.
Specifically, the lighter the color of a pixel in the two-dimensional grayscale map, the smaller the Alpha-channel element value corresponding to that pixel and the stronger the brightness the element value describes; the darker the pixel, the larger the Alpha-channel element value and the weaker the brightness it describes. For example, assume the Alpha-channel element values range from 0 to 255. Then an element value of 0 describes the strongest brightness, and an element value of 255 describes the weakest brightness.
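Under the inverted convention just described (0 = strongest brightness, 255 = weakest), decoding an Alpha element value into a normalized brightness might look like the following sketch (an illustration of the convention, not the patent's implementation):

```python
def alpha_to_brightness(alpha_value):
    """Decode an Alpha element value in [0, 255] into a brightness in
    [0.0, 1.0], where a smaller element value means stronger brightness."""
    if not 0 <= alpha_value <= 255:
        raise ValueError("Alpha element value must be in [0, 255]")
    return 1.0 - alpha_value / 255.0
```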
Finally, after the target solid color map of the target object is obtained by merging the two-dimensional light and shadow image with the hand-drawn solid color map, the target solid color map and the normal map can be output for rendering the three-dimensional model of the target object. Because the target solid color map carries the light and shadow information of the target object, a three-dimensional model with both a color effect and a light and shadow effect can be rendered from the target solid color map and the normal map.
Optionally, after the target solid color map and the normal map are output in 103, the three-dimensional model is rendered through them. Assume that the RGBA channels of the target solid color map include an Alpha channel carrying the two-dimensional light and shadow image. How to render the three-dimensional model from the target solid color map and the normal map is described below with reference to specific examples.
Fig. 2 is a schematic diagram of a three-dimensional model rendering process according to an embodiment of the present invention. As shown in fig. 2, the process of rendering the three-dimensional model includes the following steps:
201. Calculate the normal map to obtain normal information of the target object.
Specifically, the normal map is sampled to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model. The normal vector information is typically encoded in the RGB color channels of the normal map's texels. Thus, sampling the normal map may proceed as: acquire coordinate information from the texture of the normal map, decode the normal vector information at those coordinates, and import the decoded vectors into the UV expansion layer of the three-dimensional model according to the correspondence between the normal map and the UV expansion layer.
202. Calculate the target solid color map to obtain environment reflected light information and color information of the target object.
In fact, the environment reflected light information of the target object may be computed through an environment Bidirectional Reflectance Distribution Function (environment BRDF), as in Image-Based Lighting (IBL). The environment BRDF can be expressed as the integral of the BRDF weighted by the incident radiance over the hemisphere:

envSpecular = ∫ L(l) f(l, v) (n · l) dl

where l denotes the incident direction, v denotes the viewing direction, n denotes the surface normal, L(l) denotes the incident radiance, and f(l, v) denotes the BRDF. In fact, so that a mobile terminal device can obtain the environment reflected light information of the target object in real time, the environment BRDF above may be simplified to the product of an LD term and a DFG term, denoted envSpecular = LD × DFG. The LD term is the result of integrating the incident light, and may be implemented as the two-dimensional grayscale map of the target object's light and shadow information. The DFG term may be implemented as a precomputed two-dimensional map. Acquiring the environment reflected light information of the target object is thus greatly simplified into sampling the two-dimensional grayscale map and the two-dimensional DFG map.
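The simplified product envSpecular = LD × DFG reduces to two texture fetches and a multiply per texel. The following sketch stands in for that computation with small NumPy arrays (the map contents and texel coordinates are illustrative assumptions; a real renderer would sample filtered textures):

```python
import numpy as np

def env_specular(ld_map, dfg_map, uv):
    """Split-product approximation envSpecular = LD * DFG.
    ld_map:  2D grayscale map standing in for the integrated incident light (LD term).
    dfg_map: precomputed 2D map standing in for the BRDF term (DFG term).
    uv:      (row, col) texel coordinates; real shaders would interpolate."""
    r, c = uv
    return ld_map[r, c] * dfg_map[r, c]

ld = np.array([[0.5, 1.0], [0.25, 0.0]])   # illustrative LD samples
dfg = np.array([[0.8, 0.5], [1.0, 1.0]])   # illustrative DFG samples
s = env_specular(ld, dfg, (0, 0))
```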
Based on this simplification, calculating the target solid color map to obtain the environment reflected light information and color information of the target object may be implemented as follows:
sample the target solid color map to import the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channels into the UV expansion layer of the three-dimensional model.
In an alternative embodiment, sampling the target solid color map to import the environment reflected light information carried in the Alpha channel into the UV expansion layer of the three-dimensional model may be implemented as:
sample the Alpha channel to obtain the indirect diffuse reflection illumination and the specular reflection illumination of the target object; process the indirect diffuse reflection illumination and the specular reflection illumination through the material ambient occlusion map, and import the processing result into the UV expansion layer of the three-dimensional model as the environment reflected light information.
Assume that the two-dimensional grayscale map describing the light and shadow information of the target object is stored in the Alpha channel, and that the grayscale map is stored in a Cube Map. Based on these assumptions, the Cube Map is sampled with a texture sampling function (texCUBElod), which outputs the indirect diffuse reflection illumination (Indirect Diffuse) and the specular reflection illumination (Specular) of the target object. In practice, texCUBElod is a cube-texture sampling function; if the device rendering the target object does not support it, another sampling function (texCUBEbias) may be used to the same end. Computing the environment reflected light information of the target object is therefore really a matter of sampling the two-dimensional grayscale map, which greatly simplifies the calculation.
Because sampling with the texCUBElod function is relatively expensive, mid- and low-end devices may instead obtain the environment reflected light information of the target object by sampling the two-dimensional grayscale map directly, avoiding the cost of texCUBElod and improving the efficiency of the light and shadow calculation.
Furthermore, to further optimize the environment reflected light information and improve the rendering of the light and shadow information, an Ambient Occlusion (AO) map may be used to optimize the indirect diffuse reflection illumination and the specular reflection illumination of the target object, with the processing result imported into the UV expansion layer of the three-dimensional model as the environment reflected light information.
The AO map represents the regions of the target object that are partially occluded from light, such as shadow regions, and thus also expresses light and shadow information. AO maps are typically generated by baking the three-dimensional model. In practical applications, AO comes either from the material AO provided by a material map or from the screen-space ambient occlusion (SSAO) post effect. Optionally, the AO map used to optimize the indirect diffuse reflection illumination and the specular reflection illumination may be the material AO. For example, in Unity, the material AO can be used for this optimization when computing indirect diffuse illumination with the Standard Shader.
In practice, this may be implemented as: multiply the computed indirect diffuse reflection illumination by the AO map to obtain the optimized indirect diffuse reflection illumination value, and multiply the computed specular reflection illumination by the AO map to obtain the optimized specular reflection illumination value. Because material AO places low demands on device performance, this optimization adapts to a wide range of devices, including mid- and low-performance mobile devices.
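The two multiplications described above can be sketched as follows (scalar values stand in for per-texel samples; the names and values are illustrative assumptions):

```python
def apply_ao(indirect_diffuse, specular, ao):
    """Attenuate indirect diffuse and specular illumination by an ambient
    occlusion factor, where ao = 0 means fully occluded and ao = 1 means
    unoccluded."""
    return indirect_diffuse * ao, specular * ao

# A texel half in shadow (ao = 0.5) keeps half of each illumination term.
d, s = apply_ao(0.6, 0.9, 0.5)
```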
In this way, by sampling the two-dimensional light and shadow image in the Alpha channel, the environment reflected light information of the target object can be computed, which greatly simplifies the calculation and improves the rendering efficiency of the light and shadow effect.
In another embodiment, for the RGB channels, sampling the target solid color map to import the color information carried by the RGB channels into the UV expansion layer of the three-dimensional model may be implemented as: assume the color information carried by the RGB channels is stored in a color look-up table; perform gamut conversion on the element values in the color look-up table one by one, according to the correspondence between each element in the table and each element in the UV expansion layer of the three-dimensional model, to obtain the corresponding element values in the UV expansion layer.
In another embodiment, the color information carried by the RGB channels may be imported into the UV expansion layer of the three-dimensional model as follows: simulate the color information carried in the RGB channels with a function, and import the function's simulated values into the UV expansion layer of the three-dimensional model.
For example, for a mobile terminal device, a function simulating the color look-up table may be preset; the gamut conversion that the look-up table would perform is then simulated by the function, and the simulated result is imported into the UV expansion layer of the three-dimensional model.
In this way, by simulating the color look-up table with a function, the color information can be imported into the UV expansion layer of the three-dimensional model without sampling the RGB channels, which helps reduce the bandwidth demanded by the table look-up process.
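One common way to simulate a color look-up table with an analytic function is a power (gamma-like) curve, which avoids a per-texel table fetch on bandwidth-limited devices. The exponent below is an illustrative assumption, not a value given in the patent:

```python
def simulated_lut(x, gamma=2.2):
    """Approximate a color look-up table with a power curve on a
    normalized channel value x in [0.0, 1.0]; gamma is an assumed,
    illustrative parameter."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("channel value must be normalized to [0, 1]")
    return x ** (1.0 / gamma)
```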
203. Render the three-dimensional model using the normal information, the environment reflected light information, and the color information.
Specifically, the three-dimensional model with a color effect and a light and shadow effect is generated from the UV expansion layer of the three-dimensional model. In practice, a UV map is a planar representation of the surface texture of a three-dimensional model: U refers to the horizontal axis of the two-dimensional plane, V refers to the vertical axis, and the process of creating a UV map is called UV unwrapping. After the normal vector information, indirect diffuse reflection illumination, specular reflection illumination, and color information are imported into the UV expansion layer of the three-dimensional model, the three-dimensional model with the color effect and the light and shadow effect is baked through the UV map in that layer.
Through the above steps 201 to 203, the three-dimensional model of the target object can be rendered from the target solid color map and the normal map.
In another alternative embodiment, assume the target solid color map includes a two-dimensional map carrying environment reflected light information. Rendering the three-dimensional model through the target solid color map and the normal map may then be implemented as: if the physically based rendering effect has been turned off in the running environment of the target object, sample the target solid color map to import the color information carried in the RGB channels and the environment reflected light information carried in the two-dimensional map into the UV expansion layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object into the UV expansion layer; and generate the three-dimensional model with the color effect and the light and shadow effect from the UV expansion layer. This further avoids an excessive contrast in the three-dimensional model's display before and after the PBR effect is turned off, improving the user's visual experience. In practical applications, the two-dimensional map carrying the environment reflected light information may be an environment diffuse reflection map containing at least the shadow information of the target object.
The following describes the execution of the model rendering method shown in fig. 1 and the three-dimensional model rendering process shown in fig. 2, taking a three-dimensional scene as the target object.
Assume the target object is a three-dimensional scene, and that a first device is used to produce the maps while a second device is used to render the three-dimensional model of the target object.
Based on these assumptions, the first device acquires the hand-drawn solid color map, the normal map, and the two-dimensional light and shadow image of the three-dimensional scene, merges the two-dimensional light and shadow image with the hand-drawn solid color map into the target solid color map of the three-dimensional scene, and outputs the target solid color map to the second device. The target solid color map of the three-dimensional scene contains the light and shadow information of the three-dimensional scene. After receiving the target solid color map and the normal map of the three-dimensional scene, the second device calculates the normal map to obtain the normal information of the three-dimensional scene, and calculates the target solid color map to obtain the environment reflected light information and color information of the three-dimensional scene. The second device can then render the three-dimensional model using the normal information, the environment reflected light information, and the color information, as shown in fig. 3.
Assume instead that the second device is a mid-to-low-end device. The second device may turn off the rendering effect based on the environment reflected light information. In that case, after receiving the target solid color map and the normal map of the three-dimensional scene, the second device calculates the normal map to obtain the normal information of the three-dimensional scene, and calculates the target solid color map to obtain the color information and shadow information of the three-dimensional scene. Optionally, the target solid color map includes a two-dimensional map carrying the shadow information. The second device can then render the three-dimensional model using the normal information, the color information, and the shadow information, as shown in fig. 4.
In addition, as the three-dimensional models shown in fig. 3 and fig. 4 illustrate, the hand-drawn solid color map keeps the three-dimensional model from depending on the PBR effect, so the maps can be adapted to a variety of devices, further expanding their application range.
In the model rendering method shown in fig. 1, the hand-drawn solid color map, the normal map, and the two-dimensional light and shadow image of the target object to be rendered are acquired. Because light and shadow information such as highlight information and shadow information can be added directly to the hand-drawn solid color map, the two-dimensional light and shadow image carrying the target object's light and shadow information can be merged directly with the hand-drawn solid color map to obtain the target solid color map of the target object. Because the target solid color map carries both the solid color information and the light and shadow information of the target object, it can replace the inherent color map in the current PBR production flow, and can also serve the subsequent light and shadow calculation in place of the Metallic map or Specular map, avoiding their complex production, greatly simplifying map production, shortening map acquisition time, and improving map acquisition efficiency. Moreover, outputting only the target solid color map and the normal map suffices to render the three-dimensional model of the target object, without the multiple maps of the current PBR production flow; this avoids the slow loading and rendering of the three-dimensional model, or even outright failure to load, caused by an excessive number of PBR maps, greatly reduces the performance required of the device, increases the display speed and rendering efficiency of the three-dimensional model, and expands the application range of the maps.
Through the merging of the two-dimensional light and shadow image with the hand-drawn solid color map, the display of the three-dimensional model does not differ too greatly before and after the PBR effect is turned off, which effectively keeps the display of the three-dimensional model from being affected by the PBR effect.
The model rendering apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that each of these model rendering devices may be constructed using commercially available hardware components configured by the steps taught by the present solution.
Fig. 5 is a schematic structural diagram of a model rendering apparatus according to an embodiment of the present invention. As shown in fig. 5, the model rendering apparatus includes: an acquisition module 11, a merging module 12, and an output module 13.
The acquisition module 11 is configured to acquire a hand-drawn solid color map, a normal map, and a two-dimensional light and shadow image of a target object, where the two-dimensional light and shadow image carries light and shadow information of the target object;
the merging module 12 is configured to merge the two-dimensional light and shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object;
and the output module 13 is configured to output the target solid color map and the normal map to render a three-dimensional model of the target object.
Optionally, the merging module 12 is specifically configured to: merge the two-dimensional light and shadow image and the hand-drawn solid color map into the RGBA channels to obtain the target solid color map, where the RGBA channels include a light and shadow component channel, i.e., an Alpha channel, carrying the two-dimensional light and shadow image.
Optionally, the model rendering apparatus further includes a rendering module, configured to render the three-dimensional model through the target solid color map and the normal map after the output module outputs them.
Optionally, the rendering module is specifically configured to: calculate the normal map to obtain normal information of the target object; calculate the target solid color map to obtain environment reflected light information and color information of the target object; and render the three-dimensional model using the normal information, the environment reflected light information, and the color information.
Optionally, the RGBA channels of the target solid color map include an Alpha channel carrying the two-dimensional light and shadow image.
When calculating the normal map to obtain the normal information of the target object, the rendering module is specifically configured to: sample the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model.
When calculating the target solid color map to obtain the environment reflected light information and the color information of the target object, the rendering module is specifically configured to: sample the target solid color map to import the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channels into the UV expansion layer of the three-dimensional model.
When rendering the three-dimensional model using the normal information, the environment reflected light information, and the color information, the rendering module is specifically configured to: generate the three-dimensional model with a color effect and a light and shadow effect based on the UV expansion layer of the three-dimensional model.
Optionally, when calculating the target solid color map so as to import the environment reflected light information carried in the Alpha channel into the UV expansion layer of the three-dimensional model, the rendering module is specifically configured to: sample the Alpha channel to obtain the indirect diffuse reflection illumination and the specular reflection illumination of the target object; process the indirect diffuse reflection illumination and the specular reflection illumination of the target object through the material ambient occlusion map, and import the processing result into the UV expansion layer of the three-dimensional model as the environment reflected light information.
Optionally, when calculating the target solid color map to obtain the color information of the target object, the rendering module is specifically configured to: simulate the color information carried in the RGB channels with a function, and import the function's simulated values into the UV expansion layer of the three-dimensional model.
Optionally, the target solid color map includes a two-dimensional map carrying environment reflected light information. When rendering the three-dimensional model through the target solid color map and the normal map, the rendering module is specifically configured to: if the physically based rendering effect has been turned off in the running environment of the target object, sample the target solid color map to import the color information carried in the RGB channels and the environment reflected light information carried in the two-dimensional map into the UV expansion layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object into the UV expansion layer of the three-dimensional model; and generate the three-dimensional model with a color effect and a light and shadow effect based on the UV expansion layer of the three-dimensional model.
Optionally, the two-dimensional shadow image is a two-dimensional grayscale map, and the two-dimensional grayscale map includes highlight region information of the target object.
Optionally, the two-dimensional shadow image is stored in a reflection source of a reflection probe.
The model rendering apparatus shown in fig. 5 may execute the methods provided in the foregoing embodiments; for parts not described in detail in this embodiment, reference may be made to the related descriptions of those embodiments, which are not repeated here.
In one possible design, the structure of the model rendering apparatus shown in fig. 5 may be implemented as an electronic device. As shown in fig. 6, the electronic device may include a processor 21 and a memory 22, wherein the memory 22 stores executable code which, when executed by the processor 21, causes the processor 21 to implement the model rendering method provided in the foregoing embodiments. The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
Additionally, embodiments of the present invention provide a non-transitory machine-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the model rendering method provided in the foregoing embodiments.
The above-described apparatus embodiments are merely illustrative, wherein the various modules illustrated as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The system, method and apparatus of the embodiments of the present invention can be implemented as pure software (e.g., a software program written in Java), as pure hardware (e.g., a dedicated ASIC chip or FPGA chip), or as a system combining software and hardware (e.g., a firmware system storing fixed code or a system with a general-purpose memory and a processor), as desired.
Another aspect of the present invention is a computer-readable medium having computer-readable instructions stored thereon that, when executed, implement the model rendering method of embodiments of the present invention.
While various embodiments of the present invention have been described above, the above description is intended to be illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The scope of the claimed subject matter is limited only by the attached claims.
Claims (14)
1. A method of model rendering, the method comprising:
acquiring a hand-drawn solid color map, a normal map, and a two-dimensional shadow image of a target object, wherein the two-dimensional shadow image carries shadow information of the target object;
merging the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object; and
outputting the target solid color map and the normal map to render a three-dimensional model of the target object.
2. The method of claim 1, wherein the merging the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object comprises:
merging the two-dimensional shadow image with the hand-drawn solid color map and writing them into RGBA channels to obtain the target solid color map;
wherein the RGBA channels comprise an Alpha channel serving as a shadow component for carrying the two-dimensional shadow image.
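Claim 2's merge step amounts to packing the grayscale shadow image into the fourth channel of the texture. A minimal array-based sketch, with assumed names and shapes:

```python
import numpy as np

def merge_into_rgba(hand_drawn_rgb, shadow_gray):
    """Pack the hand-drawn solid colour map into the RGB channels and the
    two-dimensional shadow image into the Alpha channel.

    hand_drawn_rgb: (H, W, 3) colour texture.
    shadow_gray:    (H, W) grayscale light/shadow image.
    """
    assert hand_drawn_rgb.shape[:2] == shadow_gray.shape, "maps must align"
    # append the shadow image as the Alpha channel -> (H, W, 4)
    return np.concatenate([hand_drawn_rgb, shadow_gray[..., None]], axis=-1)
```

One texture fetch at runtime then yields both the color and the baked shadow, which is the cost saving the method targets.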
3. The method of claim 1, further comprising, after outputting the target solid color map and the normal map:
rendering the three-dimensional model through the target solid color map and the normal map.
4. The method of claim 3, wherein the rendering the three-dimensional model through the target solid color map and the normal map comprises:
calculating the normal map to obtain normal information of the target object;
calculating the target solid color map to obtain ambient reflected light information and color information of the target object; and
rendering the three-dimensional model using the normal information, the ambient reflected light information, and the color information.
5. The method of claim 4, wherein the RGBA channels of the target solid color map comprise an Alpha channel for carrying the two-dimensional shadow image;
the calculating the normal map to obtain the normal information of the target object comprises:
sampling the normal map to import the normal vector information of the target object carried by the normal map into a UV expansion layer of the three-dimensional model;
the calculating the target solid color map to obtain the ambient reflected light information and the color information of the target object comprises:
sampling the target solid color map to import the ambient reflected light information carried in the Alpha channel and the color information carried in the RGB channels into the UV expansion layer of the three-dimensional model; and
the rendering the three-dimensional model using the normal information, the ambient reflected light information, and the color information comprises:
generating the three-dimensional model with a color effect and a light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
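Claims 4 and 5 together describe sampling both maps into the UV layer and combining them into the shaded result. A minimal sketch, under the assumption of a single fixed light direction (the patent does not prescribe one):

```python
import numpy as np

def shade_from_uv_layers(target_map, normal_map):
    """Split the solid colour map into RGB colour and Alpha light/shadow,
    decode the normals, and combine everything into the final shade.

    target_map: (H, W, 4); RGB = colour, Alpha = baked light/shadow.
    normal_map: (H, W, 3) tangent-space normals encoded in [0, 1].
    """
    color = target_map[..., :3]
    shadow = target_map[..., 3:4]              # Alpha = baked light/shadow
    normals = normal_map * 2.0 - 1.0           # decode to [-1, 1]
    n_dot_l = np.clip(normals @ np.array([0.0, 0.0, 1.0]), 0.0, 1.0)[..., None]
    return np.clip(color * shadow * n_dot_l, 0.0, 1.0)
```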
6. The method of claim 5, wherein the sampling the target solid color map to import the ambient reflected light information carried in the Alpha channel into a UV expansion layer of the three-dimensional model comprises:
sampling the Alpha channel to obtain indirect diffuse illumination and specular illumination of the target object; and
processing the indirect diffuse illumination and the specular illumination of the target object through a material ambient occlusion map, and importing the processing result, as the ambient reflected light information, into the UV expansion layer of the three-dimensional model.
7. The method of claim 4, wherein the calculating the target solid color map to obtain the color information of the target object comprises:
simulating the color information carried in the RGB channels through a function, and importing the simulated function value into a UV expansion layer of the three-dimensional model.
8. The method of claim 3, wherein the target solid color map comprises a two-dimensional map carrying ambient reflected light information;
the rendering the three-dimensional model through the target solid color map and the normal map comprises:
if physically based rendering is disabled in the runtime environment of the target object, sampling the target solid color map to import the color information carried in the RGB channels and the ambient reflected light information carried in the two-dimensional map into a UV expansion layer of the three-dimensional model;
sampling the normal map to import the normal vector information of the target object carried by the normal map into the UV expansion layer of the three-dimensional model; and
generating the three-dimensional model with a color effect and a light-and-shadow effect based on the UV expansion layer of the three-dimensional model.
9. The method of claim 1, wherein the two-dimensional shadow image is a two-dimensional grayscale map that includes highlight region information of the target object.
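If, as claim 9 states, the grayscale map encodes highlight regions as its brightest pixels, a simple threshold recovers that region mask. The threshold value and names below are assumptions for illustration:

```python
import numpy as np

def highlight_mask(gray_map, threshold=0.8):
    """Recover the highlight-region mask from the grayscale shadow map by
    thresholding its brightest pixels (1.0 = highlight, 0.0 = not)."""
    return (np.asarray(gray_map) >= threshold).astype(np.float32)
```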
10. The method of claim 1, wherein the two-dimensional shadow image is stored in a reflection source of a reflection probe.
11. An apparatus for model rendering, the apparatus comprising:
an acquisition module, configured to acquire a hand-drawn solid color map, a normal map, and a two-dimensional shadow image of a target object, wherein the two-dimensional shadow image carries shadow information of the target object;
a merging module, configured to merge the two-dimensional shadow image with the hand-drawn solid color map to obtain a target solid color map of the target object; and
an output module, configured to output the target solid color map and the normal map to render a three-dimensional model of the target object.
12. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform a model rendering method as claimed in any one of claims 1 to 10.
13. A system comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, at least one program, set of codes, or set of instructions being loaded and executed by the processor to implement the model rendering method of any of claims 1 to 10.
14. A computer readable medium having stored thereon at least one instruction, at least one program, set of codes or set of instructions, which is loaded and executed by a processor to implement a model rendering method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010888002.4A CN112116692A (en) | 2020-08-28 | 2020-08-28 | Model rendering method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010888002.4A CN112116692A (en) | 2020-08-28 | 2020-08-28 | Model rendering method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112116692A true CN112116692A (en) | 2020-12-22 |
Family
ID=73804964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010888002.4A Pending CN112116692A (en) | 2020-08-28 | 2020-08-28 | Model rendering method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112116692A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112581588A (en) * | 2020-12-23 | 2021-03-30 | 广东三维家信息科技有限公司 | Wallboard spray painting method and device and computer storage medium |
CN112634425A (en) * | 2020-12-30 | 2021-04-09 | 久瓴(江苏)数字智能科技有限公司 | Model rendering method and device, storage medium and computer equipment |
CN113034658A (en) * | 2021-03-30 | 2021-06-25 | 完美世界(北京)软件科技发展有限公司 | Method and device for generating model map |
CN113223133A (en) * | 2021-04-21 | 2021-08-06 | 深圳市腾讯网域计算机网络有限公司 | Three-dimensional model color changing method and device |
CN113240800A (en) * | 2021-05-31 | 2021-08-10 | 北京世冠金洋科技发展有限公司 | Three-dimensional temperature flow field thermodynamic diagram display method and device |
CN113362440A (en) * | 2021-06-29 | 2021-09-07 | 成都数字天空科技有限公司 | Material map obtaining method and device, electronic equipment and storage medium |
CN113793402A (en) * | 2021-08-10 | 2021-12-14 | 北京达佳互联信息技术有限公司 | Image rendering method and device, electronic equipment and storage medium |
CN113822988A (en) * | 2021-09-24 | 2021-12-21 | 中关村科学城城市大脑股份有限公司 | Three-dimensional model baking method and system based on urban brain space-time construction component |
CN114119847A (en) * | 2021-12-05 | 2022-03-01 | 北京字跳网络技术有限公司 | Graph processing method and device, computer equipment and storage medium |
CN114327718A (en) * | 2021-12-27 | 2022-04-12 | 北京百度网讯科技有限公司 | Interface display method and device, equipment and medium |
WO2023098358A1 (en) * | 2021-12-05 | 2023-06-08 | 北京字跳网络技术有限公司 | Model rendering method and apparatus, computer device, and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020135603A1 (en) * | 1999-12-03 | 2002-09-26 | Jun Nakagawa | Image generation system and information storage medium |
CN1632755A (en) * | 2003-12-23 | 2005-06-29 | 大众电脑股份有限公司 | Method for testing overfrequency usage of video card and video card system |
CN102346921A (en) * | 2011-09-19 | 2012-02-08 | 广州市凡拓数码科技有限公司 | Renderer-baking light mapping method of three-dimensional software |
CN102945558A (en) * | 2012-10-17 | 2013-02-27 | 沈阳创达技术交易市场有限公司 | Optimizing method of high model rendering |
CN104778739A (en) * | 2015-03-27 | 2015-07-15 | 浙江慧谷信息技术有限公司 | Computer-based real-time sketch rendering algorithm |
CN105321200A (en) * | 2015-07-10 | 2016-02-10 | 苏州蜗牛数字科技股份有限公司 | Offline rendering preprocessing method |
CN105574917A (en) * | 2015-12-18 | 2016-05-11 | 成都君乾信息技术有限公司 | Normal map reconstruction processing system and method for 3D models |
CN106971418A (en) * | 2017-04-28 | 2017-07-21 | 碰海科技(北京)有限公司 | Hand-held household building materials convex-concave surface texture reconstructing device |
CN107749077A (en) * | 2017-11-08 | 2018-03-02 | 米哈游科技(上海)有限公司 | A kind of cartoon style shadows and lights method, apparatus, equipment and medium |
CN108304755A (en) * | 2017-03-08 | 2018-07-20 | 腾讯科技(深圳)有限公司 | The training method and device of neural network model for image procossing |
CN108564646A (en) * | 2018-03-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Rendering intent and device, storage medium, the electronic device of object |
CN108765550A (en) * | 2018-05-09 | 2018-11-06 | 华南理工大学 | A kind of three-dimensional facial reconstruction method based on single picture |
CN108986200A (en) * | 2018-07-13 | 2018-12-11 | 北京中清龙图网络技术有限公司 | The preprocess method and system of figure rendering |
CN109961500A (en) * | 2019-03-27 | 2019-07-02 | 网易(杭州)网络有限公司 | Rendering method, device, equipment and the readable storage medium storing program for executing of Subsurface Scattering effect |
CN110570510A (en) * | 2019-09-10 | 2019-12-13 | 珠海天燕科技有限公司 | Method and device for generating material map |
CN111563951A (en) * | 2020-05-12 | 2020-08-21 | 网易(杭州)网络有限公司 | Map generation method and device, electronic equipment and storage medium |
Non-Patent Citations (6)
Title |
---|
YING XIONG: "From pixels to physics: Probabilistic color de-rendering", 2012 IEEE Conference on Computer Vision and Pattern Recognition, 26 July 2012 *
张怡; 张加万; 孙济洲; 柯永振: "Real-time ray casting algorithm based on programmable graphics acceleration hardware", Journal of System Simulation (系统仿真学报), no. 18, 20 September 2007 *
梁骁: "On the important role of texture maps in three-dimensional animation", Art Education Research (美术教育研究), no. 24, 25 December 2017 *
濮毅; 吕明明: "A brief analysis of the technical features and production pipeline of next-generation game models", Journal of Jingdezhen University (景德镇学院学报), no. 03, 15 June 2017 *
谢征: "Research on character model and material texture mapping techniques in next-generation game development", China Masters' Theses Full-text Database (中国优秀硕士论文全文数据库), 15 February 2019 *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112581588A (en) * | 2020-12-23 | 2021-03-30 | 广东三维家信息科技有限公司 | Wallboard spray painting method and device and computer storage medium |
CN112634425B (en) * | 2020-12-30 | 2022-06-28 | 久瓴(江苏)数字智能科技有限公司 | Model rendering method and device, storage medium and computer equipment |
CN112634425A (en) * | 2020-12-30 | 2021-04-09 | 久瓴(江苏)数字智能科技有限公司 | Model rendering method and device, storage medium and computer equipment |
CN113034658A (en) * | 2021-03-30 | 2021-06-25 | 完美世界(北京)软件科技发展有限公司 | Method and device for generating model map |
CN113034658B (en) * | 2021-03-30 | 2022-10-04 | 完美世界(北京)软件科技发展有限公司 | Method and device for generating model map |
CN113223133A (en) * | 2021-04-21 | 2021-08-06 | 深圳市腾讯网域计算机网络有限公司 | Three-dimensional model color changing method and device |
CN113240800A (en) * | 2021-05-31 | 2021-08-10 | 北京世冠金洋科技发展有限公司 | Three-dimensional temperature flow field thermodynamic diagram display method and device |
CN113362440B (en) * | 2021-06-29 | 2023-05-26 | 成都数字天空科技有限公司 | Material map acquisition method and device, electronic equipment and storage medium |
CN113362440A (en) * | 2021-06-29 | 2021-09-07 | 成都数字天空科技有限公司 | Material map obtaining method and device, electronic equipment and storage medium |
CN113793402A (en) * | 2021-08-10 | 2021-12-14 | 北京达佳互联信息技术有限公司 | Image rendering method and device, electronic equipment and storage medium |
CN113793402B (en) * | 2021-08-10 | 2023-12-26 | 北京达佳互联信息技术有限公司 | Image rendering method and device, electronic equipment and storage medium |
CN113822988A (en) * | 2021-09-24 | 2021-12-21 | 中关村科学城城市大脑股份有限公司 | Three-dimensional model baking method and system based on urban brain space-time construction component |
CN114119847A (en) * | 2021-12-05 | 2022-03-01 | 北京字跳网络技术有限公司 | Graph processing method and device, computer equipment and storage medium |
WO2023098358A1 (en) * | 2021-12-05 | 2023-06-08 | 北京字跳网络技术有限公司 | Model rendering method and apparatus, computer device, and storage medium |
CN114119847B (en) * | 2021-12-05 | 2023-11-07 | 北京字跳网络技术有限公司 | Graphic processing method, device, computer equipment and storage medium |
CN114327718A (en) * | 2021-12-27 | 2022-04-12 | 北京百度网讯科技有限公司 | Interface display method and device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112116692A (en) | Model rendering method, device and equipment | |
CN111009026B (en) | Object rendering method and device, storage medium and electronic device | |
US11257286B2 (en) | Method for rendering of simulating illumination and terminal | |
CN112316420B (en) | Model rendering method, device, equipment and storage medium | |
CN112215934A (en) | Rendering method and device of game model, storage medium and electronic device | |
US11711563B2 (en) | Methods and systems for graphics rendering assistance by a multi-access server | |
CN109364481B (en) | Method, device, medium and electronic equipment for real-time global illumination in game | |
CN113012273B (en) | Illumination rendering method, device, medium and equipment based on target model | |
WO2021249091A1 (en) | Image processing method and apparatus, computer storage medium, and electronic device | |
US9183654B2 (en) | Live editing and integrated control of image-based lighting of 3D models | |
CN111476851A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
US20230125255A1 (en) | Image-based lighting effect processing method and apparatus, and device, and storage medium | |
US20230368459A1 (en) | Systems and methods for rendering virtual objects using editable light-source parameter estimation | |
US20240087219A1 (en) | Method and apparatus for generating lighting image, device, and medium | |
CN112446943A (en) | Image rendering method and device and computer readable storage medium | |
KR20040024550A (en) | Painting method | |
CN113648652B (en) | Object rendering method and device, storage medium and electronic equipment | |
US11804008B2 (en) | Systems and methods of texture super sampling for low-rate shading | |
CN111784814A (en) | Virtual character skin adjusting method and device | |
US10754498B2 (en) | Hybrid image rendering system | |
CN116363288A (en) | Rendering method and device of target object, storage medium and computer equipment | |
CN115631289A (en) | Vehicle model surface generation method, system, equipment and storage medium | |
CN114820904A (en) | Illumination-supporting pseudo-indoor rendering method, apparatus, medium, and device | |
KR20180108184A (en) | Real-time rendering method for full lighting for mobile | |
CN115035231A (en) | Shadow baking method, shadow baking device, electronic apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||