CN112734896B - Environment shielding rendering method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112734896B
CN112734896B (application CN202110024576.1A)
Authority
CN
China
Prior art keywords
rendering
projection
model
value
space
Prior art date
Legal status
Active
Application number
CN202110024576.1A
Other languages
Chinese (zh)
Other versions
CN112734896A (en)
Inventor
吴黎辉
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110024576.1A priority Critical patent/CN112734896B/en
Publication of CN112734896A publication Critical patent/CN112734896A/en
Application granted granted Critical
Publication of CN112734896B publication Critical patent/CN112734896B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 15/04 Texture mapping
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure relates to the field of image processing, and in particular to an ambient occlusion rendering method, an ambient occlusion rendering device, a computer-readable storage medium, and an electronic device. The ambient occlusion rendering method comprises: determining the pixel points of a target object that lie in a projection space and marking them as sampling points; projecting the sampling points into a perspective space through perspective transformation, and determining a texture map based on the world coordinates of the sampling points; and determining an ambient occlusion rendering value for the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map. The ambient occlusion rendering method of the present disclosure can simulate an ambient occlusion effect while reducing CPU overhead.

Description

Environment shielding rendering method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular to an ambient occlusion rendering method, an ambient occlusion rendering device, a computer-readable storage medium, and an electronic device.
Background
In real-time rendering applications, the light-and-dark contrast under a character's feet is lost when the character stands indoors or inside a large shadowed area. On PCs or game consoles this is generally solved with screen-space ambient occlusion, but on mobile platforms a more lightweight solution is needed because of performance and thermal limitations.
In the prior art, two methods are generally employed to solve this problem:
One is to use a patch: a quad is created under the character's feet and drawn with a circular texture using alpha blending. However, when the ground is sloped or uneven this produces incorrect occlusion relationships; soft particles can be used to soften the hard intersection with the ground, but the resulting shadow quality is poor.
The other is to draw the shadow with a projector: the models inside the projector's view volume are first clipped out, and the clipped models are then rendered again with the projector's projection matrix to obtain the sampling UVs. However, this method requires the CPU to perform a view-volume clipping pass, which consumes CPU performance; in addition, the projected models must be rendered a second time, which causes relatively large CPU overhead when the model face count is high.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide an ambient occlusion rendering method, an ambient occlusion rendering device, a computer-readable storage medium, and an electronic device that reduce CPU overhead while simulating an ambient occlusion effect.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided an ambient occlusion rendering method, including: determining the pixel points of a target object located in a projection space and marking them as sampling points; projecting the sampling points into a perspective space through perspective transformation, and determining a texture map based on the world coordinates of the sampling points; and determining an ambient occlusion rendering value for the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to some embodiments of the disclosure, based on the foregoing solution, determining the pixel points of the target object located in the projection space and marking them as sampling points includes: acquiring a rendering model corresponding to the target object in the camera coordinate space; calculating a view volume model from the rendering model; and rendering the view volume model and performing a depth test to mark the pixel points inside the projection space of the view volume model as the sampling points.
According to some embodiments of the disclosure, based on the foregoing solution, rendering the view volume model and performing a depth test to mark the pixel points inside the projection space of the view volume model as the sampling points includes: rendering a first face of the view volume model to obtain a first rendering model, performing a depth test on the first rendering model, and calculating a first stencil value; rendering a second face of the view volume model to obtain a second rendering model, performing a depth test on the second rendering model, and calculating a second stencil value based on the first stencil value; and marking the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value.
According to some embodiments of the disclosure, based on the foregoing, the first face of the view volume model includes a face facing the camera or a face away from the camera.
According to some embodiments of the disclosure, based on the foregoing scheme, the method further comprises calculating the world coordinates of the sampling points, including: rendering the view volume model to obtain the depth values of the sampling points; and calculating the world coordinates of the sampling points from their depth values.
According to some embodiments of the disclosure, based on the foregoing scheme, projecting the sampling points into a perspective space through perspective transformation and determining a texture map based on the world coordinates of the sampling points includes: projecting the sampling points into the perspective space through perspective transformation using a projection component, so as to obtain a perspective matrix; calculating a projection map from the perspective matrix and the world coordinates of the sampling points; and sampling a shadow texture and generating the texture map from the projection map according to the shadow texture.
According to some embodiments of the disclosure, based on the foregoing scheme, determining the ambient occlusion rendering value of the sampling points includes: calculating the height difference of a sampling point from the projection position coordinates and the world coordinates of the sampling point; setting an ambient occlusion fade distance; and calculating the ambient occlusion rendering value from the height difference and the ambient occlusion fade distance.
According to a second aspect of the embodiments of the present disclosure, there is provided an ambient occlusion rendering device, including: a marking module, configured to determine the pixel points of a target object located in the projection space and mark them as sampling points; a projection module, configured to project the sampling points into a perspective space through perspective transformation and determine a texture map based on the world coordinates of the sampling points; and a drawing module, configured to determine an ambient occlusion rendering value for the sampling points and render an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the ambient occlusion rendering method of the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the ambient occlusion rendering method as in the above embodiments.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
In some embodiments of the present disclosure, the pixel points of the target object that lie in the projection space are marked as sampling points, the sampling points are then projected into the perspective space through perspective transformation and a texture map is calculated, and the ambient occlusion image of the target object is finally drawn from the texture map. On the one hand, marking the pixel points inside the projection space as sampling points avoids a view-volume clipping pass for the object, so the ambient occlusion effect is preserved while CPU computation is saved, which makes the method suitable for the performance and thermal constraints of mobile devices. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space and a small amount of ambient occlusion computation is performed, so re-rendering all of the projected models is avoided and a good ambient occlusion effect is still obtained when the model face count is large or complex.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 schematically illustrates a schematic diagram of an ambient occlusion rendering method using a patch in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of an ambient occlusion rendering method in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic view of a view volume model in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of sampling points within a view volume model projection space in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a composition diagram of an environmental occlusion rendering device in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure;
Fig. 7 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In real-time rendering, under-foot shadows for a character can be addressed with an ambient occlusion effect. FIG. 1 schematically illustrates a schematic diagram of an ambient occlusion rendering method using a patch in an exemplary embodiment of the present disclosure. As shown in FIG. 1, a patch is created under the character's feet and drawn with a circular texture using alpha blending. However, when the ground is sloped or uneven this produces incorrect occlusion relationships; soft particles can be used to soften the hard intersection with the ground, but the resulting shadow quality is poor.
The other approach is to draw the shadow with a projector: the models inside the projector's view volume are first clipped out, and the clipped models are then rendered again with the projector's projection matrix to obtain the sampling UVs. However, this method requires the CPU to perform a view-volume clipping pass, which consumes CPU performance; in addition, the projected models must be rendered a second time, so the CPU overhead becomes relatively large when the model face count is huge.
In view of the problems in the related art, the present disclosure provides an ambient occlusion rendering method that aims to offer a lighter-weight solution suitable for mobile platforms constrained by performance and heat generation. Implementation details of the technical solutions of the embodiments of the present disclosure are set forth below.
Fig. 2 schematically illustrates a flowchart of an ambient occlusion rendering method according to an exemplary embodiment of the present disclosure. As illustrated in fig. 2, the ambient occlusion rendering method includes steps S1 to S3:
S1, determining the pixel points of a target object located in a projection space and marking them as sampling points;
S2, projecting the sampling points into a perspective space through perspective transformation, and determining a texture map based on the world coordinates of the sampling points;
S3, determining an ambient occlusion rendering value for the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
In some embodiments of the present disclosure, the pixel points of the target object that lie in the projection space are marked as sampling points, the sampling points are then projected into the perspective space through perspective transformation and a texture map is calculated, and the ambient occlusion image of the target object is finally drawn from the texture map. On the one hand, marking the pixel points inside the projection space as sampling points avoids a view-volume clipping pass for the object, so the ambient occlusion effect is preserved while CPU computation is saved, which makes the method suitable for the performance and thermal constraints of mobile devices. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space and a small amount of ambient occlusion computation is performed, so re-rendering all of the projected models is avoided and a good ambient occlusion effect is still obtained when the model face count is large or complex.
Hereinafter, each step of the ambient occlusion rendering method in the present exemplary embodiment will be described in more detail with reference to the accompanying drawings and examples.
In step S1, the pixel points of the target object located in the projection space are determined and marked as sampling points.
In one embodiment of the present disclosure, determining the pixel points of the target object located in the projection space and marking them as sampling points includes:
step S11: and acquiring a rendering model corresponding to the target object in the camera coordinate space.
Specifically, the target object is an image that needs to draw an environmental occlusion (AO) effect, for example, when drawing a shadow under the foot, the target object is both feet of the character. The target object is located in the world coordinate space, the world coordinate system is the absolute coordinate system of the system, the position is fixed, and the target object cannot be presented on the screen at the moment.
And converting the target object in the world coordinate space into the camera coordinate space through a coordinate system to obtain the target object in the camera coordinate space. The camera coordinate system is a three-dimensional rectangular coordinate system established by taking the focusing center of the camera as an origin and taking the optical axis as a Z axis.
And after the conversion to the camera coordinate space, a rendering model corresponding to the target object can be obtained. The camera performs conventional rendering to obtain a rendering model, wherein the rendering model comprises: position coordinates of the target object in the camera coordinate space and depth information. Position coordinates, namely coordinates in a camera coordinate space, obtained after the target object is converted into the camera coordinate space; the depth information may be obtained from a depth buffer, which functions to distinguish the level of the color, prevent the blocked color from being displayed, and store the depth values of each pixel point in the rendering model from the image collector (camera) to each pixel point in the scene.
There are various prior art techniques for converting a world coordinate system to a camera coordinate system and obtaining a rendering model in the camera coordinate space to obtain position coordinates and depth information, which are not described in detail herein.
Step S12: and calculating a view model according to the rendering model.
In particular, the position and orientation of the camera is defined in terms of a camera coordinate space, but the field of view of the camera is not infinite, for which a view volume must be formulated, objects within the view volume (i.e. within the projection space) will be projected onto the view plane, and objects not within the view volume will be discarded.
Fig. 3 schematically illustrates a schematic view of a view volume model in an exemplary embodiment of the present disclosure. Three-dimensional graphics typically employ perspective projection, for which, as shown in fig. 3, the view model is a quadrangular frustum, 301 is a virtual camera in a camera coordinate space, that is, a projection center, 302 is a Near clipping plane (Near) of the view model, and 303 is a Far clipping plane (Far) of the view model.
It should be noted that, the view Volume model needs to be calculated according to a rendering model, and a bounding box is projected by a rendering model corresponding to the rendering target object through a Shadow Volume (Shadow Volume) technology as the view Volume, so that the view Volume can enclose the rendering model corresponding to the target object.
Step S13: and rendering the view body model and performing depth test to mark pixel points in the projection space of the view body model as the sampling points.
In one embodiment of the present disclosure, marking pixels that are within the view Volume model projection space may employ the Shadow Volume algorithm. The rendering the view volume model and performing a depth test to mark a pixel point in the projection space of the view volume model as the sampling point may include the steps of:
Step S131, rendering a first face of the view volume model to obtain a first rendering model, performing a depth test on the first rendering model, and calculating a first stencil value;
Step S132, rendering a second face of the view volume model to obtain a second rendering model, performing a depth test on the second rendering model, and calculating a second stencil value based on the first stencil value;
Step S133, marking the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value.
The depth test compares the depth of the current fragment with the depth value stored in the depth buffer for the corresponding pixel. In this method, a stencil buffer is configured to accumulate the depth test results, so that the pixel points inside the projection space of the view volume model can be marked.
In one embodiment of the present disclosure, the first face of the view volume model may be the front face of the view volume model, i.e. the face facing the camera, and the second face may be the back face of the view volume model, i.e. the face away from the camera. Step S13 may then specifically include the following steps:
Step S131, rendering the first face of the view volume model to obtain a first rendering model, performing a depth test on the first rendering model, and calculating a first stencil value.
Specifically, the front face of the view volume model is rendered first and the depth test is performed: if the depth value of the current fragment is smaller than the value in the depth buffer, the depth test passes, the stencil buffer is incremented, and the stencil value increases by 1; if the depth value of the current fragment is larger than the value in the depth buffer, the depth test fails and the stencil value is unchanged. The depth test is performed for every pixel on the rendered face, and the first stencil value of each pixel is obtained in turn; no color needs to be output.
Step S132, rendering the second face of the view volume model to obtain a second rendering model, performing a depth test on the second rendering model, and calculating a second stencil value based on the first stencil value.
Specifically, the back face of the view volume model is rendered and the depth test is performed: if the depth value of the current fragment is smaller than the value in the depth buffer, the depth test passes and the stencil value is unchanged; if the depth value of the current fragment is larger than the value in the depth buffer, the depth test fails, the stencil buffer is decremented, and the stencil value decreases by 1. The depth test is performed for every pixel on the rendered face, and the second stencil value is calculated from the first stencil value; no color needs to be output.
Step S133, marking the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value.
If the second stencil value is greater than 0, the pixel lies inside the projection space and is marked as a sampling point; at this point a color may be output in order to inspect the marked result.
Fig. 4 schematically illustrates a schematic diagram of sampling points in the projection space of a view volume model in an exemplary embodiment of the present disclosure. As shown in fig. 4, taking the under-foot ambient occlusion effect as an example, 401 is the virtual camera, 402 is the near clipping plane, 403 is the marking plane on which the sampling points inside the view volume can be marked, that is, the ground plane on which the shadow needs to be drawn, and 404 is the far clipping plane.
In one embodiment of the present disclosure, the first face of the view volume model may instead be the back face of the view volume model, i.e. the face away from the camera, and the second face may be the front face of the view volume model, i.e. the face facing the camera. Step S13 may then specifically include the following steps:
Step S131, rendering the first face of the view volume model to obtain a first rendering model, performing a depth test on the first rendering model, and calculating a first stencil value.
Specifically, the back face of the view volume model is rendered first and the depth test is performed: if the depth value of the current fragment is smaller than the value in the depth buffer, the depth test passes and the stencil value is unchanged; if the depth value of the current fragment is larger than the value in the depth buffer, the depth test fails, the stencil buffer is incremented, and the stencil value increases by 1. The depth test is performed for every pixel on the rendered face, and the first stencil value of each pixel is obtained in turn; no color needs to be output.
Step S132, rendering the second face of the view volume model to obtain a second rendering model, performing a depth test on the second rendering model, and calculating a second stencil value based on the first stencil value.
Specifically, the front face of the view volume model is rendered and the depth test is performed: if the depth value of the current fragment is smaller than the value in the depth buffer, the depth test passes and the stencil value is unchanged; if the depth value of the current fragment is larger than the value in the depth buffer, the depth test fails, the stencil buffer is decremented, and the stencil value decreases by 1. The depth test is performed for every pixel on the rendered face, and the second stencil value is calculated from the first stencil value; no color needs to be output.
Step S133, marking the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value; the marked sampling points lie on the marking plane 403 shown in fig. 4.
If the second stencil value is greater than 0, the pixel lies inside the projection space, is marked as a sampling point, and a color is output.
Marking the pixel points inside the projection space with the Shadow Volume algorithm avoids a CPU view-volume clipping pass and thus reduces CPU consumption.
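For illustration only, the two marking passes described above can be sketched as the following Unity ShaderLab render-state configuration drawn over the view volume mesh. This is a minimal sketch under assumed names (the shader path and pass setup are not from this disclosure), following the front-face-first variant: increment the stencil value when a front-face fragment passes the depth test, and decrement it when a back-face fragment fails the depth test.

    // Illustrative sketch only, not the actual implementation of this disclosure:
    // two passes that mark, in the stencil buffer, the pixels covered by the view volume.
    Shader "Hypothetical/ViewVolumeStencilMark"
    {
        SubShader
        {
            Tags { "Queue" = "Geometry+10" }

            // Pass 1: render the faces of the view volume that face the camera.
            Pass
            {
                Cull Back        // keep front faces
                ZWrite Off
                ZTest LEqual
                ColorMask 0      // no color output while marking
                Stencil
                {
                    Comp Always
                    Pass IncrSat // depth test passed: stencil value + 1
                    ZFail Keep   // depth test failed: unchanged
                }
            }

            // Pass 2: render the faces of the view volume that face away from the camera.
            Pass
            {
                Cull Front       // keep back faces
                ZWrite Off
                ZTest LEqual
                ColorMask 0
                Stencil
                {
                    Comp Always
                    Pass Keep     // depth test passed: unchanged
                    ZFail DecrSat // depth test failed: stencil value - 1
                }
            }
        }
    }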
In step S2, the sampling points are projected into a perspective space through perspective transformation, and a texture map is calculated based on world coordinates of the sampling points.
Step S20: and calculating world coordinates of the sampling points.
In one embodiment of the present disclosure, calculating world coordinates of the sampling points includes: rendering the view volume model to obtain a depth value of the sampling point; and calculating world coordinates of the sampling points according to the depth values of the sampling points.
It should be noted that only the depth values of the sampling points need to be obtained. The stencil test state is configured so that only pixels whose stencil value is greater than 0, i.e. the sampling points, are processed; the front face of the view volume model is then rendered, and the depth value of each sampling point is obtained from the depth information in step S11.
After the depth value of the sampling point is obtained, world coordinates of the sampling point are calculated based on the depth value of the sampling point. In the prior art, there are many methods for calculating world coordinates based on depth values, and the disclosure is not limited herein.
For example, the depth buffer (Depth Buffer) may be used to reconstruct world coordinates: the depth value is converted into a linear depth value under the perspective view, and the world coordinates of the sampling point are calculated from the linear depth value.
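As a minimal illustration of this reconstruction (not the actual code of this disclosure), the following Unity Cg/HLSL sketch assumes the scene depth is read from Unity's _CameraDepthTexture and that the vertex stage supplies viewRay, the camera-to-far-plane vector for the pixel; the function and variable names are assumptions for the example.

    // Illustrative sketch: rebuild the world position of a marked sampling point
    // from its sampled depth. "viewRay" is assumed to be the camera-to-far-plane
    // vector for this pixel, interpolated from the vertex stage.
    #include "UnityCG.cginc"

    sampler2D_float _CameraDepthTexture;   // scene depth written by the conventional rendering

    float3 ReconstructWorldPos(float2 screenUV, float3 viewRay)
    {
        float rawDepth    = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV);
        float linearDepth = Linear01Depth(rawDepth);          // perspective depth -> linear 0..1
        return _WorldSpaceCameraPos + viewRay * linearDepth;  // world coordinates of the sample
    }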
Step S21: and projecting the sampling points to a perspective space through perspective transformation by utilizing a projection component so as to acquire a perspective matrix.
In one embodiment of the present disclosure, the projection component may be a Unity Projector, and its worldToProjector matrix is used as the perspective matrix. The projection component projects the sampling points into a perspective space, that is, a two-dimensional coordinate space used for presenting the target object and the corresponding ambient occlusion effect.
The sampling points are projected from the camera coordinate space to the perspective space by adopting perspective transformation, and the corresponding transformation matrix is a perspective matrix.
Step S22: and calculating a projection map according to the perspective matrix and the world coordinates of the sampling points.
Specifically, the world coordinates of the sampling points are fed into the perspective matrix corresponding to worldToProjector; through this perspective transformation the world coordinates of the sampling points are converted into perspective coordinates in the corresponding perspective space, and these perspective coordinates are used as the projection map coordinates required for sampling the texture.
Step S23: and sampling shadow textures, and generating the texture map from the projection map according to the shadow textures.
Wherein the shadow texture is used to simulate an Ambient Occlusion (AO), which can be preset in advance, e.g. circular or elliptical, etc. The shadow structure of the shadow texture is sampled and then a texture map is generated from the projection map.
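For illustration, the projection map and texture map described in steps S21 to S23 can be sketched in a Unity shader as follows; _WorldToProjector and _ShadowTex are assumed property names standing in for the projector's perspective matrix and the preset shadow texture, not identifiers from this disclosure.

    // Illustrative sketch: transform the sampling point's world position with the
    // projector's world-to-projector perspective matrix to obtain the projection map
    // coordinates, then sample the preset (e.g. circular) shadow texture.
    #include "UnityCG.cginc"

    float4x4  _WorldToProjector;   // perspective matrix from the projection component
    sampler2D _ShadowTex;          // preset circular / elliptical shadow texture

    fixed4 SampleShadowMap(float3 worldPos)
    {
        float4 projUV = mul(_WorldToProjector, float4(worldPos, 1.0)); // projection map coordinates
        return tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(projUV));        // texture map value
    }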
When the texture map is calculated in step S2, the depth of the sampling points is obtained by rendering the view volume model before the subsequent calculations are carried out, so only the sampling points inside the view volume are processed; rendering the view volume model only involves the 8 vertices of the near and far clipping planes and the 12 triangles formed between the virtual camera, the near clipping plane, the far clipping plane, and the marking plane, rather than the full projected model. Consequently, even when the model face count is huge or complex, the method provided by the present disclosure still has low CPU overhead.
In step S3, an ambient occlusion rendering value is determined for the sampling points, and an ambient occlusion image corresponding to the target object is rendered according to the ambient occlusion rendering value and the texture map.
In one embodiment of the present disclosure, step S3 specifically includes the following:
Step S31: calculating the height difference of the sampling point according to the projection position coordinates and the world coordinates of the sampling point;
Specifically, the height difference is calculated from projectorPos provided by the projection component, as follows:
HeightOffset = projectorPos.y - worldPos.y (1)
where HeightOffset is the height difference, projectorPos.y is the y-axis coordinate of the projection position of the sampling point in the world coordinate space, and worldPos.y is the y-axis coordinate of the world coordinates of the sampling point in the world coordinate space.
Step S32: setting an ambient occlusion fade distance;
Specifically, the ambient occlusion fade distance represents the distance at which the ambient occlusion of the target object completely disappears, and it can be set as needed. Taking the under-foot ambient occlusion effect as an example, the fade distance indicates how far from the foot the ambient occlusion (AO) completely disappears.
Step S33: and calculating the environment shielding rendering value according to the height difference and the environment shielding gradual change distance.
Specifically, the ambient occlusion rendering value is calculated as follows:
AOfade = pow(1 - saturate(HeightOffset / fadeDistance), 2) (2)
where AOfade is the ambient occlusion rendering value, HeightOffset is the height difference, fadeDistance is the ambient occlusion fade distance, and pow and saturate are built-in functions of the Unity shader used by the projection component.
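A minimal Unity shader sketch of formulas (1) and (2) is given below, assuming _ProjectorPos and _FadeDistance are passed in as material properties; the property and function names are illustrative only.

    // Illustrative sketch of formulas (1) and (2).
    float3 _ProjectorPos;   // projection position (projectorPos) in world space
    float  _FadeDistance;   // ambient occlusion fade distance (fadeDistance)

    float ComputeAOFade(float3 worldPos)
    {
        float heightOffset = _ProjectorPos.y - worldPos.y;              // formula (1)
        return pow(1.0 - saturate(heightOffset / _FadeDistance), 2.0);  // formula (2)
    }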
Step S34: and rendering an environment shielding image corresponding to the target object according to the environment shielding rendering value and the texture map.
Specifically, using alpha blending (AlphaBlend) in the pixel shader of the projection component, a graphic is rendered on the rasterized two-dimensional screen according to the ambient occlusion rendering value and the texture map, so as to draw the ambient occlusion image of the target object; the ambient occlusion image lies in the two-dimensional screen space and is displayed on the terminal.
It should be noted that the execution order of step S31 and step S32 is not limited: step S31 may be performed first to calculate the height difference of the sampling point, or step S32 may be performed first to set the ambient occlusion fade distance.
In one embodiment of the present disclosure, the ambient occlusion rendering method includes the following steps: performing depth sampling with a SAMPLEDEPTH function to obtain the depth values of the sampling points; constructing world coordinates from the depth values to obtain the worldPos of each sampling point; multiplying by the projector's worldToProjector projection matrix to calculate the map uv, i.e. the projection map; sampling the shadow texture to obtain the shadow map; calculating AO, i.e. the ambient occlusion rendering value, from the height difference and the ambient occlusion fade distance; and drawing the image with alpha blending to render the ambient occlusion image corresponding to the target object. A consolidated sketch of this flow is given below.
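Putting the pieces sketched above together, the flow listed in this embodiment could look roughly like the following fragment shader. It assumes the stencil marking pass has already restricted rendering to the sampling points and that the pass blends with Blend SrcAlpha OneMinusSrcAlpha; all property and interpolator names are assumptions for the example, not the actual shader of this disclosure.

    // Illustrative end-to-end sketch of the flow listed above.
    #include "UnityCG.cginc"

    sampler2D_float _CameraDepthTexture;
    sampler2D       _ShadowTex;
    float4x4        _WorldToProjector;
    float3          _ProjectorPos;
    float           _FadeDistance;

    struct v2f
    {
        float4 pos      : SV_POSITION;
        float4 screenUV : TEXCOORD0;   // from ComputeScreenPos in the vertex stage
        float3 viewRay  : TEXCOORD1;   // camera-to-far-plane ray for this pixel
    };

    fixed4 frag(v2f i) : SV_Target
    {
        float2 uv       = i.screenUV.xy / i.screenUV.w;
        float  rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);        // depth sampling
        float3 worldPos = _WorldSpaceCameraPos + i.viewRay * Linear01Depth(rawDepth);

        float4 projUV    = mul(_WorldToProjector, float4(worldPos, 1.0));       // projection map uv
        fixed4 shadowCol = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(projUV));     // shadow map sample

        float heightOffset = _ProjectorPos.y - worldPos.y;
        float aoFade = pow(1.0 - saturate(heightOffset / _FadeDistance), 2.0);  // AO rendering value

        return fixed4(shadowCol.rgb, shadowCol.a * aoFade);                     // alpha-blended output
    }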
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In some embodiments of the present disclosure, the pixel points of the target object that lie in the projection space are marked as sampling points, the sampling points are then projected into the perspective space through perspective transformation and a texture map is calculated, and the ambient occlusion image of the target object is finally drawn from the texture map. On the one hand, marking the pixel points inside the projection space as sampling points avoids a view-volume clipping pass for the object, so the ambient occlusion effect is preserved while CPU computation is saved, which makes the method suitable for the performance and thermal constraints of mobile devices. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space and a small amount of ambient occlusion computation is performed, so re-rendering all of the projected models is avoided and a good ambient occlusion effect is still obtained when the model face count is large or complex.
Fig. 5 schematically illustrates a composition diagram of an ambient occlusion rendering device in an exemplary embodiment of the present disclosure. As shown in fig. 5, the ambient occlusion rendering device 500 includes a marking module 501, a projection module 502, and a drawing module 503. Wherein:
the marking module 501 is configured to determine the pixel points of the target object located in the projection space and mark them as sampling points;
the projection module 502 is configured to project the sampling points into a perspective space through perspective transformation, and determine a texture map based on the world coordinates of the sampling points;
and the drawing module 503 is configured to determine an ambient occlusion rendering value for the sampling points, and render an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to an exemplary embodiment of the present disclosure, the marking module 501 includes a rendering unit, a view volume unit, and a testing unit (not shown in the figure). The rendering unit is used to acquire a rendering model corresponding to the target object in the camera coordinate space; the view volume unit is used to calculate a view volume model from the rendering model; and the testing unit is used to render the view volume model and perform a depth test to mark the pixel points inside the projection space of the view volume model as the sampling points.
According to an exemplary embodiment of the disclosure, the testing unit is configured to render a first face of the view volume model to obtain a first rendering model, perform a depth test on the first rendering model, and calculate a first stencil value; render a second face of the view volume model to obtain a second rendering model, perform a depth test on the second rendering model, and calculate a second stencil value based on the first stencil value; and mark the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value.
According to an exemplary embodiment of the present disclosure, the projection module 502 further includes a world coordinate unit (not shown in the drawing) for calculating world coordinates of the sampling points, including: rendering the view volume model to obtain a depth value of the sampling point; and calculating world coordinates of the sampling points according to the depth values of the sampling points.
According to an exemplary embodiment of the present disclosure, the projection module 502 includes a projection unit, a projection mapping unit, and a texture mapping unit (not shown in the figure), where the projection unit is configured to project the sampling points to a perspective space through perspective transformation by using a projection component to obtain a perspective matrix; the projection mapping unit is used for calculating a projection mapping according to the perspective matrix and world coordinates of the sampling points; the texture mapping unit is used for sampling shadow textures and generating the texture mapping from the projection mapping according to the shadow textures.
According to an exemplary embodiment of the present disclosure, the drawing module 503 includes a parameter unit and a calculating unit (not shown in the figure). The parameter unit is used to calculate the height difference of a sampling point from the projection position coordinates and the world coordinates of the sampling point, and to set the ambient occlusion fade distance; the calculating unit is used to calculate the ambient occlusion rendering value from the height difference and the ambient occlusion fade distance.
According to an exemplary embodiment of the present disclosure, the first face of the view volume model comprises a face facing the camera or a face remote from the camera.
The specific details of each module in the above ambient occlusion rendering device 500 have already been described in detail in the corresponding ambient occlusion rendering method, and are therefore not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, a storage medium capable of implementing the above method is also provided. Fig. 6 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure, as shown in fig. 6, depicting a program product 600 for implementing the above-described method according to an embodiment of the present disclosure, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a cell phone. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 7 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
It should be noted that, the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 7, the computer system 700 includes a central processing unit (Central Processing Unit, CPU) 701 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a random access Memory (Random Access Memory, RAM) 703. In the RAM 703, various programs and data required for the system operation are also stored. The CPU 701, ROM702, and RAM 703 are connected to each other through a bus 704. An Input/Output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output portion 707 including a Cathode Ray Tube (CRT), a Liquid crystal display (Liquid CRYSTAL DISPLAY, LCD), and a speaker, etc.; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described below with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. When executed by a Central Processing Unit (CPU) 701, performs the various functions defined in the system of the present disclosure.
It should be noted that, the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), a flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present disclosure also provides a computer-readable medium that may be contained in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An ambient occlusion rendering method, comprising:
determining pixel points of a target object located in a projection space and marking them as sampling points; the projection space is a view volume taking a virtual camera in the camera coordinate space as the projection center;
projecting the sampling points into a perspective space through perspective transformation, and determining a texture map based on the world coordinates of the sampling points;
determining an ambient occlusion rendering value of the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map; wherein the determining the ambient occlusion rendering value of the sampling points includes:
calculating the height difference of a sampling point according to the projection position coordinates and the world coordinates of the sampling point; and
setting an ambient occlusion fade distance; the ambient occlusion fade distance is the distance from the target object at which the ambient occlusion disappears;
and calculating the ambient occlusion rendering value according to the height difference and the ambient occlusion fade distance.
2. The method of claim 1, wherein determining the pixel points of the target object located in the projection space and marking them as sampling points comprises:
acquiring a rendering model corresponding to the target object in the camera coordinate space;
calculating a view volume model according to the rendering model;
and rendering the view volume model and performing a depth test to mark the pixel points inside the projection space of the view volume model as the sampling points.
3. The method of claim 2, wherein the rendering the view volume model and performing a depth test to mark the pixel points inside the projection space of the view volume model as the sampling points comprises:
rendering a first face of the view volume model to obtain a first rendering model, performing a depth test on the first rendering model, and calculating a first stencil value;
rendering a second face of the view volume model to obtain a second rendering model, performing a depth test on the second rendering model, and calculating a second stencil value based on the first stencil value;
and marking the pixel points inside the projection space of the view volume model as the sampling points according to the second stencil value.
4. A method of ambient occlusion rendering according to claim 3, wherein the first face of the view volume model comprises a face facing the camera or a face remote from the camera.
5. The ambient occlusion rendering method of claim 2, further comprising: calculating world coordinates of the sampling points, comprising:
Rendering the view volume model to obtain a depth value of the sampling point;
And calculating world coordinates of the sampling points according to the depth values of the sampling points.
6. The ambient occlusion rendering method of claim 1, wherein projecting the sample points into perspective space through perspective transformation and determining texture maps based on world coordinates of the sample points comprises:
projecting the sampling points into a perspective space through perspective transformation using a projection component, so as to obtain a perspective matrix;
Calculating a projection map according to the perspective matrix and world coordinates of the sampling points;
And sampling shadow textures, and generating the texture map from the projection map according to the shadow textures.
7. The ambient occlusion rendering method of claim 1, wherein determining the ambient occlusion rendering value of the sampling points comprises:
AO_fade = pow(1 - saturate(HeightOffset / fadeDistance), 2);
where AO_fade is the ambient occlusion rendering value, HeightOffset is the height difference, fadeDistance is the ambient occlusion fade distance, and pow and saturate are built-in functions of the projection component.
8. An ambient occlusion rendering device, comprising:
a marking module, used for determining pixel points of a target object in a projection space and marking them as sampling points, wherein the projection space is a view volume with a virtual camera in the camera coordinate space as its projection center;
a projection module, used for projecting the sampling points to a perspective space through perspective transformation and determining a texture map based on world coordinates of the sampling points; and
a drawing module, used for determining an ambient occlusion rendering value of the sampling points and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map; wherein determining the ambient occlusion rendering value of the sampling points comprises: calculating a height difference of the sampling points according to the projection position coordinates and the world coordinates of the sampling points; setting an ambient occlusion fade distance, wherein the ambient occlusion fade distance is the distance at which the ambient occlusion of the target object disappears; and calculating the ambient occlusion rendering value according to the height difference and the ambient occlusion fade distance.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the ambient occlusion rendering method of any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors; and
a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the ambient occlusion rendering method of any one of claims 1 to 7.
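
The two-pass marking of claim 3 is not spelled out beyond the depth tests and the two stencil values, so the following is only a rough CPU-side sketch of one common way such marking can work: a pixel is treated as lying inside the view volume when the scene depth at that pixel falls between the volume's camera-facing surface and its far surface. The function and variable names (mark_sampling_points, scene_depth, front_depth, back_depth) are illustrative assumptions, not identifiers from the patent, and the sketch uses Python with NumPy instead of GPU stencil operations.

    import numpy as np

    def mark_sampling_points(scene_depth, front_depth, back_depth):
        # scene_depth: per-pixel depth of the already rendered scene.
        # front_depth: depth of the volume's camera-facing ("first") surface,
        #              set to +inf where the volume does not cover the pixel.
        # back_depth : depth of the surface facing away from the camera,
        #              set to -inf where the volume does not cover the pixel.
        stencil = np.zeros(scene_depth.shape, dtype=np.uint8)
        # First pass: scene lies at or behind the front surface.
        stencil[scene_depth >= front_depth] += 1
        # Second pass builds on the first: scene lies at or in front of the back surface.
        stencil[scene_depth <= back_depth] += 1
        # A value of 2 marks a pixel inside the projection space of the volume.
        return stencil == 2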
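
Claim 5 recovers world coordinates of a sampling point from a depth value obtained by rendering the view volume model. A standard way to do this, sketched below under the assumption of a conventional view-projection pipeline (the exact depth-range convention depends on the graphics API and is not specified by the patent), is to map the pixel's screen position and depth back through the inverse view-projection matrix. The helper name reconstruct_world_position and its parameters are illustrative only.

    import numpy as np

    def reconstruct_world_position(uv, depth, inv_view_proj):
        # uv           : (u, v) screen coordinates in [0, 1].
        # depth        : depth value of the sampling point in [0, 1].
        # inv_view_proj: 4x4 inverse of the camera's view-projection matrix.
        ndc = np.array([uv[0] * 2.0 - 1.0,     # x in [-1, 1]
                        uv[1] * 2.0 - 1.0,     # y in [-1, 1]
                        depth * 2.0 - 1.0,     # z, assuming an OpenGL-style depth range
                        1.0])
        world = inv_view_proj @ ndc
        return world[:3] / world[3]            # perspective divide back to world space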
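
Claim 6 computes a projection map from the perspective matrix of a projection component and the world coordinates of the sampling points, then samples a shadow texture to build the texture map. The sketch below shows the usual arithmetic for that step: transform the world-space point with the projector's matrix, convert to texture coordinates, and look up the shadow texture. The names projection_uv and sample_shadow_texture are assumptions made for illustration.

    import numpy as np

    def projection_uv(world_pos, projector_view_proj):
        # Transform a world-space sampling point by the projector's perspective matrix.
        p = projector_view_proj @ np.append(world_pos, 1.0)
        ndc = p[:3] / p[3]                 # perspective divide
        return ndc[:2] * 0.5 + 0.5         # NDC [-1, 1] -> texture UV [0, 1]

    def sample_shadow_texture(texture, uv):
        # Nearest-neighbour lookup into an H x W shadow texture.
        h, w = texture.shape[:2]
        x = int(np.clip(uv[0], 0.0, 1.0) * (w - 1))
        y = int(np.clip(uv[1], 0.0, 1.0) * (h - 1))
        return texture[y, x]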
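
The fade computation of claims 1 and 7 squares the complement of the clamped ratio between the height difference and the fade distance, so the occlusion falls off smoothly and vanishes once the height difference reaches the fade distance. A minimal numeric sketch follows (Python instead of shader code; the function names mirror the claim's symbols but are otherwise assumptions):

    import numpy as np

    def saturate(x):
        # Clamp to [0, 1], like the shader built-in of the same name.
        return np.clip(x, 0.0, 1.0)

    def ambient_occlusion_fade(height_offset, fade_distance):
        # AO_fade = pow(1 - saturate(HeightOffset / fadeDistance), 2)   (claim 7)
        return (1.0 - saturate(height_offset / fade_distance)) ** 2

    # Example: a point 0.3 units above its projected position, fading over 1 unit.
    print(ambient_occlusion_fade(0.3, 1.0))   # 0.49 -> partially occluded
    print(ambient_occlusion_fade(1.5, 1.0))   # 0.0  -> occlusion has disappeared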
CN202110024576.1A 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment Active CN112734896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110024576.1A CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110024576.1A CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112734896A (en) 2021-04-30
CN112734896B (en) 2024-04-26

Family

ID=75591413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110024576.1A Active CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112734896B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782613A (en) * 2022-04-29 2022-07-22 北京字跳网络技术有限公司 Image rendering method, device and equipment and storage medium
CN115063517A (en) * 2022-06-07 2022-09-16 网易(杭州)网络有限公司 Flash effect rendering method and device in game, storage medium and electronic equipment
CN116051713B (en) * 2022-08-04 2023-10-31 荣耀终端有限公司 Rendering method, electronic device, and computer-readable storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9569885B2 (en) * 2014-01-02 2017-02-14 Nvidia Corporation Technique for pre-computing ambient obscurance

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1347419A2 (en) * 2002-03-21 2003-09-24 Microsoft Corporation Graphics image rendering with radiance self-transfer for low-frequency lighting environments
CN102592305A (en) * 2011-09-06 2012-07-18 浙江大学 Self-adaptive screen space ambient occlusion method
CN103345771A (en) * 2013-06-28 2013-10-09 中国科学技术大学 Efficient image rendering method based on modeling
CN104134230A (en) * 2014-01-22 2014-11-05 腾讯科技(深圳)有限公司 Image processing method, image processing device and computer equipment
CN107452048A (en) * 2016-05-30 2017-12-08 网易(杭州)网络有限公司 The computational methods and device of global illumination
WO2017206325A1 (en) * 2016-05-30 2017-12-07 网易(杭州)网络有限公司 Calculation method and apparatus for global illumination
KR20180138458A (en) * 2017-06-21 2018-12-31 에스케이텔레콤 주식회사 Method for processing 3-d data
CN107564089A (en) * 2017-08-10 2018-01-09 腾讯科技(深圳)有限公司 Three dimensional image processing method, device, storage medium and computer equipment
CN107274476A (en) * 2017-08-16 2017-10-20 城市生活(北京)资讯有限公司 The generation method and device of a kind of echo
CN107730578A (en) * 2017-10-18 2018-02-23 广州爱九游信息技术有限公司 The rendering intent of luminous environment masking figure, the method and apparatus for generating design sketch
CN108805971A (en) * 2018-05-28 2018-11-13 中北大学 A kind of ambient light masking methods
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN109993823A (en) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 Shading Rendering method, apparatus, terminal and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
S. Herholz et al. Screen Space Spherical Harmonic Occlusion. Vision, Modeling, and Visualization. 2012, pp. 71-78. *
An Improved Screen-Space Ambient Occlusion (SSAO) Algorithm; Yang Zhicheng; Modern Computer (Professional Edition), No. 08; pp. 41-44, 65 *
Research on Screen-Space Ambient Occlusion Algorithms in Protein Surface Model Rendering; Zhao Xingwang; China Masters' Theses Full-text Database (Electronic Journal), Basic Sciences; full text *

Also Published As

Publication number Publication date
CN112734896A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112734896B (en) Environment shielding rendering method and device, storage medium and electronic equipment
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN107358643B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
CN108257204B (en) Vertex color drawing baking method and system applied to Unity engine
CN111968216A (en) Volume cloud shadow rendering method and device, electronic equipment and storage medium
US20220215618A1 (en) Image processing method and apparatus, computer storage medium, and electronic device
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
CN111882631B (en) Model rendering method, device, equipment and storage medium
CN109544674B (en) Method and device for realizing volume light
RU2422902C2 (en) Two-dimensional/three-dimensional combined display
CN111915712B (en) Illumination rendering method and device, computer readable medium and electronic equipment
US6791563B2 (en) System, method and computer program product for global rendering
CN111798554A (en) Rendering parameter determination method, device, equipment and storage medium
US8314797B1 (en) Method and apparatus for irradiance computation in 3-D computer graphics
CN112891946A (en) Game scene generation method and device, readable storage medium and electronic equipment
CN112529097A (en) Sample image generation method and device and electronic equipment
CN109544671B (en) Projection mapping method of video in three-dimensional scene based on screen space
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN116797701A (en) Diffusion effect rendering method and device, storage medium and electronic equipment
CN107452046B (en) Texture processing method, device and equipment of three-dimensional city model and readable medium
CN114832375A (en) Ambient light shielding processing method, device and equipment
CN112465692A (en) Image processing method, device, equipment and storage medium
CN117745915B (en) Model rendering method, device, equipment and storage medium
CN116474363A (en) Scene model rendering method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant