CN112541960A - Three-dimensional scene rendering method and device and electronic equipment - Google Patents

Three-dimensional scene rendering method and device and electronic equipment

Info

Publication number
CN112541960A
CN112541960A (application CN201910888032.2A)
Authority
CN
China
Prior art keywords
dimensional scene
rendering
area
data
setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910888032.2A
Other languages
Chinese (zh)
Inventor
杜靖 (Du Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority application: CN201910888032.2A
Related PCT application: PCT/CN2020/115761 (published as WO2021052392A1)
Publication: CN112541960A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Abstract

The invention relates to a three-dimensional scene rendering method and device, and an electronic device. The method comprises: acquiring an area in a three-dimensional scene that is occluded by an upper-layer graphic; setting a value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and rendering the three-dimensional scene to a screen according to the setting data of the three-dimensional scene after the setting.

Description

Three-dimensional scene rendering method and device and electronic equipment
Technical Field
The present invention relates to the field of three-dimensional scene display, and more particularly, to a three-dimensional scene rendering method, a three-dimensional scene rendering apparatus, an electronic device, and a computer-readable storage medium.
Background
The construction and display of a three-dimensional scene can generally be divided into two phases: modeling and rendering. Modeling constructs lifelike objects and scenes from elements such as points, lines, surfaces, textures and materials, and is the basis of a three-dimensional scene. Rendering computes and displays the visual picture of the model under factors such as viewpoint, lighting and motion trajectory.
Rendering a three-dimensional scene consumes considerable resources of components such as the CPU, the GPU and memory, placing heavy performance pressure on the device. How to optimize the rendering process of a three-dimensional scene and reduce the consumption of device performance has therefore become a problem to be solved.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a new technical solution for rendering a three-dimensional scene.
According to a first aspect of the present invention, there is provided a method for rendering a three-dimensional scene, comprising:
acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
setting a value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered;
and rendering the three-dimensional scene to a screen according to the setting data of the three-dimensional scene after the setting.
Optionally, the setting a value of setting data of the three-dimensional scene corresponding to the area to a target value includes:
acquiring pixels corresponding to the area;
searching, in a buffer of the three-dimensional scene for storing the setting data, the value corresponding to each pixel;
and modifying each value found to the target value.
Optionally, the setting data is depth data in a depth buffer, and the target value is such that the area cannot pass a depth test in the rendering display.
Optionally, a test pass condition of the depth test is that a depth value to be tested is smaller than the current depth value in the depth buffer, and the setting a value of setting data of the three-dimensional scene corresponding to the area to a target value includes:
setting the value of the setting data of the three-dimensional scene corresponding to the area to be equal to the depth value corresponding to the near clipping plane.
Optionally, the setting data is stencil data in a stencil buffer, and the target value is such that the area cannot pass a stencil test in the rendering display.
Optionally, the acquiring an area of the three-dimensional scene that is occluded by an upper-layer graphic includes:
obtaining the area of the three-dimensional scene occluded by the upper-layer graphic according to RGBA information of pixels in the upper-layer graphic.
Optionally, the RGBA information of the pixels in the upper-layer graphic includes the opacity of the pixels in the upper-layer graphic.
According to the second aspect of the present invention, there is also provided a rendering method of a three-dimensional scene, including:
acquiring an area in the three-dimensional scene that is occluded by an interface graphic;
setting a value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered;
and rendering the three-dimensional scene to a screen and rendering the interface graphic to the screen according to the setting data of the three-dimensional scene after the setting.
Optionally, the method further comprises:
drawing the interface graph to a rendering target buffer area;
the rendering the interface graphics to a screen includes:
drawing the interface graphics in the rendering target buffer area to a screen.
Optionally, the acquiring an area in the three-dimensional scene that is occluded by the interface graphic further includes:
obtaining RGBA information of pixels in the interface graphic according to the interface graphic in the rendering target buffer;
and obtaining the area of the three-dimensional scene occluded by the interface graphic according to the RGBA information.
According to a third aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, wherein the rendering is ray tracing rendering, the method comprising:
acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
setting a value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility in ray tracing rendering, and the target value causes the area to be omitted when the three-dimensional scene is rendered;
and rendering the three-dimensional scene to a screen according to the setting data of the three-dimensional scene after the setting.
According to a fourth aspect of the present invention, there is also provided a rendering method of a three-dimensional scene, including:
in response to a set display refresh event, loading data of a next frame of three-dimensional scene matched with a user operation;
acquiring, according to the data, an area in the next frame of three-dimensional scene that is occluded by a next frame of interface graphics;
setting a value of setting data of the next frame of three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of three-dimensional scene is rendered;
and rendering the next frame of three-dimensional scene to a screen according to the setting data of the next frame of three-dimensional scene after the setting, and rendering the next frame of interface graphics to the screen.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the display refresh event includes at least one of expiration of a refresh time determined according to a set frame rate, reception of a refresh instruction triggered externally, and restoration of a network connection with the server.
According to a fifth aspect of the present invention, there is also provided a rendering method of a three-dimensional scene, including:
in response to a set display refresh event, loading data of a next frame of interface graphics matched with a user operation;
acquiring, according to the data, an area in a next frame of three-dimensional scene that is occluded by the next frame of interface graphics;
setting a value of setting data of the next frame of three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of three-dimensional scene is rendered;
and rendering the next frame of three-dimensional scene to a screen according to the setting data of the next frame of three-dimensional scene after the setting, and rendering the next frame of interface graphics to the screen.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
According to a sixth aspect of the present invention, there is also provided a rendering method of a three-dimensional scene, including:
acquiring an invisible area of the three-dimensional scene;
setting a value of setting data of the three-dimensional scene corresponding to the invisible area to a target value, wherein the setting data is data representing content visibility, and the target value causes the invisible area to be omitted when the three-dimensional scene is rendered;
and rendering the three-dimensional scene to a screen according to the setting data of the three-dimensional scene after the setting.
Optionally, the acquiring the invisible area of the three-dimensional scene includes any one or a combination of the following:
acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
acquiring an area corresponding to an object whose imaging size in the three-dimensional scene is smaller than a preset threshold;
and acquiring a non-attention area of the current perspective subject in the three-dimensional scene.
Optionally, the acquiring an invisible area of the three-dimensional scene includes:
providing a setting entrance for setting the invisible area;
acquiring the invisible area input through the setting entrance.
Optionally, the acquiring an invisible area of the three-dimensional scene includes:
providing a setting entrance for setting a visible area in the three-dimensional scene;
acquiring the visible region input through the setting entrance;
and obtaining the invisible area according to the visible area.
According to a seventh aspect of the present invention, there is also provided an apparatus for rendering a three-dimensional scene, comprising:
the occlusion relation acquisition module is used for acquiring an area occluded by an upper layer graph in the three-dimensional scene;
a data setting module, configured to set a value of setting data of the three-dimensional scene, which corresponds to the region, as a target value, where the setting data is data representing content visibility, and the target value is such that the region is omitted when rendering the three-dimensional scene;
and the rendering module is used for rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
According to an eighth aspect of the present invention, there is also provided an electronic device, comprising the apparatus described in the seventh aspect of the present invention; alternatively, the electronic device comprises:
a memory for storing executable commands;
a processor for performing the method as described in any one of the first to sixth aspects of the invention under control of the executable commands.
According to a ninth aspect of the present invention, there is also provided a computer-readable storage medium storing executable instructions which, when executed by a processor, perform the method as described in any one of the first to sixth aspects of the present invention.
According to the rendering method in the embodiments of the present invention, the area of the three-dimensional scene that is occluded by an upper-layer graphic is first obtained, and the visibility data corresponding to the occluded area is then set so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of a terminal device displaying a three-dimensional scene.
Fig. 2 is a schematic diagram of a rendering display process.
Fig. 3 is a schematic diagram of a hardware configuration of a three-dimensional scene rendering method that can be used to implement an embodiment of the present invention.
Fig. 4 is a flowchart of a three-dimensional scene rendering method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the depth test principle.
Fig. 6 is a flowchart of another three-dimensional scene rendering method according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of an example provided by an embodiment of the present invention.
Fig. 8 is a schematic diagram of another example provided by an embodiment of the present invention.
Fig. 9 is a schematic diagram of a three-dimensional scene rendering apparatus according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< general concept >
In the rendering display of a three-dimensional scene, the three-dimensional scene may be occluded by other upper-layer graphics. For example, when the terminal device in Fig. 1 displays a three-dimensional scene, a dialog box related to an exit operation is also displayed on the upper layer of the three-dimensional scene. In addition, a character in the three-dimensional scene is displayed with overhead text reading "Player: Zhang San", and this text is also always located on the upper layer of the three-dimensional scene. Areas of the three-dimensional scene that are occluded by upper-layer graphics such as interface graphics and overhead text will never be displayed on the screen; if rendering of these areas is avoided in advance, the corresponding hardware overhead can be reduced and rendering performance improved.
In addition, rendering display includes test mechanisms for culling fragments of the three-dimensional scene. As shown in Fig. 2, a fragment of the three-dimensional scene may undergo a depth test, a stencil test and the like in sequence, and a fragment that passes the tests is finally displayed on the screen after subsequent processing. A fragment that fails a test is discarded, i.e. it is not processed further. The tests are performed based on buffer data corresponding to the three-dimensional scene, e.g. depth values in a depth buffer or stencil values in a stencil buffer. By setting and modifying the values of these data, it is possible to control whether the corresponding fragment is rendered to the screen.
Based on these two aspects, an embodiment of the present invention provides a rendering method that first obtains the area of a three-dimensional scene that is occluded by an upper-layer graphic, and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and improving rendering performance.
< hardware configuration >
Fig. 3 shows a schematic diagram of a hardware device that may be used to implement an embodiment of the invention.
As shown in Fig. 3, the hardware device is a terminal device 100, which includes a processor 110, a memory 120, a communication device 130, an interface device 140, an input device 150, and a display device 160.
The processor 110 may be a central processing unit CPU, a microprocessor MCU, or the like. The memory 120 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The communication device 130 is capable of wired or wireless communication, for example. The interface device 140 includes, for example, a USB interface, a headphone interface, and the like. The input device 150 may include, for example, a touch screen, a keyboard, and the like. The display device 160 may be used to display a three-dimensional scene, including, for example, a liquid crystal display, a touch screen display, and the like.
In this embodiment, the display device 160 further includes a graphics card including a Graphics Processor (GPU) dedicated to graphics rendering.
The three-dimensional scene rendering method provided by the embodiment of the present invention may be executed by the processor 110, or may be executed by the GPU, which is not limited thereto.
The terminal device 100 may be any device supporting three-dimensional scene display, such as a smart phone, a laptop, a desktop computer, and a tablet computer.
In an embodiment of the present invention, the memory 120 of the terminal device 100 is configured to store instructions for controlling the processor 110 or the GPU to operate so as to support implementation of a rendering method of a three-dimensional scene according to any embodiment of the present invention. Those skilled in the art can design instructions in accordance with the teachings of the present invention. How instructions control the operation of the processor is well known in the art and will not be described in detail herein.
It should be understood by those skilled in the art that although a plurality of components of the terminal device 100 are shown in Fig. 3, the terminal device 100 of the embodiment of the present invention may involve only some of them, for example, only the processor 110, the memory 120, and the display device 160.
The hardware device depicted in Fig. 3 is illustrative only and is not intended to limit the present invention, its application, or uses in any way.
< method examples >
The present embodiment provides a method for rendering a three-dimensional scene, which is implemented by, for example, the terminal device 100 in Fig. 3. As shown in Fig. 4, the method includes the following steps S1100-S1300:
Step S1100, acquiring an area in the three-dimensional scene that is occluded by the upper-layer graphic.
The three-dimensional scene in the present embodiment is a stereoscopic image displayed by a three-dimensional technique, for example, a stereoscopic picture in a game, and for example, a stereoscopic picture in a merchandise display.
In one example, the display processing of the three-dimensional scene is divided into two stages of modeling and rendering, wherein the modeling is to establish a three-dimensional model containing key parameters of an object, and the rendering is to increase the effects of visual angle, shadow, texture and the like of the established three-dimensional model and display the three-dimensional model on a screen.
In this embodiment, the upper-layer graphic is a graphic that is displayed over the three-dimensional scene and can occlude the three-dimensional scene, for example an interface graphic in the user interface, the overhead text of a character in a three-dimensional game, or a "small window" for video playback. The upper-layer graphic in this embodiment may be a two-dimensional graphic or a three-dimensional graphic, which is not limited herein.
In this embodiment, due to the occlusion of the three-dimensional scene by the upper layer graphics, the occluded region becomes invisible or almost invisible when finally displayed on the screen.
In an example, acquiring the area in the three-dimensional scene that is occluded by the upper-layer graphic is implemented as follows: obtaining the area of the three-dimensional scene occluded by the upper-layer graphic according to the RGBA information of the pixels in the upper-layer graphic.
A pixel is the smallest indivisible unit or element of an image. Each pixel in the upper-layer graphic has corresponding RGBA information, where R represents red, G represents green, B represents blue, and A represents opacity (alpha).
In this example, since the upper-layer graphic is always located above the three-dimensional scene, whether the three-dimensional scene at a pixel position is visible can be judged from the RGBA information of the pixel in the upper-layer graphic; that is, the occlusion relationship between the upper-layer graphic and the three-dimensional scene is obtained.
In a more specific example, the area of the three-dimensional scene that is occluded by the upper layer graphics can be obtained according to the opacity of the pixels in the upper layer graphics.
The opacity of a pixel reflects how transparent or visible the pixel itself is. A pixel of the upper-layer graphic that is completely transparent lets the three-dimensional scene below show through, so it does not occlude the three-dimensional scene. A pixel of the upper-layer graphic that is completely opaque does not let the three-dimensional scene below show through, so it occludes the three-dimensional scene. For a semi-transparent pixel, whether it occludes the three-dimensional scene can be judged from its specific degree of opacity.
In this example, the pixel opacity ranges from 0% to 100%. Wherein, an opacity of 0% means that the pixel is completely transparent, an opacity of 100% means that the pixel is completely opaque, and an opacity located between the two means that the pixel is semi-transparent.
In other more specific examples, the area of the three-dimensional scene that is occluded by the upper-layer graphic may also be obtained according to the RGB color information of the pixels in the upper-layer graphic. For example, a pixel of the upper-layer graphic whose RGB color is pure black may be considered to occlude the underlying three-dimensional scene. The specific judgment criterion can be determined according to the actual situation and is not limited herein.
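A minimal C++ sketch of the opacity-based judgment described above is given below; the Rgba structure, the 0-255 alpha range and the function name are illustrative assumptions rather than part of the claimed method.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative RGBA pixel of the upper-layer graphic; the layout is an assumption.
struct Rgba {
    std::uint8_t r, g, b, a;  // a: 0 = fully transparent, 255 = fully opaque
};

// Collect the screen pixels whose upper-layer alpha marks them as fully opaque,
// i.e. the area of the three-dimensional scene occluded by the upper-layer graphic.
std::vector<std::size_t> occludedPixels(const std::vector<Rgba>& upperLayer,
                                        std::uint8_t opacityThreshold = 255) {
    std::vector<std::size_t> occluded;
    for (std::size_t i = 0; i < upperLayer.size(); ++i) {
        if (upperLayer[i].a >= opacityThreshold) {
            occluded.push_back(i);  // the scene at this pixel will not be visible
        }
    }
    return occluded;
}
```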
After the area in the three-dimensional scene that is occluded by the upper-layer graphic is obtained, the following step S1200 is executed:
in step S1200, a value of setting data corresponding to an occluded area of the three-dimensional scene is set as a target value, wherein the setting data is data indicating content visibility, and the target value is such that the area is omitted when rendering the three-dimensional scene.
In the present embodiment, the setting data is data indicating the visibility of the content. For example, when the value of the setting data is the first value, the three-dimensional scene area corresponding to the setting data is visible, that is, the three-dimensional scene area corresponding to the setting data is rendered to the screen. When the value of the setting data is a second value different from the first value, the three-dimensional scene area corresponding to the setting data is invisible, that is, the three-dimensional scene area corresponding to the setting data is not rendered to the screen.
In the present embodiment, the setting data is, for example, depth data in a depth buffer, template data in a template buffer, data indicating content visibility in ray tracing rendering, or the like.
In this embodiment, the target value of the data is set so that the occluded region in the three-dimensional scene is omitted during rendering. For example, the target value is the first numerical value, and the set data corresponding to the occluded area is set as the target value, so that the occluded area is not rendered on the screen, that is, the occluded area is omitted during rendering.
In an example, the setting a value of the setting data corresponding to the occluded area of the three-dimensional scene as a target value includes: acquiring pixels corresponding to the shielded area; searching a numerical value corresponding to each pixel in a buffer area of the three-dimensional scene for storing set data; and modifying each searched numerical value into a target value.
In this example, the area of the three-dimensional scene occluded by the upper layer graphic corresponds to a plurality of pixels, and for each pixel therein, the three-dimensional scene is occluded by the upper layer graphic at that pixel position.
When the area of the three-dimensional scene that is occluded by the upper-layer graphic is obtained in step S1100, a pixel-by-pixel processing mode may be adopted: each pixel of the three-dimensional scene is traversed, the occlusion between the three-dimensional scene and the upper-layer graphic at that pixel position is determined, and the pixels at which the three-dimensional scene is occluded by the upper-layer graphic are aggregated into the occluded area. In this manner, the pixels corresponding to the occluded area are obtained at the same time.
In this example, the buffer area for storing the setting data of the three-dimensional scene is a special area for storing the setting data in the memory, and includes, for example, a depth buffer area, a stencil buffer area, and the like. By reading and writing the buffer area, the value of the set data corresponding to each pixel can be searched and modified into the target value.
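The read-and-write of the buffer described above can be sketched as follows; a buffer laid out as one floating-point value per pixel and the function name are assumptions made only for illustration.

```cpp
#include <cstddef>
#include <vector>

// Look up, in the buffer that stores the setting data (e.g. a depth or stencil
// buffer with one value per pixel), the entry of every pixel in the occluded
// area and overwrite it with the target value.
void writeTargetValue(std::vector<float>& settingBuffer,               // one entry per screen pixel
                      const std::vector<std::size_t>& occludedPixels,  // output of the occlusion step
                      float targetValue) {
    for (std::size_t pixelIndex : occludedPixels) {
        settingBuffer[pixelIndex] = targetValue;  // e.g. 0.0f for a "less than" depth test
    }
}
```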
In one example, the setting data in step S1200 is depth data, and the target value in step S1200 is such that the occluded area cannot pass the depth test in the rendering display, i.e. rendering of the occluded area is avoided based on the depth test.
In this example, the depth test is a process of selecting or rejecting display content based on depth data, and can solve the visibility problem at a specific viewing angle caused by mutual occlusion between three-dimensional models.
Referring to Fig. 5, the camera (virtual in the rendering display), the P1 plane and the P2 plane are shown from left to right, where the P1 plane and the P2 plane are each models in a three-dimensional scene. A1 and A2 are points lying in the P1 plane and the P2 plane, respectively, and both points are imaged on pixel A of the camera. The distance from point A1 to the camera is the depth Z1 of point A1, and the distance from point A2 to the camera is the depth Z2 of point A2. It is readily understood that point A1 is closer to the camera than point A2, i.e. Z1 is less than Z2, so point A1 occludes point A2 during imaging and point A2 is not displayed at pixel A.
For the case shown in Fig. 5, the specific procedure of the depth test is, for example, as follows. First, the depth data corresponding to pixel A in the depth buffer is initialized, for example to the depth value Max of the far clipping plane. The far clipping plane here is the farthest rendering position relative to the camera; similarly, the near clipping plane is the closest rendering position relative to the camera. Then, when the specific content of the three-dimensional scene is drawn, the depth value of the point to be drawn is compared with the current depth value in the depth buffer: if it is smaller, the point is drawn and the depth value in the depth buffer is updated to the depth value of that point; if not, the point is discarded and not processed. For example, when point A1 is drawn first, its depth value Z1 is compared with the initial depth value Max; since Z1 < Max, (the color of) point A1 is drawn at pixel A and the depth value in the depth buffer is updated to Z1. When point A2 is then drawn, its depth value Z2 is compared with the current depth value Z1 in the depth buffer; since Z2 > Z1, point A2 is discarded and not drawn. In this way, the depth test ensures that points of the three-dimensional scene close to the camera are drawn on the screen, while points occluded by them are not.
In this example, the depth test is applied to occlusion displays of a three-dimensional scene and an upper layer graphic. And for the area which is shielded by the upper layer graph in the three-dimensional scene, setting the numerical value of the depth data in the depth buffer area corresponding to the shielded area as a target value which enables the shielded area not to pass the depth test.
In this example, the value of the depth data in the depth buffer corresponding to the occluded region may be set based on the pixel corresponding to the occluded region. For example, the occluded region includes X, Y, Z pixels, depth data x, y, and z sequentially corresponding to X, Y, Z pixels exist in the depth buffer, and the values of x, y, and z are set as target values, that is, the value setting of the depth data corresponding to the occluded region is realized.
In this example, the target value is a value that makes the occluded region unable to pass the depth test, i.e., the target value makes each pixel corresponding to the occluded region unable to pass the depth test. For example, in the case that the passing condition of the depth test is that the depth value to be tested is smaller than the current depth value in the depth buffer, the target value is selected to be 0, that is, the initial value of the depth data in the depth buffer corresponding to each pixel in the occluded area is set to be 0. Here, the depth data is normalized, and the depth of the near cutting plane is 0 and the depth of the far cutting plane is 1. Thus, because the depth value of each point in the three-dimensional scene is greater than or equal to 0, for each pixel corresponding to the blocked area, the depth value of the point on the corresponding three-dimensional model does not satisfy the passing condition that the depth value is less than 0, and the pixel cannot be drawn on the screen, that is, the pixel cannot pass the depth test.
In this example, the target value may be determined according to a specific test passing condition, and is not limited herein.
In this example, rendering of the area of the three-dimensional scene occluded by the upper-layer graphic can be effectively avoided based on the depth test, with low performance consumption.
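For illustration, the following sketch models the depth test and the effect of the target value 0 on the occluded area; normalized depths, a "less than" pass condition and all names are assumptions, not the actual rendering pipeline.

```cpp
#include <cstddef>
#include <vector>

// One fragment of the three-dimensional scene.
struct Fragment {
    std::size_t pixel;  // index of the screen pixel the fragment falls on
    float depth;        // normalized depth: 0 = near clipping plane, 1 = far clipping plane
    float color;        // stand-in for the fragment's color
};

void depthTestedDraw(std::vector<float>& depthBuffer,   // normally initialized to 1.0 (far plane)
                     std::vector<float>& colorBuffer,
                     const std::vector<Fragment>& fragments) {
    for (const Fragment& f : fragments) {
        if (f.depth < depthBuffer[f.pixel]) {  // pass condition: closer than the stored depth
            depthBuffer[f.pixel] = f.depth;    // update the depth, as with point A1 in Fig. 5
            colorBuffer[f.pixel] = f.color;
        }
        // Otherwise the fragment is discarded, as point A2 is in Fig. 5.
        // Pre-writing 0.0 (the near-plane depth) into the entries of the occluded
        // area means no scene fragment there can satisfy depth < 0.0, so that area
        // is skipped during rendering.
    }
}
```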
In one example, the setting data in step S1200 is the stencil data in the stencil buffer, and the target value in step S1200 is such that the occluded area cannot pass the stencil test in the rendering display, i.e., rendering of the occluded area is avoided based on the stencil test.
In this example, the stencil test is a process of accepting or rejecting display content based on stencil data. Its implementation is similar to the depth test: the stencil value of a point on the three-dimensional model is compared with the current stencil value in the stencil buffer, and points satisfying the test pass condition are drawn to the corresponding pixels. The stencil test can be used, for example, to display the outline of an object in the three-dimensional scene: the stencil value corresponding to the outline area of the object is set to 1 in advance and the stencil value corresponding to the non-outline area is set to another value; the pass condition of the stencil test is that the stencil value to be tested equals the current stencil value in the stencil buffer, and the initial stencil value in the stencil buffer is set to 1. It is readily understood that only the outline area then passes the stencil test and is drawn to the screen.
In this example, the stencil test is applied to the occlusion display of the three-dimensional scene and the upper-layer graphic. For the area of the three-dimensional scene occluded by the upper-layer graphic, the value of the stencil data in the stencil buffer corresponding to the occluded area is set to a target value such that the occluded area cannot pass the stencil test. For example, when the stencil values of the three-dimensional scene are all greater than 0 and the stencil test pass condition is that the stencil value to be tested equals the current stencil value in the stencil buffer, the value of the stencil data in the stencil buffer corresponding to the occluded area is set to -1. The occluded area thus cannot pass the stencil test and is not rendered to the screen.
In this example, rendering of occluded regions can be effectively avoided based on the stencil test, and performance consumption is low.
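Analogously, the stencil-test variant can be sketched as follows, assuming scene stencil values greater than 0, an equality pass condition and the target value -1 from the example above; the structure and names are illustrative assumptions only.

```cpp
#include <cstddef>
#include <vector>

// One fragment of the three-dimensional scene together with its stencil reference value.
struct StencilFragment {
    std::size_t pixel;
    int stencilRef;   // stencil value of the scene content, assumed > 0 as in the example
    float color;      // stand-in for the fragment's color
};

void stencilTestedDraw(const std::vector<int>& stencilBuffer,
                       std::vector<float>& colorBuffer,
                       const std::vector<StencilFragment>& fragments) {
    for (const StencilFragment& f : fragments) {
        if (f.stencilRef == stencilBuffer[f.pixel]) {  // pass condition: equality
            colorBuffer[f.pixel] = f.color;
        }
        // Buffer entries pre-set to the target value -1 can never equal a scene
        // stencil value greater than 0, so the occluded area is never drawn.
    }
}
```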
It should be noted that the setting data may be set as the target value based on any one of the depth test and the stencil test, or may be set as the target value based on both tests, which is not limited herein.
After setting the setting data as the target value, the following step S1300 is executed:
and step 1300, rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
In this embodiment, the three-dimensional scene is rendered according to the set data of the set three-dimensional scene. It is easy to understand that the area in the three-dimensional scene, which is occluded by the upper graphics, is omitted during rendering, and the area which is not occluded by the upper graphics is rendered to the screen in a normal manner. For the specific rendering situation, reference may be made to the depth test and the template test process involved in the description of the target value, and details are not repeated here.
The rendering method in this embodiment first obtains the area of the three-dimensional scene that is occluded by the upper-layer graphic, and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
The embodiment also provides another rendering method for a three-dimensional scene, which is suitable for a situation where an interface graph and the three-dimensional scene are displayed together, and can be implemented by the terminal device 100 shown in fig. 3 as well. As shown in fig. 6, the method includes the following steps S2100-S2300:
Step S2100, acquiring an area in the three-dimensional scene that is occluded by an interface graphic.
In step S2200, a value of setting data corresponding to an occluded region of the three-dimensional scene is set as a target value, where the setting data is data indicating content visibility, and the target value is such that the region is omitted when rendering the three-dimensional scene.
Step S2300, rendering the three-dimensional scene to a screen and rendering the interface graphics to the screen according to the set data of the set three-dimensional scene.
Specific embodiments of steps S2100-S2300 are described above with reference to steps S1100-S1300.
The interface graph in the embodiment is a graph of an interactive interface in human-computer interaction. The interface graphic is displayed over the three-dimensional scene.
In an example, the three-dimensional scene rendering method provided by this embodiment further includes: drawing the interface graphic to a rendering target buffer. In this case, the step of rendering the interface graphic to the screen in step S2300 includes: drawing the interface graphic in the rendering target buffer to the screen.
In this example, the render-target buffer is a memory area in the render display that may be used to store rendered graphics. The interface graph is firstly drawn to the rendering target buffer area, and then the interface graph in the rendering target buffer area is drawn to the screen, so that the interface graph and the three-dimensional scene can be synchronously displayed on the screen.
In other embodiments, the interface graphics may also be directly drawn on the screen, which is not limited herein.
In this example, step S2100 of acquiring the area in the three-dimensional scene that is occluded by the interface graphic includes: obtaining RGBA information of pixels in the interface graphic according to the interface graphic in the rendering target buffer; and obtaining the area of the three-dimensional scene occluded by the interface graphic according to the RGBA information. That is, in this example, the RGBA information of the interface graphic is acquired based on the interface graphic drawn in the rendering target buffer. In this way, the RGBA information of the interface graphic as finally displayed can be conveniently acquired.
Fig. 7 is used as an example to explain an implementation of the three-dimensional scene rendering method of this embodiment. In the example shown in Fig. 7, an interface graphic 7-2 related to an exit operation is displayed over a three-dimensional scene 7-1, and the final display effect is shown as 7-5 in Fig. 7 (the pictures in Fig. 7 only demonstrate the rendering process and do not mean that the related graphics have already been displayed on the screen). When rendering the three-dimensional scene, the interface graphic 7-2 is first drawn to the rendering target buffer. The interface graphic 7-2 is a rectangle showing a text prompt asking whether to exit and selection buttons for "yes" and "no", and the opacity of all pixels of the interface graphic 7-2 is 100%, i.e. none of its pixels are transparent, so they occlude the three-dimensional scene below. Accordingly, the area 7-3 of the three-dimensional scene that is occluded by the interface graphic 7-2 is obtained. The values of the setting data corresponding to area 7-3 of the three-dimensional scene are then modified; for example, the depth values in the depth buffer corresponding to area 7-3 of the three-dimensional scene 7-1 are modified to 0, so that no depth value of a point on the corresponding three-dimensional model satisfies the pass condition of being less than 0, and none of those points can be drawn to the screen. The three-dimensional scene is then rendered according to the setting data of the three-dimensional scene after the setting; the rendering result is shown as 7-4, where it can be seen that the area 7-3 occluded by the interface graphic 7-2 is not rendered and appears as a black area in 7-4. While the three-dimensional scene is rendered, the interface graphic 7-2 in the rendering target buffer is drawn to the screen, so that the three-dimensional scene 7-1 and the interface graphic 7-2 are displayed together; the final rendering result is shown as 7-5.
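The ordering of the Fig. 7 walk-through can be summarized in the following sketch; the step comments and helper names refer to the earlier sketches and are assumptions, not the actual engine code.

```cpp
// Possible ordering of the Fig. 7 flow, expressed as comments inside one frame
// function; helper names (occludedPixels, writeTargetValue) are the sketches above.
void renderFrameWithInterface() {
    // 1. Draw the interface graphic 7-2 into the off-screen rendering target buffer.
    // 2. Read its RGBA data back and derive the occluded area 7-3
    //    (cf. occludedPixels(), using the 100% opacity of its pixels).
    // 3. Write the target value, e.g. the near-plane depth 0.0f, into the depth
    //    buffer entries of area 7-3 (cf. writeTargetValue()).
    // 4. Render the three-dimensional scene 7-1; fragments inside area 7-3 fail
    //    the depth test and are skipped, giving the intermediate result 7-4.
    // 5. Draw the interface graphic from the rendering target buffer to the screen,
    //    producing the final picture 7-5.
}
```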
The present embodiment also provides another method for rendering a three-dimensional scene, where the rendering in the method is ray tracing rendering, and the method may also be implemented by the terminal device 100 shown in fig. 3, and includes the following steps S3100 to S3300:
step S3100, acquiring a region shielded by an upper layer graph in the three-dimensional scene.
The upper layer graphics may be, for example, interface graphics, or overhead characters of characters in a game application, and is not limited herein.
In step S3200, a value of setting data corresponding to a blocked region of the three-dimensional scene is set as a target value, where the setting data is data indicating content visibility in ray tracing rendering, and the target value is such that the region is omitted when rendering the three-dimensional scene.
Step S3300, rendering the three-dimensional scene to a screen according to the setting data of the three-dimensional scene after the setting.
Specific embodiments of steps S3100-S3300 can be found above in the description of steps S1100-S1300.
In this embodiment, the ray tracing rendering is a rendering method of a three-dimensional scene, and a more accurate three-dimensional display effect can be achieved by obtaining the color of the three-dimensional model displayed by each pixel according to the light from the observer.
In this embodiment, the setting data is data representing content visibility in ray tracing rendering, and whether to perform ray tracing for a pixel may be determined according to the setting data. For example, in one example, the setting data may take the value 0 or 1: setting the data corresponding to a pixel to 1 indicates that ray tracing is performed for that pixel, and setting it to 0 indicates that ray tracing is not performed for that pixel. It is readily understood that in this example the target value for the area of the three-dimensional scene occluded by the upper-layer graphic is 0, while the setting data of the unoccluded area is 1. Rendering of the occluded area in ray tracing rendering is thereby avoided, improving the rendering performance of ray tracing rendering.
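A possible sketch of such a per-pixel visibility flag in ray tracing rendering is shown below; the traceRay stand-in and the 0/1 encoding follow the example above and are assumptions for illustration.

```cpp
#include <cstddef>
#include <vector>

struct Color { float r = 0.0f, g = 0.0f, b = 0.0f; };

// Stand-in for the actual per-pixel ray-tracing routine.
Color traceRay(std::size_t /*pixelIndex*/) { return Color{}; }

void rayTraceScene(const std::vector<int>& visibilityData,  // the setting data: 0 or 1 per pixel
                   std::vector<Color>& frameBuffer) {
    for (std::size_t i = 0; i < visibilityData.size(); ++i) {
        if (visibilityData[i] == 1) {
            frameBuffer[i] = traceRay(i);  // only visible pixels cost a ray
        }
        // Pixels whose setting data was set to the target value 0 (the occluded
        // area) are simply skipped, which is where the rendering saving comes from.
    }
}
```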
The present embodiment further provides another rendering method for a three-dimensional scene, for example, implemented by the terminal device 100 shown in fig. 3, where the method includes the following steps S4100-S4400:
step S4100, in response to the set display refresh event, loading data of the next frame of three-dimensional scene matched with the user operation.
In an example of applying the rendering method of the present embodiment to rendering of a three-dimensional game scene, the set display refresh event may include at least one of expiration of a refresh time determined according to a set frame rate, reception of a refresh instruction triggered externally, and restoration of a network connection with a server, for example.
For example, if the frame rate is set to 60 frames/second, that is, if 60 frames are refreshed per second, when the refresh time determined according to the frame rate is up, the data of the next frame of three-dimensional scene matched with the user operation is loaded for rendering, so as to implement the refresh display.
For another example, when the user triggers a refresh command through a refresh button set on the game interface, the terminal device 100 also loads the data of the next frame of three-dimensional scene matching with the user operation in response to the refresh command to perform rendering, so as to implement refresh display.
For another example, after the terminal device 100 is disconnected from the server due to a network abnormality, the terminal device 100 acquires the latest basic data from the server after restoring the network connection with the server, and at this time, renders the data describing the next frame of the three-dimensional scene matching the user operation based on the basic data to implement the refresh display.
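As a rough illustration of the frame-rate-driven refresh event, the following sketch loads and renders the next frame whenever the refresh time determined by the set frame rate expires; the helper functions are hypothetical placeholders for steps S4100-S4400.

```cpp
#include <chrono>
#include <thread>

// Stand-ins for loading the next frame's data matched with the user operation
// and for the rendering of steps S4200 to S4400.
void loadNextFrameData() {}
void renderNextFrame()   {}

// Refresh driven by a set frame rate: when the refresh time determined by the
// frame rate expires, the next frame is loaded and rendered.
void refreshLoop(double frameRate) {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> interval(1.0 / frameRate);  // e.g. 1/60 s for 60 frames/second
    auto nextRefresh = clock::now();
    while (true) {
        loadNextFrameData();
        renderNextFrame();
        nextRefresh += std::chrono::duration_cast<clock::duration>(interval);
        std::this_thread::sleep_until(nextRefresh);  // wait for the next display refresh event
    }
}
```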
In an example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
In this embodiment, a specific user operation may cause a change in the next three-dimensional scene and/or the next interface graphics. For example, as shown in fig. 8, in 8-1, only a three-dimensional scene exists in the currently displayed content, and at this time, the user performs a tap key operation, which can invoke the exit interface shown in 8-2, and the exit interface is displayed as the next frame of interface graphics, that is, the next frame of interface graphics changes from the existing interface graphics. For another example, if the user operates a game character to move in a three-dimensional scene, the next three-dimensional scene will be related to the movement position and movement direction of the game character, i.e., will match the user operation.
Step S4200, according to the loaded data, obtaining an area in the next frame of three-dimensional scene, which is shielded by the next frame of interface graphics.
In the event that the user action would change the interface graphics, the next frame of interface graphics would also match the user action.
In step S4300, a value of setting data corresponding to the region of the next three-dimensional scene is set as a target value, where the setting data is data indicating content visibility, and the target value is such that the region is omitted when rendering the next three-dimensional scene.
Step S4400, rendering the next three-dimensional scene to the screen according to the set data of the set next three-dimensional scene, and rendering the next interface image to the screen.
Through step S4400, the refreshing of the display content of the next frame is completed.
Steps S4200-S4400 can be as described above for steps S1100-S1300.
The present embodiment further provides another method for rendering a three-dimensional scene, for example, the method is implemented by the terminal device 100 shown in fig. 3, and the method includes the following steps S5100 to S5400:
step S5100, in response to the set display refresh event, obtains the next frame of interface graphics matched with the user operation.
In an example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
In this embodiment, a specific user operation corresponds to a specific interface graphic. For example, as shown in FIG. 8, the exit interface shown in FIG. 8-2 corresponds to a user tapping a button operation in FIG. 8-1.
In this embodiment, the terminal device, in response to a user operation, acquires interface graphic information corresponding to the user operation according to a preset corresponding relationship, for example, acquires parameters such as content, position, size, opacity, and the like of an interface graphic.
Step S5200, acquiring an area in the next frame of three-dimensional scene that is occluded by the next frame of interface graphics.
In the case where the user operation would change the three-dimensional scene, the next frame of the three-dimensional scene would also match the user operation.
In step S5300, a value of setting data corresponding to the area of the next frame of three-dimensional scene is set as a target value, where the setting data is data indicating content visibility, and the target value is such that the area is omitted when rendering of the next frame of three-dimensional scene is performed.
And step S5400, rendering the next frame of three-dimensional scene to a screen according to the set data of the next frame of three-dimensional scene after setting, and rendering the next frame of interface graphics to the screen.
Continuing with the example shown in Fig. 8, suppose the user triggers an exit operation by touching a key before the display refresh. When the display refresh is performed, the terminal device 100, according to the detected key operation of the user, acquires the interface graphic corresponding to that operation as the next frame of interface graphics, for example the exit interface shown in 8-2. When rendering the next frame, it acquires the area in the next frame of three-dimensional scene that is occluded by the exit interface and sets the value of the setting data of the next frame of three-dimensional scene corresponding to that area to the target value, so that the area is omitted when the next frame of three-dimensional scene is rendered. In this example, after the rendering display of the next frame of three-dimensional scene and the next frame of interface graphics is completed according to step S5400, the display picture 8-3 is obtained.
The above steps S5200 to S5400 can be referred to as the above description of steps S1100 to S1300.
The present embodiment further provides another rendering method for a three-dimensional scene, for example implemented by the terminal device 100 shown in Fig. 3, and the method includes the following steps S6100 to S6300:
in step S6100, an invisible area of the three-dimensional scene is acquired.
In one example, acquiring the invisible area of the three-dimensional scene includes any one or a combination of the following: acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic; acquiring an area corresponding to an object whose imaging size in the three-dimensional scene is smaller than a preset threshold; and acquiring a non-attention area of the current perspective subject in the three-dimensional scene.
In this example, the area in the three-dimensional scene that is occluded by the upper layer graphics is obtained, which can be referred to the above description of step S1100.
In this example, the imaged size is the size of the image on the screen of an object in the three-dimensional scene. The imaging size of the object can be obtained according to the distance from the object to the screen, the size of the object itself and other information. The preset threshold is a preset imaging size. When the imaging size of the object is smaller than the preset threshold, the size of the image corresponding to the object is significantly smaller than the size of the image corresponding to the entire three-dimensional scene, and the region corresponding to the object may be considered as an invisible region. During three-dimensional rendering, rendering calculation is not needed for the area corresponding to the object, and processing modes such as direct background display and the like can be adopted.
In this example, the current perspective subject is, for example, a person viewing a three-dimensional scene from a main perspective. For perspective subjects with different characters, identities, etc., the regions of interest (or regions of interest) when viewing a three-dimensional scene are different. For example, for a female person, details of a flower, a butterfly, and the like in a scene are generally interested in the female person, but a male person generally does not pay special attention to the details, so that a region corresponding to the flower, the butterfly, and the like can be considered as a non-attention region of the male person. During three-dimensional rendering, rendering calculation is not needed in a non-attention area of the current visual angle main body, and processing modes such as direct background display and the like can be adopted.
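The imaging-size criterion mentioned above can be illustrated with the following sketch, which estimates an object's on-screen size from its world-space size and its distance to the camera and compares it with the preset threshold; the pinhole-projection model and all parameter names are simplifying assumptions.

```cpp
// Rough check of the "imaging size smaller than a preset threshold" criterion.
bool isBelowImagingThreshold(float objectSize,         // world-space extent of the object
                             float distanceToCamera,   // distance from the object to the camera
                             float focalLength,        // focal length of the virtual camera
                             float pixelsPerUnit,      // screen-space scale factor
                             float thresholdPixels) {  // the preset imaging-size threshold
    const float projectedSize = objectSize * focalLength / distanceToCamera * pixelsPerUnit;
    return projectedSize < thresholdPixels;  // true: treat the corresponding area as invisible
}
```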
In one example, acquiring an invisible region of a three-dimensional scene includes: providing a setting entrance for setting the invisible area; an invisible area input by the setting entry is acquired.
In this example, the user can make custom settings for the invisible area. For example, the user may cancel the particle effect in the frame, i.e., set the region corresponding to the particle in the three-dimensional scene as the invisible region. The invisible area can be directly obtained according to data input by the user through the setting portal.
In one example, acquiring an invisible region of a three-dimensional scene includes: providing a setting entrance for setting a visible area in a three-dimensional scene; acquiring a visible area input through a setting entrance; from the visible region, an invisible region is obtained.
In this example, the user may perform a custom setting on the visible region, and the portions outside the visible region are all invisible regions, that is, the user performs a reverse setting on the invisible regions.
In step S6200, a value of setting data corresponding to the invisible area of the three-dimensional scene is set as a target value, wherein the setting data is data representing content visibility, and the target value is such that the invisible area is omitted when rendering the three-dimensional scene.
And S6300, rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
Steps S6200-S6300 described above can be referred to the description of steps S1200-S1300 above.
< apparatus embodiment >
The present embodiment provides a three-dimensional scene rendering apparatus, for example, the three-dimensional scene rendering apparatus 700 shown in fig. 9, which includes an occlusion relation obtaining module 710, a data setting module 720, and a rendering module 730.
The occlusion relation obtaining module 710 may be configured to obtain an area occluded by an upper layer graph in the three-dimensional scene. The upper layer graphics are interface graphics such as human-computer interaction interface graphics, for example.
The data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the region to a target value, where the setting data is data representing content visibility, and the target value causes the region to be omitted when rendering the three-dimensional scene.
The rendering module 730 may be configured to render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after the setting.
In one example, when setting the value of the setting data of the three-dimensional scene corresponding to the region to the target value, the data setting module 720 may be configured to: acquire the pixels corresponding to the region; look up, in a buffer of the three-dimensional scene used for storing the setting data, the value corresponding to each pixel; and modify each value found to the target value.
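A minimal sketch of this per-pixel update, assuming the region is given as a list of pixel coordinates and the setting data is stored in a row-major buffer with one value per screen pixel (the names Pixel, settingBuffer and setRegionToTarget are illustrative):

    #include <cstddef>
    #include <vector>

    struct Pixel { int x; int y; };

    // Write the target value into the setting-data buffer for every pixel of the
    // occluded region. The buffer is assumed to be a row-major array with one
    // value per screen pixel; names and layout are assumptions for illustration.
    void setRegionToTarget(std::vector<float>& settingBuffer, int screenWidth,
                           const std::vector<Pixel>& regionPixels, float targetValue) {
        for (const Pixel& p : regionPixels) {
            size_t index = static_cast<size_t>(p.y) * screenWidth + p.x; // look up the value for this pixel
            settingBuffer[index] = targetValue;                          // modify it to the target value
        }
    }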
In one example, the setting data is depth data in a depth buffer, and the target value is such that the region cannot pass the depth test in the rendered display.
In one example, the pass condition of the depth test is that the depth value under test is smaller than the current depth value in the depth buffer, and the data setting module 720, when setting the value of the setting data of the three-dimensional scene corresponding to the region to the target value, may be configured to: set the value of the setting data of the three-dimensional scene corresponding to the region equal to the depth value corresponding to the near clipping plane.
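For example, under the common convention that depth values lie in [0, 1] with the near clipping plane mapped to 0 and the test passing only when the incoming depth is strictly smaller than the stored depth, pre-writing the near-plane depth into the pixels of the occluded region makes every later scene fragment fail the depth test there. A sketch under these assumptions (buffer layout and names are illustrative):

    #include <cstddef>
    #include <vector>

    // Assumed depth convention: near clipping plane maps to 0.0, far plane to 1.0,
    // and a fragment passes only if its depth is strictly less than the stored depth.
    constexpr float kNearPlaneDepth = 0.0f;

    // Simplified depth test used while rasterizing the three-dimensional scene.
    bool depthTestPasses(float fragmentDepth, float storedDepth) {
        return fragmentDepth < storedDepth;
    }

    // Pre-write the near-plane depth for occluded pixels: since no fragment can have a
    // depth smaller than the near plane, those pixels are skipped during scene rendering.
    void occludeRegionInDepthBuffer(std::vector<float>& depthBuffer,
                                    const std::vector<size_t>& occludedPixelIndices) {
        for (size_t index : occludedPixelIndices) {
            depthBuffer[index] = kNearPlaneDepth;
        }
    }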
In one example, the setting data is stencil data in a stencil buffer, and the target value is such that the region cannot pass the stencil test in the rendered display.
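Analogously, a sketch of the stencil variant, assuming a one-byte stencil buffer in which scene fragments are drawn only at pixels whose stencil value equals a chosen reference value (the constants and names are illustrative):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr std::uint8_t kStencilDrawable = 0; // the scene may be drawn here
    constexpr std::uint8_t kStencilOccluded = 1; // occluded by upper-layer graphics

    // Mark the occluded region in the stencil buffer.
    void occludeRegionInStencilBuffer(std::vector<std::uint8_t>& stencilBuffer,
                                      const std::vector<size_t>& occludedPixelIndices) {
        for (size_t index : occludedPixelIndices) {
            stencilBuffer[index] = kStencilOccluded;
        }
    }

    // Simplified stencil test: only pixels still marked drawable pass.
    bool stencilTestPasses(std::uint8_t stencilValue) {
        return stencilValue == kStencilDrawable;
    }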
In one example, when obtaining the region occluded by the upper-layer graphics in the three-dimensional scene, the occlusion relation obtaining module 710 may be configured to: obtain the region occluded by the upper-layer graphics in the three-dimensional scene according to the RGBA information of the pixels in the upper-layer graphics.
In one example, the RGBA information of a pixel in the upper-layer graphics includes the opacity of the pixel in the upper-layer graphics.
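A sketch of deriving the occluded region from the opacity channel of the upper-layer graphics, assuming RGBA8 pixels and treating fully opaque pixels as occluding (the opacity threshold is an assumption, not something fixed by the method):

    #include <cstdint>
    #include <vector>

    struct Rgba8 { std::uint8_t r, g, b, a; };

    // A pixel of the upper-layer graphics occludes the scene when it is (at least) as
    // opaque as the threshold. A threshold of 255 means only fully opaque pixels occlude;
    // a lower threshold would also treat strongly translucent pixels as occluding.
    std::vector<bool> occludedRegionFromUpperLayer(const std::vector<Rgba8>& upperLayerPixels,
                                                   std::uint8_t opacityThreshold = 255) {
        std::vector<bool> occluded(upperLayerPixels.size(), false);
        for (size_t i = 0; i < upperLayerPixels.size(); ++i) {
            occluded[i] = upperLayerPixels[i].a >= opacityThreshold;
        }
        return occluded;
    }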
In another embodiment, the occlusion relation obtaining module 710 may be configured to obtain the region occluded by interface graphics in the three-dimensional scene; the data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the region to a target value, where the setting data is data representing content visibility and the target value causes the region to be omitted when rendering the three-dimensional scene; and the rendering module 730 may be configured to render the three-dimensional scene to the screen and render the interface graphics to the screen according to the setting data of the three-dimensional scene after the setting.
For example, the three-dimensional scene may be rendered to the screen first, and then the interface graphics may be rendered to the screen.
In one example, the three-dimensional scene rendering apparatus may further include a drawing module configured to draw the interface graphics to a rendering target buffer, so that when rendering the interface graphics to the screen, the rendering module 730 may draw the interface graphics in the rendering target buffer to the screen.
In one example, when obtaining the region occluded by the interface graphics in the three-dimensional scene, the occlusion relation obtaining module 710 may be configured to: obtain the RGBA information of the pixels in the interface graphics according to the interface graphics in the rendering target buffer; and obtain the region occluded by the interface graphics in the three-dimensional scene according to the RGBA information.
In another embodiment, the rendering is ray-tracing rendering. The occlusion relation obtaining module 710 may be configured to obtain the region occluded by upper-layer graphics in the three-dimensional scene; the data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the region to a target value, where the setting data is data representing content visibility in ray-tracing rendering and the target value causes the region to be omitted when rendering the three-dimensional scene; and the rendering module 730 may be configured to render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after the setting.
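In the ray-tracing case the same idea can be expressed by simply not generating rays for occluded pixels. A minimal sketch, in which traceRay stands in for the actual per-pixel ray-tracing work and the background colour is a placeholder:

    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Color { float r, g, b; };

    // Render-loop sketch: pixels marked as occluded by the upper-layer graphics are
    // skipped, so no rays are generated or traced for them. traceRay is supplied by
    // the caller and represents the real per-pixel ray-tracing work.
    void rayTraceScene(std::vector<Color>& frameBuffer,
                       const std::vector<bool>& occludedMask,
                       int width, int height, Color background,
                       const std::function<Color(int, int)>& traceRay) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                size_t index = static_cast<size_t>(y) * width + x;
                if (occludedMask[index]) {
                    frameBuffer[index] = background; // the upper-layer graphics cover this pixel
                    continue;
                }
                frameBuffer[index] = traceRay(x, y);
            }
        }
    }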
< electronic device embodiment >
An electronic device of this embodiment includes the three-dimensional scene rendering apparatus of an embodiment of the present invention; alternatively, the electronic device is the electronic device 800 shown in fig. 10 and includes:
a memory 810 for storing executable commands; and
a processor 820, configured to execute the three-dimensional scene rendering method of the method embodiments under the control of the executable commands stored in the memory 810.
In this embodiment, the electronic device may be any terminal device, for example a terminal device with a display, such as a PC, a notebook computer, a tablet computer, a mobile phone, or a head-mounted device, which is not limited herein.
< computer-readable storage Medium embodiment >
The present embodiment provides a computer-readable storage medium storing executable instructions that, when executed by a processor, perform a method as described in method embodiments of the present invention.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (23)

1. A method of rendering a three-dimensional scene, comprising:
acquiring an area in the three-dimensional scene, which is shielded by an upper layer of graph;
setting a value of setting data of the three-dimensional scene corresponding to the region as a target value, wherein the setting data is data representing content visibility, and the target value enables the region to be omitted when rendering the three-dimensional scene;
and rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
2. The method according to claim 1, wherein the setting of the value of the setting data of the three-dimensional scene corresponding to the area to the target value comprises:
acquiring pixels corresponding to the region;
searching the numerical value corresponding to each pixel in a buffer area of the three-dimensional scene for storing the set data;
and modifying each searched numerical value into the target value.
3. The method of claim 1, wherein the setting data is depth data in a depth buffer, and the target value is such that the region fails a depth test in a rendered display.
4. The method of claim 3, wherein the test passing condition of the depth test is that the depth value to be tested is smaller than the current depth value in the depth buffer, and the setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value comprises:
and setting the numerical value of the set data of the three-dimensional scene corresponding to the area to be equal to the depth value corresponding to the near clipping plane.
5. The method of claim 1, wherein the setting data is stencil data in a stencil buffer, and the target value is such that the region fails a stencil test in the rendered display.
6. The method of claim 1, wherein the acquiring of the area of the three-dimensional scene occluded by the upper layer graphics comprises:
and obtaining an area shielded by the upper layer graph in the three-dimensional scene according to the RGBA information of the pixels in the upper layer graph.
7. The method of claim 6, wherein the RGBA information for a pixel in the upper layer graphic includes an opacity of the pixel in the upper layer graphic.
8. A method of rendering a three-dimensional scene, comprising:
acquiring an area in the three-dimensional scene, which is shielded by an interface graph;
setting a value of setting data of the three-dimensional scene corresponding to the region as a target value, wherein the setting data is data representing content visibility, and the target value enables the region to be omitted when rendering the three-dimensional scene;
rendering the three-dimensional scene to a screen and rendering the interface graph to the screen according to the set data of the set three-dimensional scene.
9. The method of claim 8, wherein the method further comprises:
drawing the interface graph to a rendering target buffer area;
the rendering the interface graphics to a screen includes:
drawing the interface graphics in the rendering target buffer area to a screen.
10. The method of claim 9, wherein the acquiring of the area of the three-dimensional scene occluded by the interface graphic further comprises:
obtaining RGBA information of pixels in the interface graph according to the interface graph in the rendering target buffer area;
and obtaining an area which is shielded by an interface graph in the three-dimensional scene according to the RGBA information.
11. A method of rendering a three-dimensional scene, wherein the rendering is a ray-tracing rendering, the method comprising:
acquiring an area in the three-dimensional scene, which is shielded by an upper layer of graph;
setting a value of setting data of the three-dimensional scene corresponding to the area as a target value, wherein the setting data is data representing content visibility in ray tracing rendering, and the target value enables the area to be omitted when rendering of the three-dimensional scene is carried out;
and rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
12. A method of rendering a three-dimensional scene, comprising:
responding to a set display refreshing event, and loading data of a next frame of three-dimensional scene matched with user operation;
acquiring an area in the next three-dimensional scene, which is shielded by the next interface graph, according to the data;
setting a value of setting data corresponding to the region of the next frame of three-dimensional scene as a target value, wherein the setting data is data representing content visibility, and the target value enables the region to be omitted when rendering of the next frame of three-dimensional scene is carried out;
and rendering the next frame of three-dimensional scene to a screen according to the set data of the next frame of three-dimensional scene after setting, and rendering the next frame of interface graphics to the screen.
13. The method of claim 12, wherein the three-dimensional scene is a three-dimensional game scene, and the user operation comprises any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
14. The method of claim 12, wherein the three-dimensional scene is a three-dimensional game scene, and the display refresh event comprises at least one of expiration of a refresh time determined according to a set frame rate, receipt of an externally triggered refresh command, and restoration of a network connection with a server.
15. A method of rendering a three-dimensional scene, comprising:
responding to a set display refreshing event, and loading data of a next frame of interface graphics matched with user operation;
acquiring an area in a next three-dimensional scene, which is shielded by the next frame of interface graphics, according to the data;
setting a value of setting data corresponding to the region of the next frame of three-dimensional scene as a target value, wherein the setting data is data representing content visibility, and the target value enables the region to be omitted when rendering of the next frame of three-dimensional scene is carried out;
and rendering the next frame of three-dimensional scene to a screen according to the set data of the next frame of three-dimensional scene after setting, and rendering the next frame of interface graphics to the screen.
16. The method of claim 15, wherein the three-dimensional scene is a three-dimensional game scene, and the user operation comprises any one or more of a keyboard operation, a mouse operation, a touch screen operation, a limb sensing operation, and a gravity sensing operation.
17. A method of rendering a three-dimensional scene, comprising:
acquiring an invisible area of the three-dimensional scene;
setting a numerical value of setting data of the three-dimensional scene corresponding to the invisible area as a target value, wherein the setting data is data representing content visibility, and the target value enables the invisible area to be omitted when rendering of the three-dimensional scene is carried out;
and rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
18. The method of claim 17, wherein the acquiring of the invisible region of the three-dimensional scene comprises any one or a combination of any of:
acquiring an area in the three-dimensional scene, which is shielded by an upper layer of graph;
acquiring a region corresponding to an object with an imaging size smaller than a preset threshold value in the three-dimensional scene;
and acquiring a non-attention area of a current visual angle main body in the three-dimensional scene.
19. The method of claim 17, wherein said acquiring an invisible region of the three-dimensional scene comprises:
providing a setting entrance for setting the invisible area;
acquiring the invisible area input through the setting entrance.
20. The method of claim 17, wherein said acquiring an invisible region of the three-dimensional scene comprises:
providing a setting entrance for setting a visible area in the three-dimensional scene;
acquiring the visible region input through the setting entrance;
and obtaining the invisible area according to the visible area.
21. An apparatus for rendering a three-dimensional scene, comprising:
the occlusion relation acquisition module is used for acquiring an area occluded by an upper layer graph in the three-dimensional scene;
a data setting module, configured to set a value of setting data of the three-dimensional scene, which corresponds to the region, as a target value, where the setting data is data representing content visibility, and the target value is such that the region is omitted when rendering the three-dimensional scene;
and the rendering module is used for rendering the three-dimensional scene to a screen according to the set data of the set three-dimensional scene.
22. An electronic device comprising the apparatus of claim 21; alternatively, the electronic device comprises:
a memory for storing executable commands;
a processor for performing the method of any of claims 1-20 under the control of the executable command.
23. A computer-readable storage medium storing executable instructions that, when executed by a processor, perform the method of any of claims 1-20.
CN201910888032.2A 2019-09-19 2019-09-19 Three-dimensional scene rendering method and device and electronic equipment Pending CN112541960A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910888032.2A CN112541960A (en) 2019-09-19 2019-09-19 Three-dimensional scene rendering method and device and electronic equipment
PCT/CN2020/115761 WO2021052392A1 (en) 2019-09-19 2020-09-17 Three-dimensional scene rendering method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910888032.2A CN112541960A (en) 2019-09-19 2019-09-19 Three-dimensional scene rendering method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112541960A true CN112541960A (en) 2021-03-23

Family

ID=74883373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910888032.2A Pending CN112541960A (en) 2019-09-19 2019-09-19 Three-dimensional scene rendering method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN112541960A (en)
WO (1) WO2021052392A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831631A (en) * 2012-08-23 2012-12-19 上海创图网络科技发展有限公司 Rendering method and rendering device for large-scale three-dimensional animations
CN103785174A (en) * 2014-02-26 2014-05-14 北京智明星通科技有限公司 Method and system for displaying tens of thousands of people on same screen of game
CN104331918A (en) * 2014-10-21 2015-02-04 无锡梵天信息技术股份有限公司 Occlusion culling and acceleration method for drawing outdoor ground surface in real time based on depth map
WO2017213945A1 (en) * 2016-06-10 2017-12-14 Hexagon Technology Center Gmbh Systems and Methods for Accessing Visually Obscured Elements of a Three-Dimensional Model
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023280241A1 (en) * 2021-07-09 2023-01-12 花瓣云科技有限公司 Image picture rendering method and electronic device
CN113436325A (en) * 2021-07-30 2021-09-24 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113436325B (en) * 2021-07-30 2023-07-28 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113963103A (en) * 2021-10-26 2022-01-21 中国银行股份有限公司 Rendering method of three-dimensional model and related device
CN116630516A (en) * 2023-06-09 2023-08-22 广州三七极耀网络科技有限公司 3D characteristic-based 2D rendering ordering method, device, equipment and medium
CN116630516B (en) * 2023-06-09 2024-01-30 广州三七极耀网络科技有限公司 3D characteristic-based 2D rendering ordering method, device, equipment and medium

Also Published As

Publication number Publication date
WO2021052392A1 (en) 2021-03-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination