WO2021052392A1 - Three-dimensional scene rendering method and apparatus, and electronic device - Google Patents

Three-dimensional scene rendering method and apparatus, and electronic device

Info

Publication number
WO2021052392A1
Authority
WO
WIPO (PCT)
Prior art keywords
three-dimensional scene
rendering
area
setting data
graphics
Prior art date
2019-09-19
Application number
PCT/CN2020/115761
Other languages
English (en)
French (fr)
Inventor
杜靖
Original Assignee
阿里巴巴集团控股有限公司
Priority date: 2019-09-19 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2021052392A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Definitions

  • The present invention relates to the field of three-dimensional scene display and, more particularly, to a three-dimensional scene rendering method, a three-dimensional scene rendering apparatus, an electronic device, and a computer-readable storage medium.
  • Modeling is the use of points, lines, surfaces, textures, materials, and other elements to construct realistic objects and scenes, and is the basis for constructing three-dimensional scenes.
  • Rendering computes and displays the visual image of the model under factors such as viewpoint, lighting, and motion trajectory.
  • The rendering process of a three-dimensional scene places a heavy performance load on components such as the CPU, GPU, and memory, putting considerable pressure on the device. How to optimize the rendering process of a three-dimensional scene and reduce the consumption of device performance has therefore become a problem to be solved.
  • An objective of the embodiments of the present invention is to provide a new technical solution for rendering of a three-dimensional scene.
  • According to a first aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
  • Optionally, setting the value of the setting data of the three-dimensional scene corresponding to the area to a target value includes:
  • Optionally, the setting data is depth data in a depth buffer, and the target value makes the area unable to pass the depth test in the rendering display.
  • Optionally, the test passing condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer, and setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value includes:
  • setting the value of the setting data of the three-dimensional scene corresponding to the area equal to the depth value corresponding to the near clipping plane.
  • Optionally, the setting data is stencil data in a stencil buffer, and the target value makes the area unable to pass the stencil test in the rendering display.
  • Optionally, obtaining the area of the three-dimensional scene that is occluded by upper-layer graphics includes:
  • obtaining, according to the RGBA information of the pixels in the upper-layer graphics, the area of the three-dimensional scene that is occluded by the upper-layer graphics.
  • Optionally, the RGBA information of the pixels in the upper-layer graphics includes the opacity of the pixels in the upper-layer graphics.
  • According to a second aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
  • Optionally, the method further includes:
  • Optionally, rendering the interface graphics to the screen includes:
  • drawing the interface graphics in the render target buffer to the screen.
  • Optionally, obtaining the area of the three-dimensional scene that is occluded by the interface graphics further includes:
  • obtaining, according to the RGBA information, the area of the three-dimensional scene that is occluded by the interface graphics.
  • According to a third aspect of the present invention, there is provided a method for rendering a three-dimensional scene, wherein the rendering is ray tracing rendering, and the method includes:
  • According to a fourth aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
  • rendering the next frame of the three-dimensional scene to the screen, and then rendering the next frame of interface graphics to the screen.
  • Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operations include any one or more of keyboard operations, mouse operations, touch screen operations, body-sensing operations, and gravity-sensing operations.
  • Optionally, the three-dimensional scene is a three-dimensional game scene, and the display refresh event includes at least one of: expiration of a refresh time determined according to a set frame rate, receipt of an externally triggered refresh instruction, and restoration of the network connection with the server.
  • According to a fifth aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
  • rendering the next frame of the three-dimensional scene to the screen, and then rendering the next frame of interface graphics to the screen.
  • Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operations include any one or more of keyboard operations, mouse operations, touch screen operations, body-sensing operations, and gravity-sensing operations.
  • According to a sixth aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
  • Optionally, obtaining the invisible area of the three-dimensional scene includes any one or a combination of any of the following:
  • Optionally, obtaining the invisible area of the three-dimensional scene includes:
  • Optionally, obtaining the invisible area of the three-dimensional scene includes:
  • obtaining the invisible area according to the visible area.
  • According to a seventh aspect of the present invention, there is provided an apparatus for rendering a three-dimensional scene, including:
  • an occlusion relationship acquisition module, used to obtain the area of the three-dimensional scene that is occluded by upper-layer graphics;
  • a data setting module, used to set the value of the setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
  • a rendering module, configured to render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
  • According to an eighth aspect of the present invention, there is provided an electronic device including the apparatus described in the seventh aspect of the present invention; or, the device includes:
  • a memory, used to store executable commands; and
  • a processor, configured to execute, under the control of the executable commands, the method described in any one of the first to fifth aspects of the present invention.
  • According to a ninth aspect of the present invention, there is provided a computer-readable storage medium storing executable commands; when the executable commands are executed by a processor, the method described in any one of the first to fifth aspects of the present invention is executed.
  • The rendering method in the embodiments of the present invention first obtains the area of the three-dimensional scene that is occluded by upper-layer graphics, and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
  • Figure 1 is a schematic diagram of a terminal device displaying a three-dimensional scene;
  • Figure 2 is a schematic diagram of the rendering and display process;
  • Figure 3 is a schematic diagram of a hardware configuration that can be used to implement a three-dimensional scene rendering method according to an embodiment of the present invention;
  • Figure 4 is a flowchart of a method for rendering a three-dimensional scene provided by an embodiment of the present invention;
  • Figure 5 is a schematic diagram of the principle of depth testing;
  • Figure 6 is a flowchart of another method for rendering a three-dimensional scene according to an embodiment of the present invention;
  • Figure 7 is a schematic diagram of an example provided by an embodiment of the present invention;
  • Figure 8 is a schematic diagram of another example provided by an embodiment of the present invention;
  • Figure 9 is a schematic diagram of a three-dimensional scene rendering apparatus provided by an embodiment of the present invention;
  • Figure 10 is a schematic diagram of an electronic device provided by an embodiment of the present invention.
  • In the rendering and display of a three-dimensional scene, the scene may be occluded by other upper-layer graphics.
  • For example, when a terminal device displays a three-dimensional scene as in Figure 1, a dialog box for the exit operation is also displayed on the upper layer of the scene.
  • In addition, text reading "Player: Zhang San" is displayed above the head of a character in the three-dimensional scene, and this text is always located on the upper layer of the scene.
  • The areas occluded by upper-layer graphics such as interface graphics and overhead text will never be displayed on the screen; if rendering these areas is avoided in advance, the corresponding hardware overhead can be reduced and rendering performance improved.
  • In addition, the rendering and display pipeline includes a test mechanism for accepting or rejecting fragments of the three-dimensional scene.
  • As shown in Figure 2, fragments of the three-dimensional scene can undergo depth testing, stencil testing, and so on in sequence, and only the fragments that pass the tests undergo subsequent processing and are finally displayed on the screen.
  • A fragment that fails any test is discarded, that is, no further processing is performed on it.
  • The above tests are performed based on the buffer data corresponding to the three-dimensional scene, for example, based on the depth values in the depth buffer and the stencil values in the stencil buffer. By setting and modifying these values, one can control whether the corresponding fragments are rendered to the screen.
  • Based on the above two observations, the embodiments of the present invention provide a rendering method that first obtains the area of the three-dimensional scene that is occluded by upper-layer graphics and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and improving rendering performance.
  • Figure 3 shows a schematic diagram of a hardware device that can be used to implement an embodiment of the present invention.
  • the hardware device is a terminal device 100, which includes a processor 110, a memory 120, a communication device 130, an interface device 140, an input device 150, and a display device 160.
  • The processor 110 may be a central processing unit (CPU), a microcontroller unit (MCU), or the like.
  • the memory 120 includes, for example, ROM (Read Only Memory), RAM (Random Access Memory), nonvolatile memory such as a hard disk, and the like.
  • the communication device 130 can perform wired or wireless communication, for example.
  • the interface device 140 includes, for example, a USB interface, a headphone interface, and the like.
  • the input device 150 may include, for example, a touch screen, a keyboard, and the like.
  • the display device 160 may be used to display a three-dimensional scene, and includes, for example, a liquid crystal display screen, a touch display screen, and the like.
  • In this embodiment, the display device 160 further includes a graphics card, and the graphics card includes a graphics processing unit (GPU) that is dedicated to graphics rendering.
  • The three-dimensional scene rendering method provided by the embodiments of the present invention may be executed by the processor 110 or by the GPU; this is not limited here.
  • the terminal device 100 may be any device that supports the display of a three-dimensional scene, such as a smart phone, a portable computer, a desktop computer, and a tablet computer.
  • In the embodiments of the present invention, the memory 120 of the terminal device 100 is used to store instructions, and the instructions are used to control the processor 110 or the GPU to operate so as to support the implementation of the three-dimensional scene rendering method according to any embodiment of the present invention.
  • Those skilled in the art can design instructions according to the solutions disclosed in the present invention. How instructions control the processor to operate is well known in the art, so it is not described in detail here.
  • The terminal device 100 in the embodiments of the present invention may involve only some of these devices, for example, only the processor 110, the memory 120, and the display device 160.
  • The hardware device shown in Figure 3 is only explanatory and is by no means intended to limit the present invention, its application, or its use.
  • This embodiment provides a method for rendering a three-dimensional scene; the method is implemented by, for example, the terminal device 100 in Figure 3. As shown in Figure 4, the method includes the following steps S1100-S1300:
  • Step S1100: Obtain the area of the three-dimensional scene that is occluded by the upper-layer graphics.
  • The three-dimensional scene in this embodiment is stereoscopic imagery displayed through three-dimensional technology, such as a three-dimensional picture in a game or in a merchandise display.
  • In one example, the display processing of the 3D scene is divided into two stages: modeling and rendering.
  • Modeling is to establish a 3D model that contains the key parameters of the object, and rendering is to add effects such as perspective, light and shadow, and texture to the established 3D model and display it on the screen.
  • In this embodiment, the upper-layer graphics are graphics that are displayed above the three-dimensional scene and occlude it, such as interface graphics in the user interface, text above a character's head in a three-dimensional game, and "small windows" used for video playback.
  • The upper-layer graphics in this embodiment may be two-dimensional graphics or three-dimensional graphics; this is not limited here.
  • The occlusion of the three-dimensional scene by the upper-layer graphics makes the occluded area invisible or almost invisible when finally displayed on the screen.
  • In one example, the specific implementation of obtaining the area occluded by the upper-layer graphics in the three-dimensional scene is: obtaining, according to the RGBA information of the pixels in the upper-layer graphics, the area of the three-dimensional scene that is occluded by the upper-layer graphics.
  • A pixel is the smallest indivisible unit or element of an image.
  • Each pixel in the upper-layer graphics has corresponding RGBA information, where R represents red, G represents green, B represents blue, and A represents opacity (alpha).
  • In this example, since the upper-layer graphics are always above the three-dimensional scene, whether the three-dimensional scene at a pixel position is visible can be judged from the RGBA information of that pixel, that is, the occlusion relationship between the upper-layer graphics and the three-dimensional scene can be obtained.
  • In a more specific example, the area of the three-dimensional scene occluded by the upper-layer graphics can be obtained according to the opacity of the pixels in the upper-layer graphics.
  • The opacity of a pixel reflects the degree of transparency or visibility of the pixel itself.
  • For a completely transparent pixel in the upper-layer graphics, the 3D scene below shows through the pixel, so the pixel does not occlude the 3D scene.
  • For a completely opaque pixel in the upper-layer graphics, the 3D scene below cannot show through the pixel, so the pixel occludes the 3D scene.
  • For a semi-transparent pixel, whether it occludes the three-dimensional scene can be determined according to its specific degree of opacity.
  • In one example, the value range of pixel opacity is 0%-100%, where an opacity of 0% indicates that the pixel is completely transparent, an opacity of 100% indicates that the pixel is completely opaque, and an opacity between the two indicates that the pixel is semi-transparent.
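  • To make the opacity-based judgment concrete, the following is a minimal sketch in Python (illustrative, not part of the patent), assuming the upper-layer graphic is available as an RGBA array with alpha normalized to [0, 1]; the function name and threshold parameter are illustrative assumptions:

```python
import numpy as np

def occluded_mask(upper_rgba, alpha_threshold=1.0):
    """Return a boolean mask marking pixels where the upper-layer
    graphic occludes the 3D scene below it.

    upper_rgba: (H, W, 4) array; channel 3 is opacity (alpha) in [0, 1].
    alpha_threshold: opacity at or above which a pixel is treated as
    occluding (1.0 means only fully opaque pixels occlude).
    """
    return upper_rgba[..., 3] >= alpha_threshold

# A 2x2 upper-layer graphic: left column opaque, right column transparent.
ui = np.zeros((2, 2, 4))
ui[:, 0, 3] = 1.0
print(occluded_mask(ui))
# [[ True False]
#  [ True False]]
```

  • A threshold below 1.0 would also treat sufficiently opaque semi-transparent pixels as occluding, which is one possible way to handle the semi-transparent case described above.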
  • In other examples, the area occluded by the upper-layer graphics in the three-dimensional scene can also be obtained according to the RGB color information of the pixels in the upper-layer graphics. For example, a pixel whose RGB color is pure black in the upper-layer graphics can be considered to occlude the lower-layer three-dimensional scene.
  • The specific judgment standard can be determined according to the actual situation and is not restricted here.
  • After the area occluded by the upper-layer graphics in the 3D scene is obtained, the following step S1200 is performed:
  • Step S1200: Set the value of the setting data of the three-dimensional scene corresponding to the occluded area to the target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
  • As described above, the setting data is data indicating the visibility of content.
  • When the setting data takes one value, the area of the three-dimensional scene corresponding to the setting data is visible, that is, the area is rendered to the screen.
  • When the setting data takes another value, the area of the three-dimensional scene corresponding to the setting data is not visible, that is, the area will not be rendered to the screen.
  • The setting data is, for example, depth data in the depth buffer, stencil data in the stencil buffer, data representing content visibility in ray tracing rendering, and the like.
  • In step S1200, the value of the setting data is set to the target value so that the occluded area in the three-dimensional scene is omitted during rendering.
  • That is, the target value is the value that makes the corresponding area invisible.
  • In one example, setting the value of the setting data corresponding to the occluded area of the three-dimensional scene to the target value includes: obtaining the pixels corresponding to the occluded area; finding, in the buffer for storing the setting data of the three-dimensional scene, the value corresponding to each pixel; and modifying each value found to the target value.
  • The area of the three-dimensional scene occluded by the upper-layer graphics corresponds to a plurality of pixels, and at each of these pixel positions the three-dimensional scene is occluded by the upper-layer graphics.
  • In one example, a pixel-by-pixel processing method can be adopted: each pixel of the 3D scene is traversed to determine the occlusion between the 3D scene and the upper-layer graphics at that pixel position, and the pixels at which the 3D scene is occluded by the upper-layer graphics are collected, yielding the area of the 3D scene that is occluded by the upper-layer graphics.
  • In this way, the pixels corresponding to the occluded area are obtained at the same time.
  • The buffer for storing the setting data of the three-dimensional scene is a dedicated area in memory for storing the setting data and includes, for example, a depth buffer, a stencil buffer, and so on.
  • In one example, the setting data in step S1200 is depth data, and the target value in step S1200 makes the occluded area unable to pass the depth test in the rendering display; that is, rendering of the occluded area is avoided based on the depth test.
  • The depth test is a process of selecting display content based on depth data; it solves the visibility problem at a given viewing angle caused by mutual occlusion between three-dimensional models.
  • As shown in Figure 5, from left to right are the camera (virtual in the rendering display), the P1 plane, and the P2 plane, where both P1 and P2 are models in the three-dimensional scene.
  • A1 and A2 are points located on the P1 plane and the P2 plane, respectively, and both points are imaged at pixel A in the camera.
  • The distance from point A1 to the camera is the depth Z1 of A1, and the distance from point A2 to the camera is the depth Z2 of A2. It is easy to see that A1 is closer to the camera than A2, that is, Z1 is smaller than Z2; therefore, A1 blocks A2 during imaging, so that A2 is not displayed at pixel A.
  • The specific process of the depth test is, for example, as follows: first, the depth data corresponding to pixel A in the depth buffer is initialized, for example, set to the depth value Max of the far clipping plane.
  • the far clipping plane here is the farthest drawing position relative to the camera.
  • the near clipping plane is the closest drawing position relative to the camera.
  • In this embodiment, the depth test is applied to the occlusion display of the three-dimensional scene and the upper-layer graphics.
  • Specifically, the value of the depth data in the depth buffer corresponding to the occluded area is set to a target value that makes the occluded area fail the depth test.
  • In one example, the value of the depth data in the depth buffer corresponding to the occluded area can be set based on the pixels corresponding to the occluded area.
  • For example, suppose the occluded area includes three pixels X, Y, and Z.
  • The target value is a value that makes the occluded area fail the depth test, that is, the target value makes each pixel corresponding to the occluded area fail the depth test.
  • In one example, the target value is selected as 0, that is, the depth data in the depth buffer corresponding to each pixel in the occluded area is set to 0.
  • Here the depth data is normalized: the depth of the near clipping plane is 0, and the depth of the far clipping plane is 1.
  • Since the depth value of each point in the three-dimensional scene is greater than or equal to 0, for each pixel of the occluded area, the depth value of the corresponding point on the three-dimensional model cannot meet the passing condition of being less than 0 and thus cannot be drawn on the screen; that is, the pixel cannot pass the depth test.
  • In other examples, the target value can be determined according to the specific test passing conditions; this is not limited here.
  • Rendering based on the depth test can effectively avoid rendering the area of the 3D scene occluded by upper-layer graphics, and its performance cost is low.
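  • The following sketch (illustrative Python, not the patent's implementation) simulates this mechanism: the depth buffer entries of the occluded pixels are set to the near-plane value 0, so incoming scene fragments, whose depths are all at least 0, can never satisfy the less-than pass condition at those pixels:

```python
import numpy as np

H, W = 4, 4
NEAR, FAR = 0.0, 1.0                   # normalized depth range

depth_buffer = np.full((H, W), FAR)    # initialized to the far clipping plane

# Occluded area obtained earlier, e.g. from the alpha mask above.
occluded = np.zeros((H, W), dtype=bool)
occluded[:2, :2] = True

# Step S1200: write the target value (near-plane depth 0) for the
# occluded pixels.
depth_buffer[occluded] = NEAR

# Depth test, pass condition: incoming depth < current buffer depth.
fragment_depth = np.full((H, W), 0.5)  # depths of incoming scene fragments
passes = fragment_depth < depth_buffer
print(passes)                          # False in the occluded 2x2 corner
```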
  • In another example, the setting data in step S1200 is stencil data in the stencil buffer, and the target value in step S1200 makes the occluded area unable to pass the stencil test in the rendering display; that is, rendering of the occluded area is avoided based on the stencil test.
  • The stencil test is a process of selecting display content based on stencil data.
  • Its implementation is similar to the depth test: the stencil value of a point on the 3D model is compared with the current stencil value in the stencil buffer, and the points that pass the test are drawn to the corresponding pixels.
  • The stencil test can be used to achieve effects such as displaying the borders of objects in a three-dimensional scene. For example, the stencil value corresponding to the border area of the object is set to 1 in advance, and the stencil value corresponding to the non-border area is set to another value; the pass condition of the stencil test is set so that the stencil value under test must equal the current stencil value in the stencil buffer, and the initial stencil value in the stencil buffer is set to 1. It is easy to see that only the border area passes the stencil test and is drawn on the screen.
  • In this embodiment, the stencil test is applied to the occlusion display of the three-dimensional scene and the upper-layer graphics.
  • Specifically, the value of the stencil data in the stencil buffer corresponding to the occluded area is set to a target value that makes the occluded area fail the stencil test.
  • For example, suppose the stencil value of the three-dimensional scene is greater than 0 and the stencil test pass condition is that the stencil value under test equals the current stencil value in the stencil buffer; the stencil data in the stencil buffer corresponding to the occluded area is then set to -1. In this way, the occluded area cannot pass the stencil test and will not be rendered to the screen.
  • Rendering based on the stencil test can likewise effectively avoid rendering the occluded area, and its performance cost is low.
  • In practice, the setting data can be set to the target value based on either the depth test or the stencil test, or based on both tests at the same time; this is not limited here.
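  • A corresponding sketch for the stencil variant (again illustrative, assuming the equality pass condition and the values used in the example above):

```python
import numpy as np

H, W = 4, 4
stencil_buffer = np.full((H, W), 1)    # scene fragments carry stencil value 1

occluded = np.zeros((H, W), dtype=bool)
occluded[:2, :2] = True

# Target value -1 can never equal a fragment's stencil value (> 0),
# so the occluded pixels fail the "equal" stencil test.
stencil_buffer[occluded] = -1

fragment_stencil = np.full((H, W), 1)
passes = fragment_stencil == stencil_buffer
print(passes)                          # False in the occluded 2x2 corner
```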
  • Step S1300: Render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
  • In this step, the three-dimensional scene is rendered according to the setting data after setting. It is easy to see that the area of the 3D scene occluded by the upper-layer graphics is omitted during rendering, while the area not occluded by the upper-layer graphics is rendered to the screen in the normal manner.
  • For the specifics of rendering, refer to the depth test and stencil test processes involved in the description of the target value above, which are not repeated here.
  • The rendering method in this embodiment first obtains the area of the three-dimensional scene that is occluded by upper-layer graphics and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
  • This embodiment also provides another three-dimensional scene rendering method, which is suitable for the situation where interface graphics and the three-dimensional scene are displayed together, and which can also be implemented by the terminal device 100 shown in Figure 3. As shown in Figure 6, the method includes the following steps S2100-S2300:
  • Step S2100: Obtain the area of the three-dimensional scene that is occluded by the interface graphics.
  • Step S2200: Set the value of the setting data of the three-dimensional scene corresponding to the occluded area to the target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
  • Step S2300: According to the setting data of the three-dimensional scene after setting, render the three-dimensional scene to the screen, and render the interface graphics to the screen.
  • For the specific implementation of steps S2100-S2300, refer to the above description of steps S1100-S1300.
  • The interface graphics in this embodiment are the graphics of the interactive interface in human-computer interaction.
  • Generally, the interface graphics are displayed above the 3D scene.
  • In one example, the three-dimensional graphics rendering method provided in this embodiment further includes: drawing the interface graphics to a render target buffer.
  • Accordingly, rendering the interface graphics to the screen in step S2300 includes: rendering the interface graphics in the render target buffer to the screen.
  • The render target buffer is a memory area in the rendering display that stores drawn graphics.
  • Of course, the interface graphics can also be drawn directly on the screen; this is not limited here.
  • In one example, obtaining the area occluded by the interface graphics in the three-dimensional scene in step S2100 includes: obtaining the RGBA information of the pixels in the interface graphics according to the interface graphics in the render target buffer; and obtaining, according to the RGBA information, the area of the three-dimensional scene that is occluded by the interface graphics. That is, in this example, the RGBA information of the interface graphics is obtained from the interface graphics drawn in the render target buffer; in this way, the RGBA information of the finally displayed interface graphics can be obtained conveniently.
  • Figure 7 illustrates, by way of example, the implementation process of the three-dimensional scene rendering method in this embodiment.
  • In this example, an interface graphic 7-2 related to the exit operation is displayed over the three-dimensional scene 7-1, and the final display effect is shown as 7-5 in Figure 7.
  • Note that the graphics in Figure 7 are only used to demonstrate the rendering process and do not mean that the relevant graphics have been displayed on the screen.
  • In this example, the interface graphic 7-2 is rectangular.
  • In it, the text prompt "Exit" and the selection buttons "Yes" and "No" are displayed, and the opacity of all pixels of interface graphic 7-2 is 100%; that is, all pixels of the interface graphic are opaque and occlude the 3D scene below. From this, the area 7-3 of the three-dimensional scene that is occluded by interface graphic 7-2 can be obtained.
  • The value of the setting data corresponding to area 7-3 in the three-dimensional scene is then modified: for example, the depth values in the depth buffer corresponding to area 7-3 in three-dimensional scene 7-1 are modified to 0, so that the depth values of the corresponding points on the 3D model cannot satisfy the pass condition of being less than 0 and the points cannot be drawn on the screen.
  • This embodiment also provides another method for rendering a three-dimensional scene.
  • The rendering in this method is ray tracing rendering.
  • This method can also be implemented by the terminal device 100 shown in Figure 3 and includes the following steps S3100-S3300:
  • Step S3100: Obtain the area of the three-dimensional scene that is occluded by the upper-layer graphics.
  • The upper-layer graphics can be, for example, interface graphics, or the text above a character's head in a game application, etc.; this is not limited here.
  • Step S3200: Set the value of the setting data of the three-dimensional scene corresponding to the occluded area to the target value, where the setting data is the data representing the visibility of content in ray tracing rendering, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
  • Step S3300: According to the setting data of the three-dimensional scene after setting, render the three-dimensional scene to the screen.
  • For the specific implementation of steps S3100-S3300, refer to the above description of steps S1100-S1300.
  • Ray tracing rendering is one method of rendering a three-dimensional scene.
  • In this method, the setting data is the data representing content visibility in ray tracing rendering, and whether to perform ray tracing processing for a pixel can be determined according to the setting data.
  • For example, the setting data can take the value 0 or 1: setting data of 1 for a pixel means that ray tracing is performed for that pixel, and setting data of 0 means that no ray tracing is performed for that pixel.
  • In this example, the target value for the area occluded by the upper-layer graphics is 0, and the value for the unoccluded area is 1. In this way, rendering of the occluded area is avoided in ray tracing rendering, thereby improving ray tracing rendering performance.
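  • As an illustration, the sketch below (Python, with a dummy per-pixel shading function standing in for a real ray tracer) skips ray generation wherever the visibility value is 0:

```python
import numpy as np

def ray_trace_pixel(x, y):
    """Stand-in for a real per-pixel ray tracing routine."""
    return (x + y) % 255

H, W = 4, 4
visibility = np.ones((H, W), dtype=np.uint8)  # 1 = trace, 0 = skip
visibility[:2, :2] = 0                        # occluded by upper-layer graphics

image = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        if visibility[y, x] == 1:             # trace only visible pixels
            image[y, x] = ray_trace_pixel(x, y)
print(image)                                  # untouched where tracing was skipped
```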
  • This embodiment also provides another method for rendering a three-dimensional scene, implemented by, for example, the terminal device 100 shown in Figure 3; the method includes the following steps S4100-S4400:
  • Step S4100: In response to a set display refresh event, load the data of the next frame of the three-dimensional scene that matches the user operation.
  • In one example, the set display refresh event may include, for example, at least one of: the refresh time determined according to a set frame rate expiring, an externally triggered refresh instruction being received, and the network connection with the server being restored.
  • For example, if the frame rate is set to 60 frames per second, that is, the display refreshes 60 times per second, then each time the refresh time determined by this frame rate expires, the data of the next frame of the 3D scene that matches the user operation is loaded and rendered, so as to refresh the display.
  • For another example, when the user triggers a refresh instruction through a refresh button on the game interface, the terminal device 100 responds to the refresh instruction by loading the data of the next frame of the 3D scene that matches the user operation for rendering, so as to refresh the display.
  • For another example, after the terminal device 100 is disconnected from the server due to a network abnormality, it obtains the latest basic data from the server once the network connection is restored, and then loads, based on that basic data, the data of the next frame of the 3D scene that matches the user operation for rendering, so as to refresh the display.
  • In one example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and user operations include any one or more of keyboard operations, mouse operations, touch screen operations, body-sensing operations, and gravity-sensing operations.
  • Generally, a specific user operation causes the next frame of the three-dimensional scene and/or the next frame of interface graphics to change.
  • For example, when a user operation moves a game character, the next frame of the three-dimensional scene is related to the character's movement position and direction, that is, it matches the user operation.
  • Step S4200: According to the loaded data, obtain the area in the next frame of the three-dimensional scene that is occluded by the next frame of interface graphics.
  • Step S4300: Set the value of the setting data of the next frame of the three-dimensional scene corresponding to the area to the target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered.
  • Step S4400: Render the next frame of the 3D scene to the screen according to the setting data of the next frame of the 3D scene after setting, and then render the next frame of interface graphics to the screen.
  • After step S4400, the refresh of the display content of the next frame is completed.
  • For the specific implementation of steps S4200-S4400, refer to the above description of steps S1100-S1300.
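  • The following sketch ties steps S4100-S4400 together in one refresh handler; the helper functions are hypothetical placeholders for the operations the text describes, not a real rendering API:

```python
import numpy as np

H, W = 2, 2

def load_next_scene_frame(user_op):
    # Hypothetical placeholder: load next-frame scene data matching
    # the user operation (step S4100).
    return {"depth": np.full((H, W), 1.0)}

def next_ui_frame(user_op):
    # Hypothetical placeholder: next frame of interface graphics as an
    # RGBA array; here one pixel is opaque.
    ui = np.zeros((H, W, 4))
    ui[0, 0, 3] = 1.0
    return ui

def on_display_refresh(user_op):
    scene = load_next_scene_frame(user_op)     # S4100
    ui = next_ui_frame(user_op)
    occluded = ui[..., 3] >= 1.0               # S4200: occlusion mask
    scene["depth"][occluded] = 0.0             # S4300: write target value
    # S4400: render the scene first (pixels whose depth was set to the
    # near-plane value fail the depth test and are skipped), then draw
    # the interface graphics on top.
    print("pixels skipped this frame:", int(occluded.sum()))

on_display_refresh({"touch": "refresh-button"})  # e.g. a touch operation
```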
  • This embodiment also provides another method for rendering a three-dimensional scene, implemented by, for example, the terminal device 100 shown in Figure 3; the method includes the following steps S5100-S5400:
  • Step S5100: In response to a set display refresh event, obtain the next frame of interface graphics that matches the user operation.
  • In one example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and user operations include any one or more of keyboard operations, mouse operations, touch screen operations, body-sensing operations, and gravity-sensing operations.
  • Generally, a specific user operation corresponds to a specific interface graphic.
  • For example, in Figure 8, the exit interface shown in 8-2 corresponds to the user's operation of tapping the button in 8-1.
  • In one example, in response to a user operation, the terminal device obtains the interface graphic information corresponding to the user operation according to a preset correspondence, for example, the content, position, size, opacity, and other parameters of the interface graphic.
  • Step S5200: Obtain the area in the next frame of the three-dimensional scene that is occluded by the next frame of interface graphics.
  • It is easy to see that the next frame of the three-dimensional scene also matches the user operation.
  • Step S5300: Set the value of the setting data of the next frame of the three-dimensional scene corresponding to the area to the target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered.
  • Step S5400: Render the next frame of the 3D scene to the screen according to the setting data of the next frame of the 3D scene after setting, and then render the next frame of interface graphics to the screen.
  • For example, if, before the display refresh, the user taps a button to trigger an exit operation, then during the display refresh the mobile terminal 100 obtains, according to the detected button-tap operation, the interface graphic corresponding to that operation as the next frame of interface graphics.
  • In Figure 8, this operation corresponds to the exit interface shown in 8-2; when rendering the next frame, the area occluded by the exit interface in the next frame of the 3D scene is obtained, and the value of the setting data of the next frame of the three-dimensional scene corresponding to that area is set to the target value, so that the area is omitted when the next frame of the three-dimensional scene is rendered.
  • After the rendering and display of the next frame of the three-dimensional scene and the next frame of interface graphics are completed according to step S5400, the display screen shown in 8-3 is obtained.
  • For the specific implementation of steps S5200-S5400, refer to the above description of steps S1100-S1300.
  • This embodiment also provides another method for rendering a three-dimensional scene, implemented by, for example, the terminal device 100 shown in Figure 3; the method includes the following steps S6100-S6300:
  • Step S6100: Obtain the invisible area of the three-dimensional scene.
  • In this embodiment, obtaining the invisible area of the three-dimensional scene includes any one or a combination of any of the following: obtaining the area of the three-dimensional scene that is occluded by upper-layer graphics; obtaining the area corresponding to objects in the three-dimensional scene whose imaging size is smaller than a preset threshold; and obtaining the non-attention area of the current-view subject in the three-dimensional scene.
  • For obtaining the area of the three-dimensional scene that is occluded by upper-layer graphics, refer to the description of step S1100 above.
  • The imaging size is the size of the image that an object in the three-dimensional scene forms on the screen.
  • The imaging size of an object can be obtained from information such as the distance from the object to the screen and the size of the object itself.
  • The preset threshold is a preset imaging size. When the imaging size of an object is smaller than the preset threshold, the image of the object is significantly smaller than the image of the entire three-dimensional scene, and the area corresponding to the object can be considered an invisible area. In 3D rendering, no rendering calculation needs to be performed for the areas corresponding to such objects; processing methods such as directly displaying the background can be adopted.
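  • One way to estimate the imaging size is sketched below under a simple pinhole-camera assumption (the formula and the 4-pixel threshold are illustrative, not specified by the patent):

```python
def projected_size_px(object_size, distance, focal_px):
    """Approximate on-screen size in pixels under a pinhole model:
    size_px = focal_px * object_size / distance."""
    return focal_px * object_size / distance

THRESHOLD_PX = 4.0  # assumed preset threshold

# A 0.1-unit object 200 units away with a 500-pixel focal length
# images to 0.25 px, below the threshold, so its area can be treated
# as invisible and skipped during rendering.
print(projected_size_px(0.1, 200.0, 500.0) < THRESHOLD_PX)  # True
```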
  • The current-view subject is, for example, a person observing the three-dimensional scene from the main perspective.
  • Different subjects have different areas of attention (or areas of interest) when observing a three-dimensional scene.
  • For example, the areas corresponding to flowers and butterflies can be regarded as non-attention areas of a male character.
  • The non-attention area of the current-view subject does not need to be rendered; processing methods such as directly displaying the background can be adopted.
  • In another example, obtaining the invisible area of the three-dimensional scene includes: providing a setting portal for setting the invisible area; and obtaining the invisible area input through the setting portal.
  • That is, the user can customize the invisible area.
  • For example, the user can cancel the particle effects on the screen, that is, set the areas corresponding to particles in the 3D scene as invisible areas.
  • In this way, the invisible area can be obtained directly.
  • In another example, obtaining the invisible area of the three-dimensional scene includes: providing a setting portal for setting the visible area in the three-dimensional scene; obtaining the visible area input through the setting portal; and obtaining the invisible area according to the visible area.
  • That is, the user can customize the visible area, and everything outside the visible area is an invisible area; in other words, the user sets the invisible area inversely.
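  • With a per-pixel mask representation, this inverse setting reduces to taking the complement of the user-specified visible area, as in this small illustrative sketch:

```python
import numpy as np

H, W = 4, 4
visible = np.zeros((H, W), dtype=bool)
visible[1:3, 1:3] = True   # visible area input through the setting portal

invisible = ~visible       # everything outside it is the invisible area
print(invisible)
```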
  • Step S6200: Set the value of the setting data of the three-dimensional scene corresponding to the invisible area to the target value, where the setting data is data representing the visibility of content, and the target value causes the invisible area to be omitted when the three-dimensional scene is rendered.
  • Step S6300: Render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
  • For the specific implementation of steps S6200-S6300, refer to the above description of steps S1200-S1300.
  • This embodiment provides a three-dimensional scene rendering apparatus.
  • The apparatus is, for example, the three-dimensional scene rendering apparatus 700 shown in Figure 9, and includes an occlusion relationship acquisition module 710, a data setting module 720, and a rendering module 730.
  • The occlusion relationship acquisition module 710 may be used to obtain the area of the three-dimensional scene that is occluded by upper-layer graphics.
  • the upper-level graphics are, for example, interface graphics, such as human-computer interaction interface graphics.
  • The data setting module 720 may be used to set the value of the setting data of the three-dimensional scene corresponding to the area to the target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
  • the rendering module 730 may be used to render the three-dimensional scene to the screen according to the setting data of the set three-dimensional scene.
  • In one example, when setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value, the data setting module 720 can be used to: obtain the pixels corresponding to the area; find, in the buffer of the three-dimensional scene for storing the setting data, the value corresponding to each pixel; and modify each value found to the target value.
  • In one example, the setting data is the depth data in the depth buffer, and the target value makes the area unable to pass the depth test in the rendering display.
  • In one example, the test passing condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer; in this case, when setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value, the data setting module 720 can be used to set that value equal to the depth value corresponding to the near clipping plane.
  • In one example, the setting data is the stencil data in the stencil buffer, and the target value makes the area unable to pass the stencil test in the rendering display.
  • In one example, when acquiring the area occluded by the upper-layer graphics in the three-dimensional scene, the occlusion relationship acquisition module 710 may be used to obtain that area according to the RGBA information of the pixels in the upper-layer graphics.
  • In one example, the RGBA information of the pixels in the upper-layer graphics includes the opacity of the pixels in the upper-layer graphics.
  • In one example, the occlusion relationship acquisition module 710 may be used to obtain the area of the three-dimensional scene that is occluded by interface graphics; the data setting module 720 may be used to set the value of the setting data of the three-dimensional scene corresponding to the area to the target value, where the setting data is data representing the visibility of content and the target value causes the area to be omitted when rendering the 3D scene; and the rendering module 730 may be used to render, according to the setting data of the 3D scene after setting, the 3D scene to the screen and also render the interface graphics to the screen.
  • For example, the 3D scene can be rendered to the screen first, and then the interface graphics can be rendered to the screen.
  • In one example, the 3D scene rendering apparatus may further include a drawing module used to draw the interface graphics to the render target buffer, so that when rendering the interface graphics to the screen, the rendering module 730 can draw the interface graphics in the render target buffer to the screen.
  • In one example, when acquiring the area occluded by the interface graphics in the three-dimensional scene, the occlusion relationship acquisition module 710 can be used to: obtain the RGBA information of the pixels in the interface graphics according to the interface graphics in the render target buffer; and obtain, according to the RGBA information, the area of the three-dimensional scene that is occluded by the interface graphics.
  • In one example, the rendering is ray tracing rendering. The occlusion relationship acquisition module 710 can be used to acquire the area of the three-dimensional scene that is occluded by upper-layer graphics; the data setting module 720 can be used to set the value of the setting data of the three-dimensional scene corresponding to the area to the target value, where the setting data is the data representing the visibility of content in ray tracing rendering and the target value causes the area to be omitted when the 3D scene is rendered; and the rendering module 730 can be used to render the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
  • This embodiment provides an electronic device, which includes the three-dimensional scene rendering apparatus of the apparatus embodiment of the present invention; or, the electronic device is the electronic device 800 shown in Figure 10 and includes:
  • a memory 810, used to store executable commands; and
  • a processor 820, configured to execute the three-dimensional scene rendering method of the method embodiments above under the control of the executable commands stored in the memory 810.
  • The electronic device may be any terminal device with a display device, such as a PC, a notebook computer, a tablet computer, a mobile phone, or a head-mounted device; this is not limited here.
  • This embodiment also provides a computer-readable storage medium storing executable commands; when the executable commands are executed by a processor, the method described in the method embodiments of the present invention is executed.
  • The present invention may be a system, a method, and/or a computer program product.
  • The computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present invention.
  • The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, and a mechanical encoding device, such as a punch card with instructions stored thereon.
  • The computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
  • The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • The computer program instructions used to perform the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++ and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present invention.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that when the instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical function. The blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation through hardware, implementation through software, and implementation through a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional scene rendering method and apparatus, and an electronic device. The method includes: obtaining an area of a three-dimensional scene that is occluded by upper-layer graphics (S1100); setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered (S1200); and rendering the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting (S1300).

Description

Three-dimensional scene rendering method and apparatus, and electronic device
This application claims priority to Chinese patent application No. 201910888032.2, filed on September 19, 2019 and entitled "Three-dimensional scene rendering method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of three-dimensional scene display and, more particularly, to a three-dimensional scene rendering method, a three-dimensional scene rendering apparatus, an electronic device, and a computer-readable storage medium.
Background
The construction and display of a three-dimensional scene can generally be divided into two stages: modeling and rendering. Modeling is the use of points, lines, surfaces, textures, materials, and other elements to construct realistic objects and scenes, and is the basis for constructing three-dimensional scenes. Rendering computes and displays the visual image of the model under factors such as viewpoint, lighting, and motion trajectory.
The rendering process of a three-dimensional scene places a heavy performance load on components such as the CPU, GPU, and memory, putting considerable pressure on the device. How to optimize the rendering process of a three-dimensional scene and reduce the consumption of device performance has therefore become a problem to be solved.
Summary of the Invention
An objective of the embodiments of the present invention is to provide a new technical solution for rendering a three-dimensional scene.
According to a first aspect of the present invention, there is provided a method for rendering a three-dimensional scene, including:
obtaining an area of the three-dimensional scene that is occluded by upper-layer graphics;
setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
rendering the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
Optionally, setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value includes:
obtaining the pixels corresponding to the area;
finding, in a buffer of the three-dimensional scene for storing the setting data, the value corresponding to each of the pixels; and
modifying each value found to the target value.
Optionally, the setting data is depth data in a depth buffer, and the target value makes the area unable to pass the depth test in the rendering display.
Optionally, the test passing condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer, and setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value includes:
setting the value of the setting data of the three-dimensional scene corresponding to the area equal to the depth value corresponding to the near clipping plane.
Optionally, the setting data is stencil data in a stencil buffer, and the target value makes the area unable to pass the stencil test in the rendering display.
Optionally, obtaining the area of the three-dimensional scene that is occluded by upper-layer graphics includes:
obtaining, according to the RGBA information of the pixels in the upper-layer graphics, the area of the three-dimensional scene that is occluded by the upper-layer graphics.
Optionally, the RGBA information of the pixels in the upper-layer graphics includes the opacity of the pixels in the upper-layer graphics.
According to a second aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, including:
obtaining an area of the three-dimensional scene that is occluded by interface graphics;
setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
rendering the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting, and rendering the interface graphics to the screen.
Optionally, the method further includes:
drawing the interface graphics to a render target buffer;
and rendering the interface graphics to the screen includes:
drawing the interface graphics in the render target buffer to the screen.
Optionally, obtaining the area of the three-dimensional scene that is occluded by the interface graphics further includes:
obtaining the RGBA information of the pixels in the interface graphics according to the interface graphics in the render target buffer; and
obtaining, according to the RGBA information, the area of the three-dimensional scene that is occluded by the interface graphics.
According to a third aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, wherein the rendering is ray tracing rendering, and the method includes:
obtaining an area of the three-dimensional scene that is occluded by upper-layer graphics;
setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content in ray tracing rendering, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
rendering the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
According to a fourth aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, including:
in response to a set display refresh event, loading data of a next frame of the three-dimensional scene that matches a user operation;
obtaining, according to the data, an area in the next frame of the three-dimensional scene that is occluded by a next frame of interface graphics;
setting the value of setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered; and
rendering the next frame of the three-dimensional scene to the screen according to the setting data of the next frame of the three-dimensional scene after setting, and then rendering the next frame of interface graphics to the screen.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-sensing operation, and a gravity-sensing operation.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the display refresh event includes at least one of: expiration of a refresh time determined according to a set frame rate, receipt of an externally triggered refresh instruction, and restoration of the network connection with a server.
According to a fifth aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, including:
in response to a set display refresh event, loading data of a next frame of interface graphics that matches a user operation;
obtaining, according to the data, an area in a next frame of the three-dimensional scene that is occluded by the next frame of interface graphics;
setting the value of setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing the visibility of content, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered; and
rendering the next frame of the three-dimensional scene to the screen according to the setting data of the next frame of the three-dimensional scene after setting, and then rendering the next frame of interface graphics to the screen.
Optionally, the three-dimensional scene is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-sensing operation, and a gravity-sensing operation.
According to a sixth aspect of the present invention, there is also provided a method for rendering a three-dimensional scene, including:
obtaining an invisible area of the three-dimensional scene;
setting the value of setting data of the three-dimensional scene corresponding to the invisible area to a target value, where the setting data is data representing the visibility of content, and the target value causes the invisible area to be omitted when the three-dimensional scene is rendered; and
rendering the three-dimensional scene to the screen according to the setting data of the three-dimensional scene after setting.
Optionally, obtaining the invisible area of the three-dimensional scene includes any one or a combination of any of the following:
obtaining an area of the three-dimensional scene that is occluded by upper-layer graphics;
obtaining an area corresponding to objects in the three-dimensional scene whose imaging size is smaller than a preset threshold; and
obtaining a non-attention area of the current-view subject in the three-dimensional scene.
Optionally, obtaining the invisible area of the three-dimensional scene includes:
providing a setting portal for setting the invisible area; and
obtaining the invisible area input through the setting portal.
Optionally, obtaining the invisible area of the three-dimensional scene includes:
providing a setting portal for setting a visible area in the three-dimensional scene;
obtaining the visible area input through the setting portal; and
obtaining the invisible area according to the visible area.
According to a seventh aspect of the present invention, a rendering apparatus for a three-dimensional scene is further provided, including:
an occlusion relationship acquisition module, configured to acquire an area in the three-dimensional scene that is occluded by an upper-layer graphic;
a data setting module, configured to set the value of setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
a rendering module, configured to render the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene.
According to an eighth aspect of the present invention, an electronic device is further provided, including the apparatus described in the seventh aspect of the present invention; or, the device includes:
a memory, configured to store executable commands; and
a processor, configured to execute, under the control of the executable commands, the method described in any one of the first to fifth aspects of the present invention.
According to a ninth aspect of the present invention, a computer-readable storage medium is further provided, storing executable commands that, when executed by a processor, perform the methods described in the first to fifth aspects of the present invention.
In the rendering method of the embodiments of the present invention, the area in the three-dimensional scene occluded by the upper-layer graphic is first acquired, and then the visibility data corresponding to the occluded area is set so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
Other features and advantages of the present invention will become clear from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of a terminal device displaying a three-dimensional scene;
Fig. 2 is a schematic diagram of a rendering display process;
Fig. 3 is a schematic diagram of a hardware configuration that can be used to implement the three-dimensional scene rendering method of an embodiment of the present invention;
Fig. 4 is a flowchart of a three-dimensional scene rendering method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the principle of the depth test;
Fig. 6 is a flowchart of another three-dimensional scene rendering method provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of an example provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of another example provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of a three-dimensional scene rendering apparatus provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of an electronic device provided by an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present invention.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or uses.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In all examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item has been defined in one figure, it need not be further discussed in subsequent figures.
<General Concept>
In the rendering and display of a three-dimensional scene, the three-dimensional scene may be occluded by other upper-layer graphics. For example, when the three-dimensional scene in Fig. 1 is displayed by a terminal device, a dialog box for an exit operation is displayed on top of the three-dimensional scene. In addition, the text "玩家:张三" ("Player: Zhang San") is displayed above the head of the character in the three-dimensional scene, and this text also always lies above the three-dimensional scene. The areas of the three-dimensional scene occluded by upper-layer graphics such as interface graphics and overhead text will not ultimately be displayed on the screen; if rendering of these areas is avoided in advance, the corresponding hardware overhead can be reduced and rendering performance improved.
In addition, rendering display includes test mechanisms that accept or discard fragments of the three-dimensional scene. As shown in Fig. 2, fragments of the three-dimensional scene can undergo a depth test, a stencil test, and so on in sequence; only fragments that pass the tests go through subsequent processing and are finally displayed on the screen. A fragment that fails any one of the tests is discarded, i.e., no further processing is performed on it. These tests are performed based on the buffer data corresponding to the three-dimensional scene, for example, the depth values in the depth buffer and the stencil values in the stencil buffer. By setting and modifying the values of these data, it is possible to control whether the corresponding fragments are rendered to the screen.
Based on the above two points, the embodiments of the present invention provide a rendering method that first acquires the area in the three-dimensional scene occluded by the upper-layer graphic, and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and improving rendering performance.
<Hardware Configuration>
Fig. 3 shows a schematic diagram of a hardware device that can be used to implement embodiments of the present invention.
As shown in Fig. 3, the hardware device is a terminal device 100, including a processor 110, a memory 120, a communication apparatus 130, an interface apparatus 140, an input apparatus 150, and a display apparatus 160.
The processor 110 may be a central processing unit (CPU), a microcontroller (MCU), or the like. The memory 120 includes, for example, ROM (read-only memory), RAM (random access memory), and non-volatile memory such as a hard disk. The communication apparatus 130 is capable of, for example, wired or wireless communication. The interface apparatus 140 includes, for example, a USB interface, a headphone interface, and the like. The input apparatus 150 may include, for example, a touch screen, a keyboard, and the like. The display apparatus 160 may be used to display the three-dimensional scene and includes, for example, a liquid crystal display, a touch display, and the like.
In this embodiment, the display apparatus 160 further includes a graphics card, and the graphics card includes a graphics processing unit (GPU) dedicated to graphics rendering.
The three-dimensional scene rendering method provided by the embodiments of the present invention may be executed by the processor 110 or by the GPU; this is not limited here.
The terminal device 100 may be any device that supports three-dimensional scene display, such as a smartphone, a portable computer, a desktop computer, or a tablet computer.
As applied to the embodiments of the present invention, the memory 120 of the terminal device 100 is used to store instructions, and the instructions are used to control the processor 110 or the GPU to operate so as to support implementation of the three-dimensional scene rendering method according to any embodiment of the present invention. Those skilled in the art can design instructions according to the solution disclosed by the present invention. How instructions control the processor to operate is well known in the art and is therefore not described in detail here.
Those skilled in the art should understand that, although multiple apparatuses of the terminal device 100 are shown in Fig. 3, the terminal device 100 of the embodiments of the present invention may involve only some of them, for example, only the processor 110, the memory 120, and the display apparatus 160.
The hardware device shown in Fig. 3 is merely explanatory and is in no way intended to limit the present invention, its application, or uses.
<Method Embodiments>
This embodiment provides a method for rendering a three-dimensional scene, implemented for example by the terminal device 100 in Fig. 3. As shown in Fig. 4, the method includes the following steps S1100-S1300:
Step S1100: acquire the area in the three-dimensional scene that is occluded by the upper-layer graphic.
The three-dimensional scene in this embodiment is a stereoscopic image displayed by means of three-dimensional technology, for example, a stereoscopic picture in a game, or a stereoscopic picture in a product display.
In one example, the display processing of the three-dimensional scene is divided into two stages, modeling and rendering, where modeling establishes a three-dimensional model containing the key parameters of objects, and rendering adds effects such as viewing angle, lighting and shadow, and texture to the established three-dimensional model and displays it on the screen.
In this embodiment, the upper-layer graphic is a graphic that is displayed above the three-dimensional scene and occludes it, for example, an interface graphic in a user interface, the overhead text of a character in a three-dimensional game, or a "small window" used for video playback. The upper-layer graphic in this embodiment may be a two-dimensional graphic or a three-dimensional graphic; this is not limited here.
In this embodiment, the occlusion caused by the upper-layer graphic makes the occluded area invisible or almost invisible when finally displayed on the screen.
In one example, the specific implementation of acquiring the area in the three-dimensional scene occluded by the upper-layer graphic is: obtaining the area in the three-dimensional scene occluded by the upper-layer graphic according to the RGBA information of the pixels in the upper-layer graphic.
A pixel is the smallest indivisible unit or element of an image. Each pixel in the upper-layer graphic has corresponding RGBA information, where R represents red, G represents green, B represents blue, and A represents opacity (alpha).
In this example, since the upper-layer graphic always lies above the three-dimensional scene, the RGBA information of a pixel in the upper-layer graphic can be used to determine whether the three-dimensional scene is visible at that pixel position, i.e., to obtain the occlusion relationship between the upper-layer graphic and the three-dimensional scene.
In a more specific example, the area in the three-dimensional scene occluded by the upper-layer graphic can be acquired according to the opacity of the pixels in the upper-layer graphic.
The opacity of a pixel reflects the pixel's own degree of transparency or visibility. For a fully transparent pixel in the upper-layer graphic, the three-dimensional scene below can be displayed through that pixel, so the pixel does not occlude the three-dimensional scene. For a fully opaque pixel in the upper-layer graphic, the three-dimensional scene below cannot show through that pixel, so the pixel occludes the three-dimensional scene. For a semi-transparent pixel, whether the pixel occludes the three-dimensional scene can be determined according to the pixel's specific degree of opacity.
In this example, the opacity of a pixel ranges from 0% to 100%, where an opacity of 0% means the pixel is fully transparent, an opacity of 100% means the pixel is fully opaque, and an opacity in between means the pixel is semi-transparent.
In other more specific examples, the area in the three-dimensional scene occluded by the upper-layer graphic can also be acquired according to the RGB color information of the pixels in the upper-layer graphic. For example, a pixel in the upper-layer graphic whose RGB color is pure black may be regarded as occluding the three-dimensional scene below. The specific criterion can be determined according to the actual situation and is not limited here.
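To make the per-pixel judgment above concrete, the following is a minimal sketch, not taken from the patent itself, of building an occlusion mask from the RGBA pixels of the upper-layer graphic. It assumes an 8-bit RGBA buffer in row-major order; the function name build_occlusion_mask and the threshold parameter are illustrative assumptions, since the patent only requires that opacity (and optionally RGB color) decide whether a pixel occludes the scene.

    #include <cstdint>
    #include <vector>

    std::vector<bool> build_occlusion_mask(const std::vector<std::uint8_t>& rgba,
                                           int width, int height,
                                           std::uint8_t alpha_threshold = 255) {
        std::vector<bool> mask(static_cast<std::size_t>(width) * height, false);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const std::size_t px = static_cast<std::size_t>(y) * width + x;
                const std::uint8_t a = rgba[px * 4 + 3];  // alpha channel
                // A pixel at or above the threshold is treated as fully opaque,
                // so the scene is considered occluded at this position.
                mask[px] = (a >= alpha_threshold);
            }
        }
        return mask;
    }

Lowering alpha_threshold would also classify sufficiently opaque semi-transparent pixels as occluding, matching the semi-transparency discussion above.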
After the area in the three-dimensional scene occluded by the upper-layer graphic is acquired, the following step S1200 is executed:
Step S1200: set the value of the setting data of the three-dimensional scene corresponding to the occluded area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
In this embodiment, the setting data is data representing content visibility. For example, when the value of the setting data takes a first value, the area of the three-dimensional scene corresponding to the setting data is visible, i.e., that area is rendered to the screen. When the value of the setting data takes a second value different from the first value, the area of the three-dimensional scene corresponding to the setting data is invisible, i.e., that area is not rendered to the screen.
In this embodiment, the setting data is, for example, depth data in the depth buffer, stencil data in the stencil buffer, data representing content visibility in ray-tracing rendering, and so on.
In this embodiment, the target value of the setting data causes the occluded area of the three-dimensional scene to be omitted during rendering. For example, the target value is the second value above; by setting the setting data corresponding to the occluded area to this target value, the occluded area can be kept from being rendered to the screen, i.e., the occluded area is omitted during rendering.
In one example, the setting the value of the setting data of the three-dimensional scene corresponding to the occluded area to a target value includes: acquiring the pixels corresponding to the occluded area; looking up, in the buffer of the three-dimensional scene used for storing the setting data, the value corresponding to each pixel; and modifying each value found to the target value.
In this example, the area in the three-dimensional scene occluded by the upper-layer graphic corresponds to a plurality of pixels, and for each of these pixels the three-dimensional scene is occluded by the upper-layer graphic at that pixel position.
When acquiring the area occluded by the upper-layer graphic in step S1100, pixel-by-pixel processing can be used: each pixel of the three-dimensional scene is traversed, the occlusion of the three-dimensional scene by the upper-layer graphic is judged at each pixel position, and the pixels at which the three-dimensional scene is occluded are collected, which yields the area in the three-dimensional scene occluded by the upper-layer graphic. In this way, the pixels corresponding to the occluded area are obtained at the same time.
In this example, the buffer of the three-dimensional scene used for storing the setting data is a dedicated region of memory for storing this setting data, for example, the depth buffer or the stencil buffer. By reading and writing this buffer, the value of the setting data corresponding to each pixel can be looked up and modified to the target value.
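The following is a minimal CPU-side sketch of this look-up-and-overwrite step, assuming the setting data is a per-pixel float buffer (for example, a software depth buffer) and that mask is the occlusion mask from step S1100; the names are illustrative, and a real renderer would perform the equivalent writes through its graphics API.

    #include <vector>

    void write_target_value(std::vector<float>& setting_data,
                            const std::vector<bool>& mask,
                            float target_value) {
        for (std::size_t i = 0; i < setting_data.size(); ++i) {
            if (mask[i]) {                       // pixel belongs to the occluded area
                setting_data[i] = target_value;  // e.g. the near-plane depth 0.0f
            }
        }
    }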
In one example, the setting data in step S1200 is depth data, and the target value in step S1200 causes the occluded area to fail the depth test in the rendering display, i.e., rendering of the occluded area is avoided on the basis of the depth test.
In this example, the depth test is a process of accepting or discarding display content based on depth data, and it solves the visibility problem under a given viewing angle caused by mutual occlusion between three-dimensional models.
Referring to Fig. 5, from left to right in Fig. 5 are the (virtual) camera of the rendering display, plane P1, and plane P2, where both P1 and P2 are models in the three-dimensional scene. A1 and A2 are points on planes P1 and P2 respectively, and both A1 and A2 are imaged at pixel A of the camera. The distance from A1 to the camera is the depth Z1 of A1, and the distance from A2 to the camera is the depth Z2 of A2. It is easy to see that A1 is closer to the camera than A2, i.e., Z1 is less than Z2; therefore, during imaging, A1 occludes A2, so that A2 is not displayed at pixel A.
For the situation shown in Fig. 5, the specific process of the depth test is, for example, as follows. First, the depth data corresponding to pixel A in the depth buffer is initialized, for example, set to the depth value Max of the far clipping plane; here the far clipping plane is the farthest drawing position relative to the camera, and similarly the near clipping plane is the nearest drawing position relative to the camera. Then, when the specific content of the three-dimensional scene is drawn, the depth value of the point to be drawn is compared with the current depth value in the depth buffer; if it is smaller, the point is drawn and the depth value in the depth buffer is updated to the depth value of that point; otherwise, the point is discarded without further processing. For example, when A1 is drawn first, the depth value Z1 of A1 is compared with the initial depth value Max; since Z1 < Max, (the color of) A1 is drawn at pixel A, and the depth value in the depth buffer is updated to Z1. When A2 is then drawn, the depth value Z2 of A2 is compared with the current depth value Z1 in the depth buffer; since Z2 > Z1, A2 is discarded and not drawn. In this way, the depth test causes the point of the three-dimensional scene closest to the camera to be drawn on the screen, while points occluded by it are not drawn.
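The following is a minimal sketch of the Fig. 5 depth-test logic with a one-pixel "depth buffer"; the values and the names Max and plot_point are illustrative, and depths are assumed normalized so that a smaller value means closer to the camera.

    #include <cstdio>

    int main() {
        const float Max = 1.0f;    // far clipping plane
        float depth_buffer = Max;  // initialized to the far plane
        int shown_point = 0;       // which point is currently visible at pixel A

        auto plot_point = [&](int id, float z) {
            if (z < depth_buffer) {  // pass condition: depth under test < current depth
                depth_buffer = z;    // update the depth buffer
                shown_point = id;    // draw this point at pixel A
            }                        // otherwise the point is discarded
        };

        plot_point(1, 0.3f);  // A1 with Z1 = 0.3: passes and is drawn
        plot_point(2, 0.7f);  // A2 with Z2 = 0.7: fails (0.7 > 0.3), discarded
        std::printf("pixel A shows point A%d\n", shown_point);  // -> A1
        return 0;
    }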
In this example, the depth test is applied to the occlusion display of the three-dimensional scene and the upper-layer graphic. For the area in the three-dimensional scene occluded by the upper-layer graphic, the value of the depth data in the depth buffer corresponding to the occluded area is set to a target value that prevents the occluded area from passing the depth test.
In this example, the value of the depth data in the depth buffer corresponding to the occluded area can be set on the basis of the pixels corresponding to the occluded area. For example, if the occluded area includes three pixels X, Y, and Z, and the depth buffer contains depth data x, y, and z corresponding to pixels X, Y, and Z respectively, then setting the values of x, y, and z to the target value accomplishes the value setting of the depth data corresponding to the occluded area.
In this example, the target value is a value that prevents the occluded area from passing the depth test, i.e., the target value causes every pixel of the occluded area to fail the depth test. For example, when the pass condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer, the target value is chosen as 0; that is, the initial value of the depth data in the depth buffer for each pixel of the occluded area is set to 0. Here the depth data is normalized, with the depth of the near clipping plane being 0 and that of the far clipping plane being 1. Thus, since the depth value of every point in the three-dimensional scene is greater than or equal to 0, for each pixel of the occluded area no point on the three-dimensional model satisfies the pass condition of having a depth value less than 0, and none can be drawn on the screen; that is, the pixel cannot pass the depth test.
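As one possible realization, the following OpenGL sketch (an assumption for illustration; the patent does not prescribe any particular graphics API) writes the near-plane depth 0.0 into a rectangular occluded region before the scene is drawn, so that every later fragment there fails the GL_LESS depth test. It assumes a current GL context; x, y, w, and h describe the region in window coordinates (origin at the bottom-left in GL), e.g. the rectangle covered by an opaque dialog box.

    #include <GL/gl.h>

    void mask_depth_for_occluded_rect(int x, int y, int w, int h) {
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, w, h);         // restrict the clear to the occluded rect
        glClearDepth(0.0);             // near-plane depth, normalized to [0, 1]
        glClear(GL_DEPTH_BUFFER_BIT);  // depth buffer now holds 0 inside the rect
        glDisable(GL_SCISSOR_TEST);

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);          // pass condition: incoming depth < stored depth
        // Scene fragments inside the rect have depth >= 0 and therefore all fail.
    }

For a non-rectangular occluded area, the same effect could be obtained by drawing the mask shape at the near plane with color writes disabled, or by using the stencil variant described below.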
In this example, the target value can be determined according to the specific test pass condition, and is not limited here.
In this example, the depth test effectively avoids rendering the area of the three-dimensional scene occluded by the upper-layer graphic, with low performance cost.
In one example, the setting data in step S1200 is stencil data in the stencil buffer, and the target value in step S1200 causes the occluded area to fail the stencil test in the rendering display, i.e., rendering of the occluded area is avoided on the basis of the stencil test.
In this example, the stencil test is a process of accepting or discarding display content based on stencil data. Its implementation is similar to the depth test: the stencil value of a point on the three-dimensional model is compared with the current stencil value in the stencil buffer, and points that satisfy the test pass condition are drawn at the corresponding pixels. The stencil test can be used to realize effects such as displaying the outlines of objects in a three-dimensional scene. For example, the stencil values corresponding to an object's outline area are set to 1 in advance, and the stencil values of the non-outline area are set to other values; the pass condition of the stencil test is set to be that the stencil value under test equals the current stencil value in the stencil buffer, and the initial stencil value in the stencil buffer is set to 1. It is easy to see that only the outline area passes the stencil test and is drawn on the screen.
In this example, the stencil test is applied to the occlusion display of the three-dimensional scene and the upper-layer graphic. For the area in the three-dimensional scene occluded by the upper-layer graphic, the value of the stencil data in the stencil buffer corresponding to the occluded area is set to a target value that prevents the occluded area from passing the stencil test. For example, when the stencil values of the three-dimensional scene are all greater than 0 and the pass condition of the stencil test is that the stencil value under test equals the current stencil value in the stencil buffer, the initial value of the stencil data in the stencil buffer corresponding to the occluded area is set to -1. In this way, the occluded area cannot pass the stencil test and is not rendered to the screen.
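The following OpenGL sketch is one possible mapping of this stencil variant; since GL stencil values are unsigned, 0 plays the role of the patent's -1 here. It assumes a current GL context with a stencil buffer, and, for simplicity, a rectangular occluded region in window coordinates.

    #include <GL/gl.h>

    void mask_stencil_for_occluded_rect(int x, int y, int w, int h) {
        // Initialize the whole stencil buffer to 1 (the "visible" value)...
        glClearStencil(1);
        glClear(GL_STENCIL_BUFFER_BIT);

        // ...then write 0 into the occluded rectangle.
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, w, h);
        glClearStencil(0);
        glClear(GL_STENCIL_BUFFER_BIT);
        glDisable(GL_SCISSOR_TEST);

        // Pass condition: the fragment's reference value equals the stored value 1.
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 1, 0xFF);        // ref = 1, compare all bits
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);  // test only, never overwrite
        // Scene fragments inside the rect meet stored value 0 and all fail.
    }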
In this example, the stencil test effectively avoids rendering the occluded area, with low performance cost.
It should be noted that the setting data can be set to the target value on the basis of either the depth test or the stencil test, or on the basis of both tests at the same time; this is not limited here.
After the setting data is set to the target value, the following step S1300 is executed:
Step S1300: render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene.
In this embodiment, the three-dimensional scene is rendered according to the set setting data of the three-dimensional scene. It is easy to see that the area of the three-dimensional scene occluded by the upper-layer graphic is omitted during rendering, while the areas not occluded by the upper-layer graphic are rendered to the screen in the normal way. For the specifics of the rendering, see the depth test and stencil test processes involved in the description of the target value above, which are not repeated here.
The rendering method of this embodiment first acquires the area in the three-dimensional scene occluded by the upper-layer graphic, and then sets the visibility data corresponding to the occluded area so that the area is omitted during rendering, thereby avoiding unnecessary overhead and helping to improve rendering performance.
This embodiment further provides another method for rendering a three-dimensional scene, applicable to the case where an interface graphic and a three-dimensional scene are displayed together, which can likewise be implemented by the terminal device 100 shown in Fig. 3. As shown in Fig. 6, the method includes the following steps S2100-S2300:
Step S2100: acquire the area in the three-dimensional scene that is occluded by the interface graphic.
Step S2200: set the value of the setting data of the three-dimensional scene corresponding to the occluded area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
Step S2300: render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene, and render the interface graphic to the screen.
For the specific implementation of steps S2100-S2300, see the description of steps S1100-S1300 above.
The interface graphic in this embodiment is a graphic of the interactive interface in human-computer interaction. The interface graphic is displayed above the three-dimensional scene.
In one example, the three-dimensional graphics rendering method provided by this embodiment further includes: drawing the interface graphic into a render target buffer. In this case, the step of rendering the interface graphic to the screen in step S2300 includes: drawing the interface graphic in the render target buffer to the screen.
In this example, the render target buffer is a memory region in the rendering display that can be used to store drawn graphics. Drawing the interface graphic into the render target buffer first, and then drawing the interface graphic in the render target buffer to the screen, facilitates synchronized display of the interface graphic and the three-dimensional scene on the screen. One possible setup is sketched below.
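A minimal OpenGL sketch of creating such a render target buffer follows: an RGBA texture attached to a framebuffer object. This is an assumption for illustration (any API with offscreen render targets would do), and it presumes a current GL context with the GL 3.0 entry points made available by a loader such as GLAD; keeping the alpha channel is what later lets the occluded area be derived from the drawn interface graphic.

    #include <glad/glad.h>

    GLuint create_ui_render_target(int width, int height, GLuint* out_texture) {
        GLuint tex = 0, fbo = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // RGBA keeps opacity
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // a real renderer would report the error here
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        *out_texture = tex;
        return fbo;
    }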
In other embodiments, the interface graphic can also be drawn directly to the screen; this is not limited here.
In this example, the acquiring in step S2100 of the area in the three-dimensional scene occluded by the interface graphic includes: obtaining the RGBA information of the pixels in the interface graphic according to the interface graphic in the render target buffer; and obtaining, according to the RGBA information, the area in the three-dimensional scene occluded by the interface graphic. That is, in this example the RGBA information of the interface graphic is acquired on the basis of the interface graphic drawn into the render target buffer. In this way, the RGBA information of the interface graphic as finally displayed can be conveniently obtained.
The implementation of the three-dimensional scene rendering method of this embodiment is explained below, taking Fig. 7 as an example. In the example shown in Fig. 7, an interface graphic 7-2 for an exit operation is displayed above a three-dimensional scene 7-1, and the final display effect is shown as 7-5 in Fig. 7. When this three-dimensional scene is rendered, the area 7-3 of the three-dimensional scene 7-1 occluded by the interface graphic 7-2 is first acquired. The interface graphic 7-2 is first drawn into the render target buffer (it should be noted that the graphics in Fig. 7 are only used to demonstrate the rendering process and do not mean that the graphics in question have already been displayed on the screen). The interface graphic 7-2 is a rectangle displaying the text prompt "是否退出" ("Exit?") and selection buttons for "是" ("Yes") and "否" ("No"), and the opacity of all pixels of the interface graphic 7-2 is 100%; that is, all pixels of this interface graphic are opaque and occlude the three-dimensional scene below. From this, the area 7-3 of the three-dimensional scene occluded by the interface graphic 7-2 can be obtained. The values of the setting data of the three-dimensional scene corresponding to the area 7-3 are then modified; for example, the depth values in the depth buffer of the three-dimensional scene 7-1 corresponding to the area 7-3 are modified to 0, so that no point on the corresponding three-dimensional model satisfies the pass condition of a depth value less than 0, and none can be drawn to the screen. Then the three-dimensional scene is rendered according to the set setting data; the resulting render is shown as 7-4, where it can be seen that the area 7-3 of the three-dimensional scene occluded by the interface graphic 7-2 is not rendered, appearing as the black area in 7-4. While the three-dimensional scene is being rendered, the interface graphic 7-2 in the render target buffer is also drawn to the screen, so that the three-dimensional scene 7-1 and the interface graphic 7-2 are displayed together; the final render is shown as 7-5.
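The following is a minimal sketch of the Fig. 7 frame flow in OpenGL, under the same assumptions as the earlier sketches (current GL context, GL 3.0 entry points loaded via a loader such as GLAD, a ui_fbo created as above, an opaque rectangular dialog at ui_x/ui_y/ui_w/ui_h, and caller-supplied draw_ui and draw_scene callbacks — all illustrative assumptions, not the patent's required API).

    #include <glad/glad.h>
    #include <functional>

    void render_frame(GLuint ui_fbo, int screen_w, int screen_h,
                      int ui_x, int ui_y, int ui_w, int ui_h,
                      const std::function<void()>& draw_ui,
                      const std::function<void()>& draw_scene) {
        // 1. Draw the interface graphic into the render target buffer,
        //    clearing to alpha 0 so untouched pixels mean "no occlusion".
        glBindFramebuffer(GL_FRAMEBUFFER, ui_fbo);
        glViewport(0, 0, screen_w, screen_h);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        draw_ui();

        // 2. On the default framebuffer: clear depth to the far plane,
        //    then write the near-plane depth 0 into the occluded rectangle.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, screen_w, screen_h);
        glClearDepth(1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_SCISSOR_TEST);
        glScissor(ui_x, ui_y, ui_w, ui_h);
        glClearDepth(0.0);
        glClear(GL_DEPTH_BUFFER_BIT);
        glDisable(GL_SCISSOR_TEST);

        // 3. Render the scene; every fragment inside the rectangle fails
        //    the GL_LESS depth test and is omitted (the black area in 7-4).
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        draw_scene();

        // 4. Composite the opaque dialog rectangle over the scene (7-5).
        glBindFramebuffer(GL_READ_FRAMEBUFFER, ui_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(ui_x, ui_y, ui_x + ui_w, ui_y + ui_h,
                          ui_x, ui_y, ui_x + ui_w, ui_y + ui_h,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }

An interface graphic with semi-transparent pixels would instead be composited as an alpha-blended textured quad, with the occlusion mask built only from pixels at or above the opacity threshold.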
This embodiment further provides another method for rendering a three-dimensional scene, in which the rendering is ray-tracing rendering. The method can likewise be implemented by the terminal device 100 shown in Fig. 3 and includes the following steps S3100-S3300:
Step S3100: acquire the area in the three-dimensional scene that is occluded by the upper-layer graphic.
The upper-layer graphic may, for example, be an interface graphic, or the overhead text of a character in a game application, and so on; this is not limited here.
Step S3200: set the value of the setting data of the three-dimensional scene corresponding to the occluded area to a target value, where the setting data is data representing content visibility in ray-tracing rendering, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
Step S3300: render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene, and render the interface graphic to the screen.
For the specific implementation of steps S3100-S3300, see the description of steps S1100-S1300 above.
In this embodiment, ray-tracing rendering is a rendering method for three-dimensional scenes that obtains the color of the three-dimensional model displayed at each pixel from rays cast from the observer, and can achieve more accurate three-dimensional display effects.
In this embodiment, the setting data is data representing content visibility in ray-tracing rendering, and it determines whether ray-tracing processing is performed for a pixel. For example, in one example the setting data can take the value 0 or 1: a value of 1 for a pixel means ray-tracing processing is performed for that pixel, and a value of 0 means it is not. It is easy to see that in this example the target value for the area of the three-dimensional scene occluded by the upper-layer graphic is 0, and the value for the unoccluded area is 1. In this way, rendering of the occluded area can be avoided in ray-tracing rendering, thereby improving the rendering performance of ray tracing.
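The following is a minimal sketch of such a visibility-masked ray tracer, assuming the 0/1 mask of the example above; trace_primary_ray stands in for the actual tracing of one pixel and is supplied by the caller (all names are illustrative). Skipped pixels keep a placeholder color, since the opaque upper-layer graphic will cover them anyway, which saves the whole ray tree those pixels would otherwise cost.

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct Color { float r, g, b; };

    void trace_image(const std::vector<std::uint8_t>& visibility,  // 0 or 1 per pixel
                     int width, int height,
                     const std::function<Color(int, int)>& trace_primary_ray,
                     std::vector<Color>& framebuffer) {
        framebuffer.assign(static_cast<std::size_t>(width) * height,
                           Color{0.0f, 0.0f, 0.0f});
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const std::size_t i = static_cast<std::size_t>(y) * width + x;
                if (visibility[i] == 1) {               // occluded pixels (0) are
                    framebuffer[i] = trace_primary_ray(x, y);  // skipped entirely
                }
            }
        }
    }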
This embodiment further provides yet another method for rendering a three-dimensional scene, implemented for example by the terminal device 100 shown in Fig. 3, and including the following steps S4100-S4400:
Step S4100: in response to a set display refresh event, load the data of the next frame of the three-dimensional scene that matches the user operation.
In an example where the rendering method of this embodiment is applied to the rendering of a three-dimensional game scene, the set display refresh event may include, for example, at least one of expiry of a refresh time determined according to a set frame rate, receipt of an externally triggered refresh instruction, and restoration of the network connection with a server.
For example, if the set frame rate is 60 frames per second, i.e., 60 refreshes per second, then when the refresh time determined by this frame rate expires, the data of the next frame of the three-dimensional scene matching the user operation is loaded and rendered so as to refresh the display (a minimal loop sketch follows these examples).
For another example, when the user triggers a refresh instruction via a refresh button provided in the game interface, the terminal device 100 will likewise, in response to this refresh instruction, load and render the data of the next frame of the three-dimensional scene matching the user operation so as to refresh the display.
For yet another example, after the terminal device 100 is disconnected from the server due to a network anomaly, the terminal device 100 will, upon restoring the network connection with the server, obtain the latest base data from the server; it will then, based on this base data, load and render the data of the next frame of the three-dimensional scene matching the user operation so as to refresh the display.
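The following is a minimal sketch of the fixed-frame-rate case above (60 frames per second); load_next_frame, render_frame, and keep_running are caller-supplied stand-ins, and a real engine would also handle input and dropped frames here.

    #include <chrono>
    #include <functional>
    #include <thread>

    void run_at_60fps(const std::function<void()>& load_next_frame,
                      const std::function<void()>& render_frame,
                      const std::function<bool()>& keep_running) {
        using clock = std::chrono::steady_clock;
        const auto frame_time = std::chrono::microseconds(1000000 / 60);  // ~16.7 ms
        auto next_refresh = clock::now() + frame_time;
        while (keep_running()) {
            load_next_frame();  // data matching the latest user operation
            render_frame();     // steps S4200-S4400 happen inside here
            std::this_thread::sleep_until(next_refresh);  // wait for the refresh time
            next_refresh += frame_time;
        }
    }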
In one example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-motion sensing operation, and a gravity sensing operation.
In this embodiment, a particular user operation causes the next frame of the three-dimensional scene and/or the next frame of the interface graphic to change. For example, as shown in Fig. 8, in 8-1 the current display content contains only the three-dimensional scene; if the user then performs a key-tap operation, this operation calls up the exit interface shown in 8-2, and this exit interface is displayed as the next frame of the interface graphic, i.e., the next frame of the interface graphic changes relative to the current interface graphic. For another example, if the user operates a game character to move within the three-dimensional scene, then the next frame of the three-dimensional scene will depend on the movement position and direction of the game character, i.e., it matches the user operation.
Step S4200: acquire, according to the loaded data, the area in the next frame of the three-dimensional scene that is occluded by the next frame of the interface graphic.
In the case where the user operation changes the interface graphic, the next frame of the interface graphic will also match the user operation.
Step S4300: set the value of the setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered.
Step S4400: render the next frame of the three-dimensional scene to the screen according to the set setting data of the next frame of the three-dimensional scene, and then render the next frame of the interface graphic to the screen.
Through step S4400, the refresh of the next frame of display content is completed.
For the above steps S4200-S4400, see the description of steps S1100-S1300 above.
This embodiment further provides yet another method for rendering a three-dimensional scene, implemented for example by the terminal device 100 shown in Fig. 3, and including the following steps S5100-S5400:
Step S5100: in response to a set display refresh event, acquire the next frame of the interface graphic that matches the user operation.
In one example, the three-dimensional scene in this embodiment is a three-dimensional game scene, and the user operation includes any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-motion sensing operation, and a gravity sensing operation.
In this embodiment, a particular user operation corresponds to a particular interface graphic. For example, as shown in Fig. 8, the exit interface shown in 8-2 corresponds to the user's key-tap operation in 8-1.
In this embodiment, the terminal device responds to the user operation by acquiring, according to a preset correspondence, the interface graphic information corresponding to that user operation, for example, parameters such as the content, position, size, and opacity of the interface graphic.
Step S5200: acquire the area in the next frame of the three-dimensional scene that is occluded by the next frame of the interface graphic.
In the case where the user operation changes the three-dimensional scene, the next frame of the three-dimensional scene will also match the user operation.
Step S5300: set the value of the setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered.
Step S5400: render the next frame of the three-dimensional scene to the screen according to the set setting data of the next frame of the three-dimensional scene, and then render the next frame of the interface graphic to the screen.
Continuing the example shown in Fig. 8: if, before the display refresh, the user taps a key to trigger an exit operation, then at refresh time the terminal device 100 acquires, according to the detected key-tap operation, the interface graphic corresponding to that operation as the next frame of the interface graphic, for example the exit interface shown in 8-2. In rendering the next frame, it acquires the area in the next frame of the three-dimensional scene occluded by this exit interface, and sets the value of the setting data of the next frame of the three-dimensional scene corresponding to that area to the target value, so that the area is omitted when the next frame of the three-dimensional scene is rendered. In this example, after the rendering and display of the next frame of the three-dimensional scene and the next frame of the interface graphic are completed according to step S5400, the display picture 8-3 is obtained.
For the above steps S5200-S5400, see the description of steps S1100-S1300 above.
This embodiment further provides yet another method for rendering a three-dimensional scene, implemented for example by the terminal device 100 shown in Fig. 3, and including the following steps S6100-S6300:
Step S6100: acquire the invisible area of the three-dimensional scene.
In one example, acquiring the invisible area of the three-dimensional scene includes any one or a combination of the following: acquiring the area in the three-dimensional scene occluded by an upper-layer graphic; acquiring the area corresponding to an object in the three-dimensional scene whose imaged size is smaller than a preset threshold; and acquiring the area of the three-dimensional scene that is not of interest to the current viewing subject.
In this example, for acquiring the area in the three-dimensional scene occluded by the upper-layer graphic, see the description of step S1100 above.
In this example, the imaged size is the size of the image formed on the screen by an object in the three-dimensional scene. The imaged size of an object can be obtained from information such as the distance from the object to the screen and the object's own size. The preset threshold is a preset imaged size. When the imaged size of an object is smaller than the preset threshold, the size of the object's image is significantly smaller than that of the whole three-dimensional scene, and the area corresponding to the object can be regarded as an invisible area. During three-dimensional rendering, no rendering computation need be performed for the areas corresponding to such objects; processing such as directly displaying the background can be adopted.
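The following is a minimal sketch of the imaged-size criterion, assuming a pinhole camera with a known vertical field of view; the formula and names are illustrative assumptions. An object of radius radius at distance distance (distance > 0) spans roughly screen_height_px * radius / (distance * tan(fov / 2)) pixels vertically.

    #include <cmath>

    bool is_below_size_threshold(float radius, float distance,
                                 float vertical_fov_radians, int screen_height_px,
                                 float threshold_px) {
        // Half-height of the view frustum at the object's distance.
        const float half_view = distance * std::tan(vertical_fov_radians * 0.5f);
        const float size_px = screen_height_px * (radius / half_view);
        return size_px < threshold_px;  // true: treat the object as invisible
    }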
In this example, the current viewing subject is, for example, the character observing the three-dimensional scene from the first-person perspective. Viewing subjects with different traits such as personality and identity have different areas of interest (or regions of interest) when observing the three-dimensional scene. For example, a female character will usually be interested in details such as flowers and butterflies in the scene, while a male character will usually pay no particular attention to them, so the areas corresponding to flowers and butterflies can be regarded as areas of no interest for a male character. During three-dimensional rendering, no rendering computation need be performed for the areas of no interest of the current viewing subject; processing such as directly displaying the background can be adopted.
In one example, acquiring the invisible area of the three-dimensional scene includes: providing a setting entry for setting the invisible area; and acquiring the invisible area input through the setting entry.
In this example, the user can customize the invisible area. For example, the user can disable particle effects in the picture, i.e., set the area corresponding to particles in the three-dimensional scene as an invisible area. From the data input by the user through the setting entry, the invisible area can be obtained directly.
In one example, acquiring the invisible area of the three-dimensional scene includes: providing a setting entry for setting the visible area in the three-dimensional scene; acquiring the visible area input through the setting entry; and obtaining the invisible area according to the visible area.
In this example, the user can customize the visible area, and everything outside the visible area is the invisible area, i.e., the user sets the invisible area inversely.
Step S6200: set the value of the setting data of the three-dimensional scene corresponding to the invisible area to a target value, where the setting data is data representing content visibility, and the target value causes the invisible area to be omitted when the three-dimensional scene is rendered.
Step S6300: render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene.
For the above steps S6200-S6300, see the description of steps S1200-S1300 above.
<Apparatus Embodiment>
This embodiment provides a three-dimensional scene rendering apparatus, for example the three-dimensional scene rendering apparatus 700 shown in Fig. 9, including an occlusion relationship acquisition module 710, a data setting module 720, and a rendering module 730.
The occlusion relationship acquisition module 710 may be configured to acquire the area in the three-dimensional scene occluded by an upper-layer graphic. The upper-layer graphic is, for example, an interface graphic, such as a human-computer interaction interface graphic.
The data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered.
The rendering module 730 may be configured to render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene.
In one example, when setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value, the data setting module 720 may be configured to: acquire the pixels corresponding to the area; look up, in the buffer of the three-dimensional scene used for storing the setting data, the value corresponding to each pixel; and modify each value found to the target value.
In one example, the setting data is depth data in the depth buffer, and the target value is such that the area cannot pass the depth test in the rendering display.
In one example, the pass condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer, and when setting the value of the setting data of the three-dimensional scene corresponding to the area to the target value, the data setting module 720 may be configured to: set the value of the setting data of the three-dimensional scene corresponding to the area to be equal to the depth value corresponding to the near clipping plane.
In one example, the setting data is stencil data in the stencil buffer, and the target value is such that the area cannot pass the stencil test in the rendering display.
In one example, when acquiring the area in the three-dimensional scene occluded by the upper-layer graphic, the occlusion relationship acquisition module 710 may be configured to: obtain the area in the three-dimensional scene occluded by the upper-layer graphic according to the RGBA information of the pixels in the upper-layer graphic.
In one example, the RGBA information of the pixels in the upper-layer graphic includes the opacity of the pixels in the upper-layer graphic.
In another embodiment, the occlusion relationship acquisition module 710 may be configured to acquire the area in the three-dimensional scene occluded by an interface graphic; the data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility and the target value causes the area to be omitted when the three-dimensional scene is rendered; and the rendering module 730 may be configured to render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene, and to render the interface graphic to the screen as well.
For example, the three-dimensional scene may be rendered to the screen first, and the interface graphic rendered to the screen afterwards.
In one example, the three-dimensional scene rendering apparatus may further include a drawing module configured to draw the interface graphic into a render target buffer, so that when rendering the interface graphic to the screen, the rendering module 730 can draw the interface graphic in the render target buffer to the screen.
In one example, when acquiring the area in the three-dimensional scene occluded by the interface graphic, the occlusion relationship acquisition module 710 may be configured to: obtain the RGBA information of the pixels in the interface graphic according to the interface graphic in the render target buffer; and obtain, according to that RGBA information, the area in the three-dimensional scene occluded by the interface graphic.
In another embodiment, the rendering is ray-tracing rendering; the occlusion relationship acquisition module 710 may be configured to acquire the area in the three-dimensional scene occluded by an upper-layer graphic; the data setting module 720 may be configured to set the value of the setting data of the three-dimensional scene corresponding to the area to a target value, where the setting data is data representing content visibility in ray-tracing rendering and the target value causes the area to be omitted when the three-dimensional scene is rendered; and the rendering module 730 may be configured to render the three-dimensional scene to the screen according to the set setting data of the three-dimensional scene.
<Electronic Device Embodiment>
This embodiment provides an electronic device, which includes the three-dimensional scene rendering apparatus of the apparatus embodiment of the present invention; or, the electronic device is the electronic device 800 shown in Fig. 10, including:
a memory 810, configured to store executable commands; and
a processor 820, configured to execute, under the control of the executable commands stored in the memory 810, the three-dimensional scene rendering method of the method embodiments above.
In this embodiment, the electronic device may be any terminal device with a display apparatus, such as a PC, a laptop, a tablet computer, a mobile phone, or a head-mounted device, which is not limited here.
<Computer-Readable Storage Medium Embodiment>
This embodiment provides a computer-readable storage medium storing executable commands that, when executed by a processor, perform the methods described in the method embodiments of the present invention.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction-executing device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the above. The computer-readable storage medium as used here is not to be construed as the transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, an optical pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described here can be downloaded from the computer-readable storage medium to the respective computing/processing devices, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in the computer-readable storage medium in the respective computing/processing device.
The computer program instructions used to carry out operations of the present invention may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions so as to implement various aspects of the present invention.
Various aspects of the present invention are described here with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that these instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of instructions, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may also occur out of the order noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.
The embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The choice of terms used herein is intended to best explain the principles of the embodiments, their practical application, or technical improvements over the technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present invention is defined by the appended claims.

Claims (23)

  1. A method for rendering a three-dimensional scene, comprising:
    acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
    setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
    rendering the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene.
  2. The method according to claim 1, wherein the setting the value of the setting data of the three-dimensional scene corresponding to the area to a target value comprises:
    acquiring the pixels corresponding to the area;
    looking up, in a buffer of the three-dimensional scene used for storing the setting data, the value corresponding to each of the pixels; and
    modifying each value found to the target value.
  3. The method according to claim 1, wherein the setting data is depth data in a depth buffer, and the target value is such that the area cannot pass the depth test in the rendering display.
  4. The method according to claim 3, wherein the pass condition of the depth test is that the depth value under test is less than the current depth value in the depth buffer, and the setting the value of the setting data of the three-dimensional scene corresponding to the area to a target value comprises:
    setting the value of the setting data of the three-dimensional scene corresponding to the area to be equal to the depth value corresponding to the near clipping plane.
  5. The method according to claim 1, wherein the setting data is stencil data in a stencil buffer, and the target value is such that the area cannot pass the stencil test in the rendering display.
  6. The method according to claim 1, wherein the acquiring the area in the three-dimensional scene that is occluded by the upper-layer graphic comprises:
    obtaining the area in the three-dimensional scene occluded by the upper-layer graphic according to RGBA information of pixels in the upper-layer graphic.
  7. The method according to claim 6, wherein the RGBA information of the pixels in the upper-layer graphic comprises the opacity of the pixels in the upper-layer graphic.
  8. A method for rendering a three-dimensional scene, comprising:
    acquiring an area in the three-dimensional scene that is occluded by an interface graphic;
    setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
    rendering the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene, and rendering the interface graphic to the screen.
  9. The method according to claim 8, wherein the method further comprises:
    drawing the interface graphic into a render target buffer;
    the rendering the interface graphic to the screen comprising:
    drawing the interface graphic in the render target buffer to the screen.
  10. The method according to claim 9, wherein the acquiring the area in the three-dimensional scene that is occluded by the interface graphic further comprises:
    obtaining RGBA information of pixels in the interface graphic according to the interface graphic in the render target buffer; and
    obtaining, according to the RGBA information, the area in the three-dimensional scene occluded by the interface graphic.
  11. A method for rendering a three-dimensional scene, wherein the rendering is ray-tracing rendering, the method comprising:
    acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
    setting the value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility in ray-tracing rendering, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
    rendering the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene.
  12. A method for rendering a three-dimensional scene, comprising:
    in response to a set display refresh event, loading data of a next frame of the three-dimensional scene that matches a user operation;
    acquiring, according to the data, an area in the next frame of the three-dimensional scene that is occluded by a next frame of an interface graphic;
    setting the value of setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered; and
    rendering the next frame of the three-dimensional scene to a screen according to the set setting data of the next frame of the three-dimensional scene, and then rendering the next frame of the interface graphic to the screen.
  13. The method according to claim 12, wherein the three-dimensional scene is a three-dimensional game scene, and the user operation comprises any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-motion sensing operation, and a gravity sensing operation.
  14. The method according to claim 12, wherein the three-dimensional scene is a three-dimensional game scene, and the display refresh event comprises at least one of expiry of a refresh time determined according to a set frame rate, receipt of an externally triggered refresh instruction, and restoration of a network connection with a server.
  15. A method for rendering a three-dimensional scene, comprising:
    in response to a set display refresh event, loading data of a next frame of an interface graphic that matches a user operation;
    acquiring, according to the data, an area in a next frame of the three-dimensional scene that is occluded by the next frame of the interface graphic;
    setting the value of setting data of the next frame of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the next frame of the three-dimensional scene is rendered; and
    rendering the next frame of the three-dimensional scene to a screen according to the set setting data of the next frame of the three-dimensional scene, and then rendering the next frame of the interface graphic to the screen.
  16. The method according to claim 15, wherein the three-dimensional scene is a three-dimensional game scene, and the user operation comprises any one or more of a keyboard operation, a mouse operation, a touch screen operation, a body-motion sensing operation, and a gravity sensing operation.
  17. A method for rendering a three-dimensional scene, comprising:
    acquiring an invisible area of the three-dimensional scene;
    setting the value of setting data of the three-dimensional scene corresponding to the invisible area to a target value, wherein the setting data is data representing content visibility, and the target value causes the invisible area to be omitted when the three-dimensional scene is rendered; and
    rendering the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene.
  18. The method according to claim 17, wherein the acquiring the invisible area of the three-dimensional scene comprises any one or a combination of the following:
    acquiring an area in the three-dimensional scene that is occluded by an upper-layer graphic;
    acquiring an area corresponding to an object in the three-dimensional scene whose imaged size is smaller than a preset threshold; and
    acquiring an area of the three-dimensional scene that is not of interest to the current viewing subject.
  19. The method according to claim 17, wherein the acquiring the invisible area of the three-dimensional scene comprises:
    providing a setting entry for setting the invisible area; and
    acquiring the invisible area input through the setting entry.
  20. The method according to claim 17, wherein the acquiring the invisible area of the three-dimensional scene comprises:
    providing a setting entry for setting a visible area in the three-dimensional scene;
    acquiring the visible area input through the setting entry; and
    obtaining the invisible area according to the visible area.
  21. A rendering apparatus for a three-dimensional scene, comprising:
    an occlusion relationship acquisition module, configured to acquire an area in the three-dimensional scene that is occluded by an upper-layer graphic;
    a data setting module, configured to set the value of setting data of the three-dimensional scene corresponding to the area to a target value, wherein the setting data is data representing content visibility, and the target value causes the area to be omitted when the three-dimensional scene is rendered; and
    a rendering module, configured to render the three-dimensional scene to a screen according to the set setting data of the three-dimensional scene.
  22. An electronic device, comprising the apparatus according to claim 21; or, the device comprising:
    a memory, configured to store executable commands; and
    a processor, configured to execute, under the control of the executable commands, the method according to any one of claims 1-20.
  23. A computer-readable storage medium storing executable commands that, when executed by a processor, perform the method according to any one of claims 1-20.
PCT/CN2020/115761 2019-09-19 2020-09-17 三维场景的渲染方法、装置及电子设备 WO2021052392A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910888032.2 2019-09-19
CN201910888032.2A CN112541960A (zh) 2019-09-19 2019-09-19 三维场景的渲染方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2021052392A1 true WO2021052392A1 (zh) 2021-03-25

Family

ID=74883373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115761 WO2021052392A1 (zh) 2019-09-19 2020-09-17 三维场景的渲染方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN112541960A (zh)
WO (1) WO2021052392A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115591230A (zh) * 2021-07-09 2023-01-13 花瓣云科技有限公司(Cn) 图像画面渲染方法及电子设备
CN113436325B (zh) * 2021-07-30 2023-07-28 北京达佳互联信息技术有限公司 一种图像处理方法、装置、电子设备及存储介质
CN113963103A (zh) * 2021-10-26 2022-01-21 中国银行股份有限公司 一种三维模型的渲染方法和相关装置
CN116630516B (zh) * 2023-06-09 2024-01-30 广州三七极耀网络科技有限公司 一种基于3d特性的2d渲染排序方法、装置、设备及介质


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831631A (zh) * 2012-08-23 2012-12-19 上海创图网络科技发展有限公司 一种大规模三维动画的渲染方法及渲染装置
CN103785174A (zh) * 2014-02-26 2014-05-14 北京智明星通科技有限公司 一种游戏同一屏幕显示万人的方法和系统
CN104331918A (zh) * 2014-10-21 2015-02-04 无锡梵天信息技术股份有限公司 基于深度图实时绘制室外地表遮挡剔除以及加速方法
US20170357405A1 (en) * 2016-06-10 2017-12-14 Hexagon Technology Center Gmbh Systems and Methods for Accessing Visually Obscured Elements of a Three-Dimensional Model
CN110136082A (zh) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 遮挡剔除方法、装置及计算机设备

Also Published As

Publication number Publication date
CN112541960A (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2021052392A1 (zh) 三维场景的渲染方法、装置及电子设备
US8259103B2 (en) Position pegs for a three-dimensional reference grid
US8269767B2 (en) Multiscale three-dimensional reference grid
US8997021B2 (en) Parallax and/or three-dimensional effects for thumbnail image displays
WO2018188499A1 (zh) 图像、视频处理方法和装置、虚拟现实装置和存储介质
US20220319139A1 (en) Multi-endpoint mixed-reality meetings
US10140000B2 (en) Multiscale three-dimensional orientation
US20200084431A1 (en) Image processing apparatus, image processing method and storage medium
US10101891B1 (en) Computer-assisted image cropping
US11734899B2 (en) Headset-based interface and menu system
CN107038738A (zh) 使用经修改的渲染参数来显示对象
US10802784B2 (en) Transmission of data related to an indicator between a user terminal device and a head mounted display and method for controlling the transmission of data
US8972901B2 (en) Fast cursor location
CN104094193A (zh) 移动设备上的全3d交互
US11562545B2 (en) Method and device for providing augmented reality, and computer program
WO2020192175A1 (zh) 一种立体图形的标注方法、装置、设备及介质
US20230037750A1 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
US9483873B2 (en) Easy selection threshold
US9043707B2 (en) Configurable viewcube controller
KR102174264B1 (ko) 그림자 렌더링 방법 및 그림자 렌더링 장치
WO2024066752A1 (zh) 显示控制方法、装置、头戴显示设备及介质
WO2023197912A1 (zh) 图像的处理方法、装置、设备、存储介质和程序产品
WO2023005659A1 (zh) 图像处理方法及装置、电子设备、计算机可读存储介质、计算机程序及计算机程序产品
US20150310833A1 (en) Displaying Hardware Accelerated Video on X Window Systems
CN114863066A (zh) 呈现物体真实遮挡关系的增强现实场景的生成方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20866903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20866903

Country of ref document: EP

Kind code of ref document: A1