CN116059632A - Scene rendering method, device, equipment and storage medium - Google Patents

Scene rendering method, device, equipment and storage medium

Info

Publication number
CN116059632A
Authority
CN
China
Prior art keywords
scene
visible
information
rendering
rendered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310063943.8A
Other languages
Chinese (zh)
Inventor
潘嘉荔
厉安达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Lemon Inc Cayman Island
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Lemon Inc Cayman Island
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd, Lemon Inc Cayman Island
Priority to CN202310063943.8A
Publication of CN116059632A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure provide a scene rendering method, device, equipment and storage medium, wherein the method comprises the following steps: in response to a scene rendering operation on a target scene at a current rendering execution time, acquiring object information to be rendered and a current position; searching a scene attribute information set predetermined for the target scene using the object information to be rendered and the current position, and determining the visible scene objects at the current rendering execution time, wherein the scene attribute information set comprises position-related visible object information; and rendering the visible scene objects to present a scene picture corresponding to the target scene at the current rendering execution time. With this method, the object information to be rendered is screened for visibility at the current rendering execution time, only the visible scene objects among the objects to be rendered are retained, and only these visible scene objects are rendered, so that the rendering effect is guaranteed while the rendering computing power consumed by image processing is effectively reduced.

Description

Scene rendering method, device, equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of computer vision, in particular to a scene rendering method, a device, equipment and a storage medium.
Background
Currently, applications such as virtual reality and games involve the rendering of three-dimensional space scenes, and such rendering depends on scene materials prepared in a design stage; that is, every scene object to be rendered in a three-dimensional space scene needs to be designed in advance and stored as scene material.
In a traditional three-dimensional space scene rendering implementation, the central processing unit (CPU) side invokes an interface to command the graphics processing unit (GPU) to render the pre-designed scene materials. In practical applications, scene rendering is performed continuously as the capturing view angle of the virtual camera in the three-dimensional scene changes, and each rendering pass covers all scene objects contained in the scene, regardless of whether they appear within the view angle of the scene camera.
Such frequent rendering of all scene objects results in excessive consumption of GPU rendering computing power. In an improved implementation, the CPU side may perform view frustum culling on the scene to be rendered to remove the scene objects that do not need to be rendered; however, view frustum culling over the whole scene also causes excessive consumption of CPU computing power resources.
Disclosure of Invention
The disclosure provides a scene rendering method, a device, equipment and a storage medium, so as to reduce the consumption of computing power resources of computer equipment in the scene rendering process.
In a first aspect, an embodiment of the present disclosure provides a scene rendering method, including:
responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position;
searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
In a second aspect, embodiments of the present disclosure further provide a scene rendering device, including:
the response module is used for responding to the scene rendering operation of the target scene at the current rendering execution time and obtaining the object information to be rendered and the current position;
the determining module is used for searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
And the rendering module is used for rendering the visible scene object and presenting a scene picture corresponding to the target scene at the current rendering execution time.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scene rendering method according to any of the embodiments of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the scene rendering method according to any of the embodiments of the present disclosure.
According to the technical solution of the embodiments of the present disclosure, with the scene rendering method, in response to a scene rendering operation on the target scene at the current rendering execution time, the object information to be rendered and the current position of the virtual camera can be acquired; the object information to be rendered and the current position can then be used to search the scene attribute information set predetermined for the target scene, so that the visible scene objects at the current rendering execution time can be determined; finally, only the visible scene objects are rendered, thereby presenting the scene picture corresponding to the target scene at the current rendering execution time. In this technical solution, at each rendering execution time, the object information to be rendered is screened for visibility by searching the scene attribute information set of the target scene, so that only the visible scene objects among the objects to be rendered are retained and rendered. On the basis of guaranteeing the rendering effect, and unlike the existing approach of rendering all scene objects, rendering only the visible scene objects effectively reduces rendering computing power consumption; moreover, unlike the existing approach of reducing the renderable scene objects directly through view frustum culling, the invisible objects at the current position can be screened out simply and effectively just by querying and matching the scene attribute information set, which effectively reduces computing resource consumption.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a scene rendering method according to an embodiment of the disclosure;
Figs. 1a-1f are diagrams showing the effects of scene texture maps captured from different viewing angles at a certain position when determining the scene attribute information set in the scene rendering method according to this embodiment;
Fig. 2 is a schematic flow chart of a scene rendering method according to an embodiment of the disclosure;
Fig. 3 is a schematic structural diagram of a scene rendering device according to an embodiment of the disclosure;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenarios, and the like of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the operation requested to be performed will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that executes the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
It will be appreciated that the data (including but not limited to the data itself, the acquisition or use of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
It should be noted that the application scenario of this embodiment may be described as follows: after an application that depends on a three-dimensional space scene, such as a game application or a virtual reality application, starts running, it needs to perform scene rendering on the three-dimensional space scene so as to display a three-dimensional game scene or virtual reality scene containing various scene objects. In the prior art, when scene rendering is performed, all scene objects contained in the three-dimensional scene model need to be rendered regardless of where the virtual camera or the virtual object used for scene picture rendering is located, and they need to be rendered at every rendering moment. Considering that scene rendering relies primarily on the GPU, the large-scale and frequent rendering described above creates excessive consumption of GPU computing power, thereby posing a significant challenge to the performance configuration of the computer device running applications such as games or virtual reality.
In the prior art, view frustum culling may also be performed on the scene objects to be rendered before scene rendering; however, for scenes of larger scale, view frustum culling itself causes excessive consumption of the computing power resources of the computer device and likewise poses great challenges to its performance configuration.
The scene rendering method provided by this embodiment can greatly reduce the computing power resource consumption of the computer device during scene rendering while ensuring the scene rendering effect.
Fig. 1 is a schematic flow chart of a scene rendering method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case of rendering a three-dimensional space scene; the method may be performed by a scene rendering device, which may be implemented in the form of software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC, or a server.
As shown in fig. 1, the method of the embodiment of the disclosure may specifically include:
s101, responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position.
In this embodiment, the target scene may be a three-dimensional space application scene to be rendered in the running application software, where the three-dimensional space scene may be a game scene in a game application, a virtual reality scene in a virtual reality application, or a three-dimensional map scene in an application such as an electronic map. And the application software in the running state needs to perform scene rendering on the target scene in real time so as to ensure the picture presentation of the scene content in the application scene.
In this embodiment, a time point at which scene rendering is to be performed in the running application software may be recorded as a rendering execution time; the rendering execution time may be, for example, the time corresponding to each frame of the picture. In this embodiment, the current time point at which scene rendering is to be performed is specifically taken as the current rendering execution time. At each rendering execution time at which scene rendering is to be performed, the execution of scene rendering is triggered by the scene rendering operation generated at that rendering execution time.
In this step, the scene rendering operation generated at the current rendering execution time can be responded to, and through this response all the scene objects contained in the target scene can be determined, so that the information of the scene objects to be rendered for the target scene in the current rendering can be taken as the object information to be rendered and acquired. The object information to be rendered may include an object identifier of the scene object, where the object identifier may be a unique identifier allocated to the scene object when the scene object is created.
It should be noted that, when rendering a three-dimensional space scene, one of the main steps is to project the three-dimensional space scene onto a two-dimensional plane for scene picture display, and this projection is realized mainly by means of a virtual camera in the three-dimensional space scene. A virtual camera in a three-dimensional space scene is similar to a real camera in real three-dimensional space, and also similar to a person's viewing angle in real three-dimensional space: in real three-dimensional space, physical content can be captured by a real camera to form a two-dimensional photograph containing that content. Accordingly, the two-dimensional plane presented for a virtual three-dimensional space scene can be regarded as the two-dimensional plane obtained when a virtual camera at a certain position of the three-dimensional space scene captures the scene objects there, or as the picture content that can be viewed from the viewing angle of a set virtual object (such as a virtual character), thereby presenting a two-dimensional plane containing the picture content of the scene objects.
In this embodiment, in order to render the target scene at the current rendering execution time, the current capturing position of the virtual camera in the target scene at the current rendering time, or the position of the virtual object at the current rendering time, needs to be obtained. It can be seen that, when the picture is rendered from the viewing angle of a virtual object, the position to which the virtual object moves can be regarded as the capturing position under that viewing angle; therefore, for convenience of description, the following description of this embodiment takes, without limitation, the current capturing position of the virtual camera as the current position when expanding on the implementation of the scene rendering method.
It will be appreciated that in different virtual space scenes, the capturing position of the virtual camera may move with the movement of the associated scene character, and the scene captured by the virtual camera corresponds to the scene that the associated scene character can see at the current position of the virtual space scene. For example, as in a virtual reality application, a virtual camera corresponds to being associated with a user wearing a virtual reality device, and the virtual reality scene captured by the virtual camera may be considered a scene currently viewable by the user. When the position of the user wearing the virtual reality device changes (e.g., walks forward, backs up, jumps, etc.), the capturing position of the virtual camera also changes, and the frame presented by the scene rendering also changes.
S102, searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time.
It can be understood that, in existing scene rendering, after the object information to be rendered and the current capturing position of the virtual camera are determined, an interface can be invoked to command the rendering of the object information to be rendered, and the scene picture within the capturing range is then presented based on the current capturing position of the virtual camera, where the scene picture contains the rendered picture content of the scene objects within the capturing range. When the target scene is a large scene containing scene objects on a large scale, this existing scene rendering approach causes excessive consumption of image-processing computing power resources.
In this embodiment, this step amounts to screening the object information to be rendered before rendering. By querying the scene attribute information set of the target scene, this step can determine whether an object to be rendered can be captured by the camera when the virtual camera is at the current capturing position, and, when it can, determine that object as a visible scene object at the current rendering execution time.
For the acquired object information to be rendered, this step determines, by querying the scene attribute information set, all the scene objects that can be captured at the current capturing position, which are the visible scene objects determined at the current rendering execution time. The scene picture formed when the virtual camera captures the scene objects within the capturing range of the current capturing position contains picture information of the visible scene objects only; the screened-out invisible scene objects, even if rendered, cannot be displayed in the rendered scene picture because of the current capturing position of the virtual camera. Therefore, determining the visible scene objects in this step before rendering the object information to be rendered in the target scene can guarantee the picture effect of the rendered scene picture while effectively saving the image-processing computing power consumed by scene rendering.
In this embodiment, the scene attribute information set may be understood as a scene information set related to rendering of the target scene, where the scene attribute information set includes visible object information related to a capturing position of the virtual camera, further includes object related information of a scene object in the target scene, and the scene attribute information set may be determined in a creation stage of the target scene.
Specifically, when executing the logic of searching the scene attribute information set, this step can first determine, through the object-related information contained therein, whether the object information to be rendered belongs to the scene objects contained in the target scene (in practical applications, there are cases where information that does not belong to the scene objects can also serve as object information to be rendered). If it does not, the object information to be rendered can be directly determined as a visible scene object at the current rendering moment; if it does, the visible object information can be searched in the scene attribute information set using the current capturing position, and when it is determined that the visible object information matching the current capturing position contains the object information to be rendered, the corresponding object to be rendered is determined as a visible scene object at the current rendering moment.
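As an illustration only, the following Python sketch outlines the screening logic described above; the data layout, the field names, and the helper for locating the subspace block of a position are assumptions made for readability, not the actual structures of this embodiment.

def screen_visible_objects(to_render_ids, current_position, scene_attribute_set,
                           locate_subspace_block):
    """Keep only the objects that are visible from the current capturing position.

    scene_attribute_set is assumed to expose the identifiers of all scene objects
    and, per subspace block, the set of visible object identifiers;
    locate_subspace_block is an assumed helper mapping a position to its block.
    """
    scene_object_ids = scene_attribute_set["object_material_ids"]
    block = locate_subspace_block(scene_attribute_set, current_position)
    visible_ids_at_position = block["visible_object_ids"] if block else set()

    visible = []
    for object_id in to_render_ids:
        if object_id not in scene_object_ids:
            # Not a scene object of the target scene: treated as visible directly.
            visible.append(object_id)
        elif object_id in visible_ids_at_position:
            # Matched against the position-related visible object information.
            visible.append(object_id)
    return visible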
It should be noted that, the scene attribute information set of the target scene may be determined in an offline creation stage of the target scene, and the key to determining the scene attribute information set is to determine the included visible object information.
One implementation of determining the visible object information may be described as follows: in the offline creation stage of the target scene, the capturing positions of the virtual camera can be sampled in the space region of the target scene model associated with the target scene, so that a certain number of sampling points are determined, each of which can serve as a capturing position of the virtual camera; then, the captured picture that the virtual camera can present at each capturing position can be acquired, where each captured picture can be formed based on the color data information of the scene objects in the target scene, and the color data information of the scene objects can be determined after the target scene has been completely created and offline scene rendering has been performed.
On the basis of the above description, which scene objects are specifically contained in a captured picture can be determined from the color data information contained in it, and the contained scene objects can be regarded as the scene objects that the virtual camera can capture at that capturing position; finally, the scene objects captured at the different sampling positions, in correspondence with those positions, can be used to form the position-related visible object information in the scene attribute information set.
And S103, rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
In this embodiment, after the visible scene objects at the current rendering time have been screened out of the scene objects to be rendered through the above steps, only the visible scene objects can be rendered in this step, and the scene picture formed by rendering contains the scene objects that the virtual camera can capture within the capturing range at the current capturing position. The scene picture can also be regarded as the picture presented on the device screen after scene rendering at the current rendering time.
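A minimal per-frame sketch of steps S101-S103 follows, assuming hypothetical engine-side accessors (objects_to_render, current_capture_position, draw) and reusing the screening sketch given above:

def render_frame(target_scene, scene_attribute_set, renderer, locate_subspace_block):
    """One rendering execution time: acquire, screen, then render only visible objects."""
    to_render_ids = target_scene.objects_to_render()             # S101: objects to be rendered
    current_position = target_scene.current_capture_position()   # S101: current position
    visible_ids = screen_visible_objects(to_render_ids, current_position,
                                         scene_attribute_set, locate_subspace_block)  # S102
    renderer.draw(visible_ids)                                    # S103: render visible objects only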
According to the scene rendering method provided by this embodiment, at each rendering execution time, the object information to be rendered is screened for visibility by searching the scene attribute information set of the target scene, so that only the visible scene objects among the objects to be rendered are retained and only they are rendered. On the basis of guaranteeing the rendering effect, and unlike the existing approach of rendering all scene objects, rendering only the visible scene objects effectively reduces rendering computing power consumption; moreover, unlike the existing approach of reducing the renderable scene objects directly through view frustum culling, the invisible objects at the current capturing position can be screened out simply and effectively just by querying and matching the scene attribute information set, which effectively reduces the computing power resource consumption of the computer device.
To further improve the implementation of scene rendering, a technical improvement has also been proposed in which non-visible objects are filtered out by judging the visibility of the scene objects before view frustum culling. In this improvement, the visible object information of the three-dimensional scene needs to be determined first, and the existing determination approach intersects rays with the geometric objects in the scene to detect whether a collision exists between the ray and a geometric object; if a collision exists, the object is considered a visible scene object.
This approach to visibility judgment has the following problems: 1) it only gives good judgment results for convex-hull objects in the scene, and it is difficult to judge objects in scenes with holes effectively; 2) it cannot guarantee the rationality of the sampling view angles, and when visible scene objects are determined from different sampling view angles, the stability of the determination result is difficult to guarantee; 3) when intersecting rays with the geometric objects in the scene, the three-dimensional scene model needs complex tree-structure management, which increases computing power consumption, and for geometric objects with complex shapes the stability of the intersection result cannot be ensured. Therefore, when the invisible objects are filtered out on the basis of the visible objects determined in this way, computing power resources cannot be well saved and the accuracy of the filtering result cannot be guaranteed.
From the above description, the implementation of the scene attribute information set in the above embodiment can be understood. As a first optional embodiment, a step of determining the scene attribute information set of the target scene is further provided, where the scene attribute information set of the target scene may be determined in the creation stage of the target scene. Specifically, on the basis of the above optimization, the determination of the scene attribute information set may include the following steps:
a1) Block division is performed on the travelable space region in the target scene model, and at least one subspace block is obtained.
In this embodiment, the target scene model may be considered as a three-dimensional space model presented on the corresponding creation interface in the target scene creation stage, and the scene creator may add scene objects in the created three-dimensional space model, where each scene object may be an independently created object material, and after the creation of the object material is completed, the object material may be added to the target scene model to form a scene object.
It should be noted that all object materials present in the target scene model can serve as scene objects of the target scene, and the types of scene objects are not limited: an object material can be any object that can appear in an actual scene, such as scenery like rivers, roads, flowers, plants and trees, structures such as walls and buildings like houses, or virtual scene characters. Furthermore, for a target scene containing a large number of scene objects, the presented scene objects may have nested sub-scenes; for example, a house is one of the scene objects in the target scene, but the travelable interior of the house amounts to a sub-scene nested inside it. It follows that the greater the number of scene objects, the greater the complexity of the scene.
In this embodiment, the determination of the scene attribute information set is critical to the determination of the visible object information included in the scene attribute information set, and the visible object information is related to the capturing position of the virtual camera, and to determine the visible object information, sampling of the capturing position in the target scene model is considered. In conventional implementations, sampling points may be randomly selected directly in the target scene model as capture locations for the virtual camera.
However, this approach has the problem that the sampled capture locations are not effective. Specifically, the target scene model is a relatively complex large scene model; when random sampling is performed in such a large scene model, the sampling result is not very representative. In addition, a capturing position determined by random sampling may have no capturing significance: for example, if an untraversable wall stands in front of the capturing position, no other scene object can be captured within the capturing range of the virtual camera, and the visible object information determined on the basis of that capturing position has little reference value. That is, determining the virtual camera capture locations in the conventional way does not yield stable results.
To better solve the above problems, this embodiment first considers dividing the space region of the target scene into a number of subspace blocks before sampling the capturing positions of the virtual camera, and then sampling the capturing positions within each subspace block. Meanwhile, to ensure that the virtual camera has capturing significance at every sampled capturing position, this embodiment further considers determining the travelable space regions in the target scene model before dividing it into regions, where a travelable space region can be understood as a region in the target scene that a virtual character, or a user in an immersive experience, can walk through or pass through.
It will be appreciated that for applications such as games and virtual reality, whose main purpose is to present a three-dimensional space scene, the capturing view angle of the virtual camera is primarily the player's view angle; that is, the capturing position of the virtual camera is usually the player's position in the three-dimensional space scene, and the player's position is mostly within the travelable space regions of that scene. Therefore, this embodiment divides only the travelable space regions of the target scene model into subspace blocks, which reduces the number of subspace blocks produced by the division and ensures that the determined capturing positions have capturing significance.
In this embodiment, the travelable space regions contained in the target scene model may be determined according to the object attributes of the scene objects contained in it; for example, when the whole target scene model is viewed from a panoramic angle, the scene objects whose object attributes correspond to roads, rivers, blocks, squares, and the like can be identified, and the space regions they form can be used as the travelable space regions.
This embodiment may define a region size based on the position information of a travelable space region in the target scene model, and form region attribute information characterizing the travelable space region (which may include the start position, end position, and region size of the space region).
In this embodiment, the division information on which the subspace block division depends may be determined in combination with the determined region attribute information of the travelable space region, and the division information may be the block division size of the subspace blocks to be divided. For example, assuming that a road forms a travelable space region, the region size of that travelable space region may be the width and length of the road, and this embodiment may take half of the road width as the length, width, and height of a subspace block; the determined length, width, and height values then serve as the block division size of the subspace blocks in that travelable space region. This step may divide each travelable space region into subspace blocks according to the determined block division size, thereby obtaining at least one subspace block for each travelable space region.
On the basis of the first optional embodiment, the block division of the travelable space region in the target scene model to obtain at least one subspace block may be further specifically optimized as the following steps:
a11 Obtain a region of travelable space from the target scene model.
It is known that in the construction phase of the target scene, the constructed target scene is characterized by using the target scene model. The present embodiment can make determination of the travelable spatial region in the target scene model.
Specifically, the object materials added to the target scene model can be traversed, and it can be determined from the object attributes of each object material whether it belongs to scene objects that a player can walk on, drive on, or climb, such as roads, rivers, and squares, or whether it is a scene object whose interior space can be entered, such as a building; when an object material with such attributes is determined, a corresponding travelable space region can be formed from it. This step may obtain all the travelable space regions contained in the target scene model.
a12 Based on the region size information of the travelable spatial region, a block division size of the travelable spatial region is acquired.
In this embodiment, the region attribute information characterizing each travelable space region may be obtained, where the region attribute information includes region size information of the travelable space region, such as its travelable length value and width value, and the block division size used when dividing the subspace blocks is determined based on the width value or length value of the travelable space region; for example, half of the width value may be taken as the length, width, and height of the subspace blocks to be divided.
In one implementation, the space region may also be divided uniformly according to its region size information and the desired number of rows and columns of subspace blocks, so that the length, width, and height of each subspace block to be divided can be obtained as the block division size.
a13) Dividing the travelable space region according to the block division size to obtain at least one subspace block.
Subspace block division of the travelable space region is achieved by this step, with each subspace block having the block division size determined above.
The subspace blocks obtained by dividing a travelable space region also have the travelable property, and sampling the capturing positions of the virtual camera within each subspace block greatly improves the stability of the scene objects that can be contained in the pictures captured at the sampled capturing positions.
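As a sketch only, the following Python snippet divides an axis-aligned travelable space region into cubic subspace blocks of the block division size determined above; the axis-aligned bounding-box representation of the region is an assumption made for illustration.

import math

def divide_travelable_region(region_min, region_max, block_size):
    """Divide an axis-aligned travelable region into cubic subspace blocks.

    region_min / region_max are the (x, y, z) start and end corners of the region;
    block_size is the edge length, e.g. half of the road width as described above.
    """
    counts = [max(1, math.ceil((region_max[axis] - region_min[axis]) / block_size))
              for axis in range(3)]
    blocks = []
    for ix in range(counts[0]):
        for iy in range(counts[1]):
            for iz in range(counts[2]):
                origin = (region_min[0] + ix * block_size,
                          region_min[1] + iy * block_size,
                          region_min[2] + iz * block_size)
                blocks.append({"index": (ix, iy, iz), "origin": origin, "size": block_size})
    return blocks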
b1 Determining the visible object material of the subspace block, and obtaining the visible object information of the subspace block.
In this embodiment, an object material can be considered the object model on which a scene object contained in the target scene depends during the construction stage; each scene object to be presented in the target scene needs to be constructed independently during construction, and after construction it can serve as an object material in the target scene model.
It should be noted that this embodiment decomposes the determination of the visible object information contained in the scene attribute information set into the determination of the visible object information for each subspace block. Each subspace block can be regarded as an independent space, and to determine the visible object information of a subspace block it is necessary to determine which visible object materials it contains. However, when the virtual camera is at different capture locations in the subspace block, the object materials captured within the capture range differ.
In this embodiment, to determine which visible object materials a subspace block contains, capturing positions may be sampled within the subspace block, and a certain number of sampling points are determined from it through given sampling logic (the center point of the subspace block is used as a fixed sampling point, and a certain number of further sampling points are sampled randomly); these sampling points serve as the different capturing positions of the virtual camera in the subspace block. The corresponding visible object materials at each capture location can then be determined through this step.
For determining the visible object materials of a virtual camera at a capture location, the existing approach can be described as follows: taking the capturing position as the origin, rays are emitted from the origin and intersected with the geometric objects (the object materials in the target scene model) in the scene (the whole target scene model, not divided into subspace blocks), so as to detect whether a collision exists between the ray and a geometric object; if a collision exists, that geometric object (object material) is considered a visible object material of the virtual camera at that capturing position in the scene.
This conventional visible object determination approach has the following problems: 1) it only gives good judgment results for convex-hull objects in the scene, and it is difficult to judge objects in scenes with holes effectively; 2) it cannot guarantee the rationality of the sampling view angles (it does not divide the travelable space regions into subspace blocks as proposed by this embodiment), and the stability of the judgment result is difficult to guarantee when visible scene objects are judged from different sampling view angles; 3) when intersecting rays with the geometric objects in the scene, complex tree-structure management of the three-dimensional scene model (such as the target scene model) is required, which increases the computing power consumption of computing resources, and for geometric objects with complex shapes the stability of the intersection result cannot be ensured.
On this basis, in order to solve the above problems, this embodiment not only divides the subspace blocks before determining the visible object materials, but also improves the way the visible object materials are determined. Specifically, for a target scene model containing various object materials, offline scene rendering can be performed on the target scene model to obtain the color data information of each object material after rendering; for each subspace block, a number of capture positions of the virtual camera are determined by sampling, and no matter which capture position the virtual camera is at, the scene texture map corresponding to each capture view angle at that position can be determined from the rendered scene.
On the basis of the above description, this embodiment can directly determine the color data information contained in a scene texture map and, based on the conversion relation between color data information and object identifiers, further determine which object materials the scene texture map contains. The object materials contained in the scene texture maps for all the capture view angles constitute the visible object materials of the virtual camera at that capture location. The visible object materials determined for the virtual camera at all the capturing positions of a subspace block form the visible object materials of that subspace block, and finally the visible object information of the subspace block can be formed on the basis of the related object information of each visible object material.
The determination of the visible object information is realized, and the problems existing in the conventional determination of the visible object information are effectively avoided.
On the basis of the first optional embodiment, to better understand the determination of the visible object information of a subspace block, this embodiment may perform the following steps:
b11) Performing target scene rendering based on the object materials contained in the target scene model, to obtain the color data information corresponding to each object material after rendering.
In this embodiment, for the target scene model, after the creation of the target scene model is completed, the offline scene rendering may be performed on the target scene based on all the object materials added to the target scene model, so that color data information corresponding to each object material after rendering may be obtained.
It should be noted that, one implementation manner of the color data information of the object material in scene rendering may be described as follows: the object identification allocated when each object material is added to the target scene model is acquired (the value range of the object identification can be 0-65535), and then the object identification of the object material can be subjected to RGB8 or RGB32 color coding, so that the color data information of the corresponding object material is acquired.
Wherein, for each object material, after obtaining the object identifier thereof, the implementation process of performing RGB8 color coding on the object identifier can be described as:
R in the color data information may be equal to a floating point value of id/(255×255), where id is the object identifier of the object material;
G in the color data information may be equal to a floating point value of (id - R×255×255)/255, where R is the R-channel color data information determined above;
B in the color data information may be equal to the result value of id - R×255×255 - G×255, where G is the G-channel color data information determined above.
In addition, to ensure that the color data information of different object materials obtained by the above calculation remains distinguishable, in this embodiment the object identifiers of two object materials located at adjacent positions in the target scene model may be assigned at an interval; for example, with an interval value of 20, when the object identifier of one object material is 20, the object identifier allocated to an adjacent object material may be 40 or 0.
The above RGB8 color coding of the object materials is one of the possible rendering modes; to ensure rendering accuracy, the color data information of the object materials may also be obtained in an RGB32 rendering mode, in which the object ID of an object material may be used directly as its R, G, and B values.
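A small sketch of the base-255 encoding described above follows, assuming the channel values are truncated to integers (an assumption that matches the decoding relation id = R×255×255 + G×255 + B used later):

def encode_id_to_rgb8(object_id):
    """Encode an object identifier (0-65535) into R/G/B channel values (base 255)."""
    r = object_id // (255 * 255)
    g = (object_id - r * 255 * 255) // 255
    b = object_id - r * 255 * 255 - g * 255
    return r, g, b

# Example: an identifier of 40 for an object material adjacent to one with identifier 20.
assert encode_id_to_rgb8(40) == (0, 0, 40)
assert encode_id_to_rgb8(65535) == (1, 2, 0)   # round-trips via 1*255*255 + 2*255 + 0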
b12 Determining a center point of the subspace block based on the spatial coordinate information of the subspace block, and obtaining at least one sampling point in the subspace block through random scatter acquisition.
In this embodiment, after the object materials in the target scene model have been rendered through the above step, the color data information of each rendered object material can be obtained; then, for each subspace block, position sampling can be achieved through this step, and a sampled position can be regarded as a sampled capturing position of the virtual camera, or as a sampled position of the virtual object.
Specifically, the spatial coordinate information of the subspace block in the spatial coordinate system of the target scene model may be obtained, where the spatial coordinate information may include the spatial coordinate values of each vertex of the subspace block; from this information, the coordinates of the center point of the subspace block can be determined, thereby locating the center point. Then, random sampling within the subspace block may be performed by random scatter sampling, thereby obtaining at least one sampling point in the subspace block.
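The following sketch illustrates the sampling described in step b12): the fixed center point of a subspace block plus a number of randomly scattered points; the number of random points is an arbitrary illustrative choice, not a value prescribed by this embodiment.

import random

def sample_positions_in_block(block_min, block_max, num_random_points=4):
    """Return sampling positions for one subspace block: its center point plus
    randomly scattered points inside the block (axis-aligned bounds assumed)."""
    center = tuple((block_min[axis] + block_max[axis]) / 2.0 for axis in range(3))
    scattered = [tuple(random.uniform(block_min[axis], block_max[axis]) for axis in range(3))
                 for _ in range(num_random_points)]
    return [center] + scattered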
b13) Taking the center point and the sampling points respectively as sampling positions, and determining the visible object materials corresponding to the sampling positions according to the color data information.
In this step, the determined center point and sampling points may each be used as a capturing position of the virtual camera in the subspace block, or as a position of the virtual object, denoted in this embodiment as a sampling position. This step can then, in combination with the determined color data information of the object materials, determine the visible object materials corresponding to image capturing at each sampling position.
As one implementation, this embodiment may specifically optimize the determination of the visible object materials corresponding to the sampling positions according to the color data information as the following steps:
b131 According to the color data information, obtaining a scene texture map corresponding to the sampling position.
It can be understood that, after the color data information of the object materials in the target scene model has been obtained through the rendering operation, the scene texture maps that can be presented at each sampling position can be formed in combination with the different sampling positions.
For example, when picture rendering is performed at a sampling position, pictures may be rendered for 6 orientation viewing angles (the upward, downward, leftward, rightward, forward, and backward viewing angles of the virtual camera) in combination with the color data information of the object materials, so that one scene texture map is acquired for each of the 6 orientations. Figs. 1a-1f are diagrams showing the effects of the scene texture maps captured from different viewing angles at a certain position when determining the scene attribute information set in the scene rendering method of this embodiment. Among them, Fig. 1a shows a forward-view scene texture map 11, Fig. 1b a backward-view scene texture map 12, Fig. 1c an upward-view scene texture map 13, Fig. 1d a downward-view scene texture map 14, Fig. 1e a left-view scene texture map 15, and Fig. 1f a right-view scene texture map 16. It can be seen that the scene objects visible in the scene texture maps corresponding to different viewing angles are not identical.
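Purely as an illustration of the six orientation viewing angles, the following sketch enumerates axis-aligned view directions and delegates the actual rasterization to an assumed engine callback; the axis convention and the callback are assumptions, not part of this embodiment.

# Six orientation viewing angles, expressed as unit view directions (axis convention assumed).
VIEW_DIRECTIONS = {
    "forward":  (0.0, 0.0, 1.0),
    "backward": (0.0, 0.0, -1.0),
    "up":       (0.0, 1.0, 0.0),
    "down":     (0.0, -1.0, 0.0),
    "left":     (-1.0, 0.0, 0.0),
    "right":    (1.0, 0.0, 0.0),
}

def render_scene_texture_maps(sampling_position, render_id_view):
    """Render one ID-colored scene texture map per orientation viewing angle.

    render_id_view is an assumed callback (position, direction) -> texture map
    that rasterizes the scene using the per-object color data information."""
    return {name: render_id_view(sampling_position, direction)
            for name, direction in VIEW_DIRECTIONS.items()}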
b132) According to the pixel values of the pixel points in the scene texture map, and in combination with the conversion relation between pixel values and object identifiers, determining the object materials contained in the scene texture map and recording them as visible object materials.
In this embodiment, for each acquired scene texture map, the color data information it contains can be converted through the operation of this step. Specifically, the pixel values of the pixel points of the scene texture map in the RGB color space can be obtained, the color channel value of each pixel point in each RGB color channel can be obtained from the pixel value, and the pixel value can then be converted through the preset conversion relation between pixel values in the RGB color space and the object identifiers of the object materials.
For example, the conversion relation between the object identifier of an object material and its color data information in the RGB color space can be expressed as: id = R×255×255 + G×255 + B, where id is the object identifier and R, G, and B are the color data values of the object material in the respective RGB color channels.
On the basis of the above description, the result of the conversion calculation corresponds to the computed object identifier, and from the object identifiers it can be determined which object materials are contained in the scene texture map. The contained object materials can be regarded as the visible object materials under that orientation view angle of the sampling position. After the scene texture maps corresponding to all the orientation view angles at a sampling position have been converted, all the obtained object materials can be taken as the visible object materials corresponding to that sampling position.
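A minimal sketch of the conversion in step b132) follows, applying id = R×255×255 + G×255 + B to every pixel of a scene texture map; the pixel representation (an iterable of integer (R, G, B) tuples) is an assumption made for illustration.

def object_ids_in_texture_map(pixels):
    """Recover the set of object identifiers contained in one scene texture map."""
    ids = set()
    for r, g, b in pixels:          # pixels: iterable of (R, G, B) channel values in 0-255
        ids.add(r * 255 * 255 + g * 255 + b)
    return ids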
b14 Based on the object identification of the visible object material, visible object information of the subspace block is composed.
In this embodiment, for one subspace block, after capturing-position sampling has been performed within it and a number of sampling positions have been obtained, the visible object materials at each sampling position can be determined by the approach given in steps b131) and b132) above.
Considering that the object materials in the target scene model are distinguished by object identifiers, for each subspace block the object identifiers of the visible object materials corresponding to each sampling position in the subspace block can be aggregated, and the visible object information of the subspace block is then formed from these object identifiers.
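The aggregation in step b14) then amounts to taking the union of the identifiers found at all sampling positions of the subspace block, sketched below under the same assumptions as the previous snippets:

def block_visible_object_info(per_position_ids):
    """Form the visible object information of one subspace block as the union of
    the object identifiers determined at each of its sampling positions."""
    visible_ids = set()
    for ids in per_position_ids:
        visible_ids |= ids
    return sorted(visible_ids)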
In the implementation of the scene rendering method provided by the embodiment, the implementation of the introduced visible object information determination is different from the determination of the existing visible object information, no additional light required by intersection is required, and the key of the whole determination is that the conversion between the color data information of the scene object and the object identification does not relate to the geometric shape of the scene object, so that the problem that the prior art is difficult to perform visible determination on the hole scene object is avoided; meanwhile, when the capturing position is sampled, decoupling of a large target scene is considered, the large target scene is divided into a plurality of subspace blocks, and the center point of the subspace blocks is fixed when the capturing position is sampled in the subspace blocks, so that the capturing position is sampled more reasonably, the visible object is determined at the capturing position of the rationalized sampling, and the stability of a determination result is also ensured.
In addition, the determination of visible object information does not involve the tree structure of the target scene model. After the target scene model is decoupled into a plurality of subspace blocks, the visible object information of each subspace block can be determined simply by obtaining the color data information of the object materials from the target scene model and then applying the given conversion relationship to that color data. The whole process does not occupy many computing resources, and the computational power consumption is greatly reduced. It can be seen that the problems of existing visible object determination are effectively avoided by the improved determination method of this embodiment.
c1) Determine a scene attribute information set of the target scene corresponding to the target scene model, where the scene attribute information contains the visible object information.
It should be noted that the visible object information at different sampling positions is the key information contained in the scene attribute information set; however, considering the integrity of the target scene model and the specific way in which visible object information is determined, the visible object information at individual sampling positions is not recorded as the minimum unit of information.
In this embodiment, the visible object information corresponding to a subspace block is regarded as the minimum unit of information. Further, considering the association between subspace blocks and travelable spatial regions, when the visible object information of the subspace blocks is recorded in the scene attribute information set, the information of the subspace blocks and the information of the travelable spatial regions also need to be recorded.
Thus, the description of the visible object information in the scene attribute information set is decomposed into: region attribute information of the travelable spatial regions, subspace block information of the subspace blocks contained in each travelable spatial region, and visible object information corresponding to each subspace block. The visible object information corresponding to a subspace block can be represented by the object identifier set of its visible object materials; the subspace block information can be represented by the number of subspace blocks contained in each direction of the travelable spatial region and the interval information between subspace blocks; and the region attribute information can represent the start and end of the spatial region by the upper-left and lower-right corner positions of the whole region.
In addition to the visible-object-related information, the scene attribute information set may also contain the object information of all object materials included in the target scene model, so the scene attribute information set further includes object material information.
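The decomposition described above could be organized along the following lines; this is only a sketch assuming axis-aligned travelable spatial regions, and every field name is an illustrative assumption rather than a structure fixed by this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class TravelableRegion:
    start: tuple[float, float, float]          # region start position (e.g. upper-left corner)
    end: tuple[float, float, float]            # region end position (e.g. lower-right corner)
    block_counts: tuple[int, int, int]         # number of subspace blocks per spatial direction
    block_spacing: tuple[float, float, float]  # interval between adjacent subspace blocks
    block_visible_ids: list[set[int]] = field(default_factory=list)  # one id set per subspace block

@dataclass
class SceneAttributeInfoSet:
    all_object_ids: set[int] = field(default_factory=set)        # object material information
    regions: list[TravelableRegion] = field(default_factory=list)
```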
The above steps of the first alternative embodiment specifically describe how the scene attribute information set corresponding to the target scene is determined. The determined scene attribute information set can be used to screen whether the object information to be rendered is visible at the current rendering execution time, so that only the visible scene objects among the objects to be rendered are retained and only those visible scene objects are rendered. The first alternative embodiment therefore provides the core data support for the implementation of the method provided by this embodiment.
On the basis of the first optional embodiment, as one implementation, the determination of the scene attribute information set of the target scene corresponding to the target scene model, where the scene attribute information contains the visible object information, may specifically include the following steps:
c11) Obtain the object material information of the object materials contained in the target scene model and the region attribute information of the travelable spatial regions.
In this embodiment, all object materials contained in the target scene model may be recorded in the scene attribute information set. Specifically, the object material information of the object materials can be obtained in this step, where the object material information contains at least the object identifiers of the object materials. It can be seen that an object material added to the target scene model in the target scene creation stage corresponds to a scene object presented through rendering in the target scene operation stage, so the object identifier of an object material can also be regarded as the object identifier of the corresponding scene object and can be used to distinguish different scene objects.
In this embodiment, the subspace block division used in determining the visible object information is performed on the travelable spatial regions in the target scene model, and the region attribute information of each travelable spatial region determined from the target scene model may be obtained in this step. Illustratively, the region attribute information includes a region start position and a region end position of the travelable spatial region.
c12) Obtain the subspace block information of the subspace blocks contained in the travelable spatial regions, and obtain the visible object information corresponding to the subspace blocks.
In this embodiment, this step obtains the subspace block information of the subspace blocks formed by dividing the different travelable spatial regions, as well as the visible object information obtained after visible object determination has been performed on those subspace blocks. The subspace block information includes, for example, the number of subspace blocks in each spatial direction of the spatial coordinate system constructed for the travelable spatial region, and the interval value between subspace blocks. Meanwhile, each travelable spatial region corresponds to a number of visible object identifier sets equal to its number of subspace blocks, and each visible object identifier set corresponds to the visible object information of one subspace block.
c13) Summarize the object material information, the region attribute information, the subspace block information and the visible object information of the subspace blocks to form the scene attribute information set of the target scene corresponding to the target scene model.
This step gathers the obtained information to form the scene attribute information set of the target scene. In practical applications, the scene attribute information set may be characterized in the form of a data table. The data table contains at least the object identifier set representing the object material information, the upper-left and lower-right corner position information of each region representing the region attribute information, the number of subspace blocks in each spatial direction and the subspace block intervals representing the subspace block information, and, most critically, the sets of visible object identifiers representing the visible object information.
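As a hedged illustration of the data-table characterization, the same information could be laid out as a single dictionary with one entry per travelable spatial region; all keys and the concrete numbers below are assumptions for the example.

```python
scene_attribute_table = {
    "object_ids": {1, 2, 3, 4, 5},               # object material information
    "regions": [
        {
            "region_start": (0.0, 0.0, 0.0),      # region attribute information
            "region_end": (40.0, 10.0, 40.0),
            "block_counts": (2, 1, 1),            # subspace blocks per spatial direction
            "block_spacing": (20.0, 10.0, 40.0),  # interval between subspace blocks
            "visible_id_sets": [{1, 2}, {2, 3}],  # one id set per subspace block
        },
    ],
}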
The above expands on the specific content contained in the scene attribute information set and thereby shows the determination logic of the scene attribute information set more clearly.
As a second optional embodiment of the present embodiment, on the basis of the foregoing optimization, fig. 2 shows a schematic flow chart of a scene rendering method provided by the embodiment of the present disclosure, and as shown in fig. 2, the scene rendering method provided by the present embodiment may include the following steps:
S201, responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position.
For example, in the running state of the application associated with the target scene, the scene may be rendered frame by frame, or according to a set rendering period or set rendering conditions. The current rendering execution time can be regarded as the execution time of the current scene rendering, and this step responds to the scene rendering operation triggered at the current rendering execution time.
The execution subject of the method provided by this embodiment can obtain, from the running information of the application associated with the target scene, the object information to be rendered that is associated with the target scene and the current position at the current rendering execution time, such as the current capturing position of the virtual camera or the current position of a virtual object.
The following S202 to S205 provide a specific implementation of the visibility determination for objects of the scene to be rendered. It can be seen that these steps are applicable to all scene objects to be rendered that correspond to the target scene and require visibility determination.
S202, searching a scene attribute information set of the target scene, and acquiring object material information in the scene attribute information set.
In this embodiment, all object material information included in the scene attribute information set may be acquired through this step, that is, object identifiers of all object materials added to the target scene model in the creation stage may be acquired.
S203, judging whether the object information to be rendered is contained in the object material information, if yes, executing S204; if not, S205 is performed.
In this embodiment, the object identifier to be rendered contained in the object information to be rendered may be compared with each object identifier contained in the object material information to determine whether it is contained therein. If the object identifier to be rendered is contained in the object material information, S204 may be executed; otherwise, the scene object to be rendered can be considered not to belong to the object materials added to the target scene model in the scene creation stage, and the operation of S205 may be performed.
S204, determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, subspace block information and the visible object information of the subspace block in the scene attribute information set.
As the execution logic for one of the determination results, when the object material information contains the object information to be rendered, this step may combine the current position with the region attribute information, the subspace block information and the visible object information in the scene attribute information set, and thereby determine the visible scene objects that are visible within the capturing range provided at the current position at the current rendering execution time.
For example, this embodiment may first determine, from the current position and the region attribute information, which travelable spatial region the current rendering position lies in. In general, the current position changes with the player position; since the player can normally only move within the travelable spatial regions, the current position usually belongs to one of them, although cases also exist in which the current position lies in no travelable spatial region. If the current position is within a travelable spatial region, the subspace block containing the current position is then determined from the subspace block information, and after that subspace block has been determined, the visible object information corresponding to the subspace block is used as the visible object information corresponding to the current position. This visible object information contains the object identifiers of a series of visible object materials; when the object identifier to be rendered in the object information to be rendered is determined to be among them, the object to be rendered can be regarded as a visible scene object.
As one implementation manner, determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, the subspace block information and the visible object information of the subspace block in the scene attribute information set may be embodied as:
a2) Compare the current position with the region positions of the travelable spatial regions in the region attribute information.
In this embodiment, the region position in the region attribute information of a travelable spatial region includes a region start position and a region end position; if the current capturing position is within the spatial range formed by the region start position and the region end position, the comparison result is that the current capturing position lies in the compared travelable spatial region.
It should be noted that the current capturing position in this step is compared with the region positions of all travelable spatial regions contained in the scene attribute information set; as long as the current capturing position lies in one of the travelable spatial regions, the comparison result is considered to be that it lies in a travelable spatial region. If the current capturing position is not in any travelable spatial region, the scene object to be rendered may be directly regarded as an invisible object.
b2) If the comparison result is that the current position lies in a travelable spatial region, determine the target subspace block containing the current position within that region according to the subspace block information, and obtain the target visible object information of the target subspace block.
As the execution logic for one of the comparison results, after it is determined that the current position lies in a certain travelable spatial region, this step may continue to determine, from the subspace block information and the current position, which subspace block the current position lies in, take that subspace block as the target subspace block, and obtain the target visible object information corresponding to it.
c2) If the object information to be rendered belongs to the target visible object information, take the object to be rendered corresponding to the object information to be rendered as a visible scene object at the current rendering execution time.
After the target visible object information is determined, the object identifier to be rendered may be compared with the target object identifiers contained in the target visible object information; if the object identifier to be rendered exists in the target visible object information, the object information to be rendered is considered to belong to the target visible object information. In that case the corresponding object to be rendered may be taken as a visible object at the current rendering execution time.
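A possible sketch of steps a2) to c2), assuming axis-aligned travelable spatial regions described by start and end corner positions, a uniform subspace block grid in which the recorded interval is interpreted as the block size along each axis, row-major block indexing, and the table layout sketched earlier; all of these are assumptions, and a real implementation may index the blocks differently.

```python
def find_visible_ids(position, regions):
    """Return the visible-id set of the subspace block containing `position`,
    or None if the position lies in no travelable spatial region."""
    x, y, z = position
    for region in regions:
        (sx, sy, sz), (ex, ey, ez) = region["region_start"], region["region_end"]
        if not (sx <= x <= ex and sy <= y <= ey and sz <= z <= ez):
            continue  # a2) position is outside this travelable spatial region
        # b2) locate the target subspace block from the block counts and spacing
        nx, ny, nz = region["block_counts"]
        dx, dy, dz = region["block_spacing"]
        ix = min(int((x - sx) / dx), nx - 1)
        iy = min(int((y - sy) / dy), ny - 1)
        iz = min(int((z - sz) / dz), nz - 1)
        block_index = (ix * ny + iy) * nz + iz
        return region["visible_id_sets"][block_index]
    return None

def is_visible(render_id, position, regions):
    ids = find_visible_ids(position, regions)
    # c2) the object is visible when its identifier belongs to the target visible
    # object information; outside any region it is treated as invisible, as noted above.
    return ids is not None and render_id in ids
```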
The above technical description provides the implementation of the visibility determination when the scene object to be rendered belongs to the object materials added in the creation stage of the target scene. As one way of screening and determining the visible scene objects of the scene to be rendered, it provides technical support for the whole scene rendering.
S205, taking the object to be rendered corresponding to the object information to be rendered as a visible scene object at the current rendering execution time.
As the execution logic for the other determination result of S203, when it is determined that the object to be rendered does not belong to the object materials added in the creation stage of the target scene, this step may directly take the object to be rendered as a visible scene object.
The above technical description provides the implementation of the visibility determination when the scene object to be rendered does not belong to the object materials added in the creation stage of the target scene. As another way of screening and determining the visible scene objects of the scene to be rendered, it also provides technical support for the whole scene rendering.
And S206, rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
After all visible scene objects among the scene objects to be rendered have been determined through S202 to S205, only the visible scene objects need to be rendered in this step, and the scene picture at the current rendering execution time is obtained.
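Putting S201 to S206 together, a per-frame loop might look like the following sketch, where `lookup` stands for a position-based visibility query such as the one sketched above and `render` is a placeholder for the actual draw call; the function names and the data layout are assumptions.

```python
def render(obj_id):
    print(f"draw call for object {obj_id}")              # placeholder for the real renderer

def render_frame(objects_to_render, current_position, scene_table, lookup):
    visible = []
    for obj_id in objects_to_render:
        if obj_id not in scene_table["object_ids"]:
            visible.append(obj_id)                       # S205: dynamically added object, kept as visible
        elif lookup(obj_id, current_position, scene_table["regions"]):
            visible.append(obj_id)                       # S204: visible per the scene attribute information set
    for obj_id in visible:
        render(obj_id)                                   # S206: render only the visible scene objects
```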
The scene rendering method provided by the second optional embodiment uses the predetermined scene attribute information set, so that the screening and determination of visible scene objects can be performed simply and effectively. The scene attribute information set determined in the manner of this embodiment ensures the stability and rationality of the visible scene object determination and improves its accuracy while saving computing resources and computational power. By rendering only the visible scene objects, the technical solution also effectively reduces the computational power consumed in image rendering.
As a third optional embodiment, after the visible scene objects at the current rendering execution time have been determined, the method provided by this embodiment may further include: performing view cone culling on the visible scene objects, and taking the visible scene objects remaining after the culling as new visible scene objects.
In this embodiment, after the visibility screening has been performed on the objects to be rendered, a view cone culling operation may additionally be performed on the screened objects before they are rendered.
It can be understood that the scene objects to be rendered are not necessarily all object materials added to the target scene model in the target scene creation stage; some scene objects may be dynamically added in the operation stage of the application associated with the target scene. However, such dynamically added scene objects are not necessarily suitable for rendering at the current rendering execution time, and view cone culling can be performed on them according to the view cone principle of three-dimensional picture presentation.
The method can take the visible scene objects remaining after the view cone culling as new visible scene objects, and render the scene based on these new visible scene objects to obtain the scene picture at the current rendering execution time.
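As an illustration of the view cone culling step, the sketch below reduces each remaining visible scene object to a bounding-sphere test against the frustum planes; the plane convention (inward-facing normal n and offset d, a point being inside when n·p + d >= 0) and all names are assumptions, and real engines typically use their own culling primitives.

```python
def sphere_in_frustum(center, radius, planes):
    cx, cy, cz = center
    for (nx, ny, nz, d) in planes:
        if nx * cx + ny * cy + nz * cz + d < -radius:
            return False  # entirely on the outside of one plane -> culled
    return True

def frustum_cull(visible_objects, planes):
    # Keep only the visible scene objects that intersect the view cone; the
    # survivors become the "new visible scene objects" described above.
    return [obj for obj in visible_objects
            if sphere_in_frustum(obj["center"], obj["radius"], planes)]
```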
In this embodiment, view cone culling is performed after the visible scene objects have been determined, which effectively reduces the computing resources and computational power consumed by the culling.
According to the scene rendering method provided by the embodiments of the present disclosure, when scene rendering is performed, the CPU in the computer device can determine the visible scene objects required for rendering simply by searching the scene attribute information set corresponding to the target scene on the basis of the acquired object information to be rendered and the current position. The CPU only needs to provide computational power for information searching and matching to complete the determination of the visible scene objects. In existing visible scene object determination, by contrast, rays must be intersected with the geometric objects in the scene, and the whole intersection process relies on a tree structure to obtain the related information of those geometric objects. The execution logic of this embodiment therefore greatly reduces the CPU's computational power consumption in determining visible scene objects and greatly lowers the performance requirements on the computer device.
Meanwhile, after the CPU has determined the visible scene objects through the above logic, it only needs to send call interface commands for the visible scene objects to the GPU, so that the GPU can render the picture based on those visible scene objects alone. In existing rendering implementations, by contrast, the CPU frequently sends call interface commands for every object to be rendered to the GPU, and the GPU renders all of them. The technique of this embodiment reduces the interface command calls from the CPU to the GPU, effectively reduces the frequent interaction between CPU and GPU and, more importantly, effectively reduces the excessive consumption of rendering computational power in the GPU's scene rendering, ensuring the scene rendering speed while also reducing the impact on the performance of the computer device.
In addition, unlike existing methods in which the CPU performs view cone culling directly on all objects to be rendered, in this embodiment the CPU only needs to perform view cone culling on the screened visible scene objects, which reduces the consumption of the CPU's computational resources.
Fig. 3 is a schematic structural diagram of a scene rendering device according to an embodiment of the present disclosure, where, as shown in fig. 3, the device includes: a response module 31, a determination module 32, and a rendering module 33, wherein,
the response module 31 is configured to obtain object information to be rendered and a current position in response to a scene rendering operation of the target scene at a current rendering execution time;
a determining module 32, configured to find a set of scene attribute information related to the target scene according to the object information to be rendered and the current position, and determine a visible scene object at the current rendering execution time, where the set of scene attribute information includes visible object information related to the position;
and the rendering module 33 is configured to render the visible scene object, and present a scene picture corresponding to the target scene at the current rendering execution time.
With the scene rendering device described above, the executed method logic can, at the rendering execution time of scene rendering, screen whether the object information to be rendered is visible at the current rendering execution time by searching the scene attribute information set of the target scene, so that only the visible scene objects among the objects to be rendered are retained and only those visible scene objects are rendered. On the basis of guaranteeing the rendering effect, this technical solution, unlike the existing rendering of all scene objects, effectively reduces the computational power consumed in image rendering by rendering only the visible scene objects. In addition, unlike the existing method of directly reducing the renderable scene objects through view cone culling, invisible objects at the current position can be screened out simply and effectively just by querying and matching against the scene attribute information set, which effectively reduces the consumption of computing resources.
Further, the apparatus further comprises:
the region division module is used for performing block division on the travelable spatial region in the target scene model to obtain at least one subspace block;
the visible object determining module is used for determining visible object materials of the subspace blocks and obtaining visible object information of the subspace blocks;
and the information set determining module is used for determining a scene attribute information set of a target scene corresponding to the target scene model, wherein the scene attribute information contains the visible object information.
Further, the area dividing module may specifically be configured to:
obtaining a travelable spatial region from the target scene model;
acquiring the block division size of the travelable spatial region based on the region size information of the travelable spatial region;
and dividing the travelable spatial region according to the block division size to obtain at least one subspace block.
Further, the visible object determining module may specifically include:
the color data obtaining unit is used for carrying out target scene rendering based on object materials contained in the target scene model, and obtaining corresponding color data information after the object materials are rendered;
The sampling point determining unit is used for determining a center point of the subspace block based on the space coordinate information of the subspace block, and acquiring at least one sampling point in the subspace block through random scattering acquisition;
the visible object determining unit is used for taking the center point and the sampling point as sampling positions respectively and determining visible object materials corresponding to the sampling positions according to the color data information;
and the information determining unit is used for forming the visible object information of the subspace block based on the object identification of the visible object material.
Further, the visible object determining unit is specifically configured to:
taking the center point and the sampling points as sampling positions respectively;
obtaining a scene texture map corresponding to the sampling position according to the color data information;
and determining object materials contained in the scene texture map according to pixel values of pixel points in the scene texture map and combining the conversion relation between the pixel values and object identifications, and recording the object materials as visible object materials.
Further, the information set determining module may specifically be configured to:
obtaining object material information of object materials contained in the target scene model and region attribute information of a movable space region;
Obtaining subspace block information of subspace blocks contained in the movable space region, and obtaining visible object information corresponding to the subspace blocks;
and summarizing the object material information, the region attribute information, the subspace block information and the visible object information of the subspace block to form a scene attribute information set of the target scene corresponding to the target scene model.
Further, the determining module 32 may specifically include:
the searching unit is used for searching a scene attribute information set of the target scene and acquiring object material information in the scene attribute information set;
and the first execution unit is used for determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, the subspace block information and the visible object information of the subspace block in the scene attribute information set when the object information to be rendered is contained in the object material information.
A second execution unit for
Further, the determining module 32 may specifically further include:
and the third execution unit is used for taking the object to be rendered corresponding to the object information to be rendered as the visible scene object at the current rendering execution time when the object information to be rendered is not contained in the object material information.
Further, the second execution unit may specifically be configured to:
comparing the current position with the region position of the movable space region in the region attribute information;
if the comparison result is that the current position lies in a travelable spatial region, determining the target subspace block containing the current position within that region according to the subspace block information, and obtaining the target visible object information of the target subspace block;
and under the condition that the object information to be rendered belongs to the target visible object information, taking the object to be rendered corresponding to the object information to be rendered as the visible scene object at the current rendering execution moment.
Further, the apparatus may further include:
and the view cone eliminating module is used for carrying out view cone eliminating processing on the visible scene object after the visible scene object at the current rendering execution time is determined, and taking the rest visible scene object after the eliminating processing as a new visible scene object.
The scene rendering device provided by the embodiment of the disclosure can execute the scene rendering method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 4, a schematic diagram of an electronic device (e.g., a terminal device or server in fig. 4) 400 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure and the scene rendering method provided by the foregoing embodiment belong to the same inventive concept, and technical details not described in detail in the present embodiment may be referred to the foregoing embodiment, and the present embodiment has the same beneficial effects as the foregoing embodiment.
The present disclosure provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the scene rendering method provided by the above embodiments.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position; searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position; rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example one ], the method comprising:
responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position;
searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example two ], in which the determining step of the scene attribute information set may include:
performing block division on a space region capable of advancing in the target scene model to obtain at least one subspace block;
determining visible object materials of the subspace blocks, and obtaining visible object information of the subspace blocks;
and determining a scene attribute information set of a target scene corresponding to the target scene model, wherein the scene attribute information contains the visible object information.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example three ], the method may further include:
optionally, the block division of the travelable spatial region in the target scene model to obtain at least one subspace block includes:
obtaining a travelable spatial region from the target scene model;
acquiring the block division size of the movable space region based on the region size information of the movable space region;
and dividing the travelable spatial region according to the block division size to obtain at least one subspace block.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example four ], the method may further include:
optionally, the determining the visible object material of the subspace block, to obtain the visible object information of the subspace block, includes:
performing target scene rendering based on object materials contained in the target scene model, and obtaining corresponding color data information after the object materials are rendered;
determining a central point of the subspace block based on the space coordinate information of the subspace block, and acquiring at least one sampling point in the subspace block through random scattering acquisition;
Taking the center point and the sampling point as sampling positions respectively, and determining visible object materials corresponding to the sampling positions according to the color data information;
and forming the visible object information of the subspace block based on the object identification of the visible object material.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example five ], the method comprising:
optionally, the determining, according to the color data information, the visible object material corresponding to the sampling position includes:
obtaining a scene texture map corresponding to the sampling position according to the color data information;
and determining object materials contained in the scene texture map according to pixel values of pixel points in the scene texture map and combining the conversion relation between the pixel values and object identifications, and recording the object materials as visible object materials.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example six ], the method comprising:
optionally, the determining the scene attribute information set of the target scene corresponding to the target scene model, where the scene attribute information includes the visible object information includes:
Obtaining object material information of object materials contained in the target scene model and region attribute information of a movable space region;
obtaining subspace block information of subspace blocks contained in the movable space region, and obtaining visible object information corresponding to the subspace blocks;
and summarizing the object material information, the region attribute information, the subspace block information and the visible object information of the subspace block to form a scene attribute information set of the target scene corresponding to the target scene model.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example seven ], the method comprising:
optionally, the searching the set of scene attribute information of the target scene through the object information to be rendered and the current position, and determining the visible scene object at the current rendering execution time includes:
searching a scene attribute information set of the target scene, and acquiring object material information in the scene attribute information set;
and if the object information to be rendered is contained in the object material information, determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, the subspace block information and the visible object information of the subspace block in the scene attribute information set.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example eight ], the method further comprising:
optionally, if the object information to be rendered is not included in the object material information, the object to be rendered corresponding to the object information to be rendered is used as the visible scene object at the current rendering execution time.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example nine ], the method comprising:
optionally, determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, the subspace block information and the visible object information of the subspace block in the scene attribute information set includes:
comparing the current position with the region position of the movable space region in the region attribute information;
if the comparison result is that the current position lies in a travelable spatial region, determining the target subspace block containing the current position within that region according to the subspace block information, and obtaining the target visible object information of the target subspace block;
and under the condition that the object information to be rendered belongs to the target visible object information, taking the object to be rendered corresponding to the object information to be rendered as the visible scene object at the current rendering execution moment.
According to one or more embodiments of the present disclosure, there is provided a scene rendering method [ example ten ], the method comprising:
optionally, after determining the visible scene object at the current rendering execution time, the method further includes:
and performing view cone eliminating processing on the visible scene objects, and taking the rest visible scene objects after the eliminating processing as new visible scene objects.
According to one or more embodiments of the present disclosure, there is provided a scene rendering apparatus [ example fourteen ], the apparatus comprising:
the response module is used for responding to the scene rendering operation of the target scene at the current rendering execution time and obtaining the object information to be rendered and the current position;
the determining module is used for searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
and the rendering module is used for rendering the visible scene object and presenting a scene picture corresponding to the target scene at the current rendering execution time.
The foregoing description is only of the preferred embodiments of the present disclosure and description of the principles of the technology being employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in this disclosure is not limited to the specific combinations of features described above, but also covers other embodiments which may be formed by any combination of features described above or equivalents thereof without departing from the spirit of the disclosure. Such as those described above, are mutually substituted with the technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A method of scene rendering, comprising:
responding to scene rendering operation of a target scene at the current rendering execution time, and acquiring object information to be rendered and the current position;
Searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
rendering the visible scene object, and presenting a scene picture corresponding to the target scene at the current rendering execution time.
2. The method of claim 1, wherein the determining of the scene attribute information set comprises:
performing block division on a space region capable of advancing in the target scene model to obtain at least one subspace block;
determining visible object materials of the subspace blocks, and obtaining visible object information of the subspace blocks;
and determining a scene attribute information set of a target scene corresponding to the target scene model, wherein the scene attribute information contains the visible object information.
3. The method of claim 2, wherein the block partitioning of the region of the travelable space in the target scene model to obtain at least one subspace block comprises:
obtaining a travelable spatial region from the target scene model;
Acquiring the block division size of the movable space region based on the region size information of the movable space region;
and dividing the travelable spatial region according to the block division size to obtain at least one subspace block.
4. The method of claim 2, wherein said determining visible object material of said subspace block, obtaining visible object information of said subspace block, comprises:
performing target scene rendering based on object materials contained in the target scene model, and obtaining corresponding color data information after the object materials are rendered;
determining a central point of the subspace block based on the space coordinate information of the subspace block, and acquiring at least one sampling point in the subspace block through random scattering acquisition;
taking the center point and the sampling point as sampling positions respectively, and determining visible object materials corresponding to the sampling positions according to the color data information;
and forming the visible object information of the subspace block based on the object identification of the visible object material.
5. The method of claim 4, wherein determining the visible object material corresponding to the sampling location based on the color data information comprises:
Obtaining a scene texture map corresponding to the sampling position according to the color data information;
and determining object materials contained in the scene texture map according to pixel values of pixel points in the scene texture map and combining the conversion relation between the pixel values and object identifications, and recording the object materials as visible object materials.
6. The method according to claim 2, wherein determining the scene attribute information set of the target scene corresponding to the target scene model includes:
obtaining object material information of object materials contained in the target scene model and region attribute information of a movable space region;
obtaining subspace block information of subspace blocks contained in the movable space region, and obtaining visible object information corresponding to the subspace blocks;
and summarizing the object material information, the region attribute information, the subspace block information and the visible object information of the subspace block to form a scene attribute information set of the target scene corresponding to the target scene model.
7. The method according to claim 1, wherein the searching for a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining the visible scene object at the current rendering execution time, includes:
Searching a scene attribute information set of the target scene, and acquiring object material information in the scene attribute information set;
and if the object information to be rendered is contained in the object material information, determining the visible scene object at the current rendering execution time according to the current position and the region attribute information, the subspace block information and the visible object information of the subspace block in the scene attribute information set.
8. The method as recited in claim 7, further comprising:
and if the object information to be rendered is not contained in the object material information, taking the object to be rendered corresponding to the object information to be rendered as the visible scene object at the current rendering execution time.
9. The method of claim 7, wherein determining the visible scene object at the current rendering execution time based on the current position and the region attribute information, subspace block information, and visible object information of a subspace block in the scene attribute information set comprises:
comparing the current position with the region position of the movable space region in the region attribute information;
if the comparison result indicates that the current position is within the movable space region, determining, according to the subspace block information, the target subspace block in the movable space region in which the current position is located, and obtaining target visible object information of the target subspace block;
and under the condition that the object information to be rendered belongs to the target visible object information, taking the object to be rendered corresponding to the object information to be rendered as the visible scene object at the current rendering execution moment.
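The position-based stage of claim 9 maps naturally onto a grid lookup under the uniform-block assumption used above; the function below is the hypothetical lookup_visible helper referenced in the previous sketch:

```python
import math

def lookup_visible(attr_set, object_id, current_position) -> bool:
    """Locate the target subspace block containing the current position and test the
    object against that block's visible object information."""
    # outside the movable space region: keep the object visible
    # (assumption -- the claims do not specify this case)
    for a in range(3):
        if not (attr_set.region_min[a] <= current_position[a] <= attr_set.region_max[a]):
            return True
    # find the target subspace block index from the current position
    index = tuple(
        int(math.floor((current_position[a] - attr_set.region_min[a]) / attr_set.block_size))
        for a in range(3)
    )
    target_visible = attr_set.block_visible_ids.get(index, set())
    return object_id in target_visible
```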
10. The method of any of claims 1-9, further comprising, after determining the visible scene object at the current rendering execution time:
and performing view frustum culling on the visible scene objects, and taking the visible scene objects remaining after the culling as new visible scene objects.
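Claim 10 adds a standard view frustum culling pass on top of the block-based filtering. A minimal sphere-versus-frustum test is sketched below; the plane representation (inward-facing normals with offsets) is a common convention, not something specified by the application:

```python
def frustum_cull(visible_objects, frustum_planes):
    """Keep only objects whose bounding sphere lies at least partly inside the view
    frustum. Each plane is (nx, ny, nz, d) with the normal pointing inward, so a
    point p is inside the plane's half-space when dot(n, p) + d >= 0."""
    remaining = []
    for obj in visible_objects:
        cx, cy, cz = obj["center"]
        radius = obj["radius"]
        inside = True
        for nx, ny, nz, d in frustum_planes:
            if nx * cx + ny * cy + nz * cz + d < -radius:
                inside = False   # sphere lies completely behind this plane -> culled
                break
        if inside:
            remaining.append(obj)
    return remaining
```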
11. A scene rendering device, comprising:
the response module is used for responding to the scene rendering operation of the target scene at the current rendering execution time and obtaining the object information to be rendered and the current position;
the determining module is used for searching a scene attribute information set relative to the target scene through the object information to be rendered and the current position, and determining a visible scene object at the current rendering execution time, wherein the scene attribute information set comprises visible object information related to the position;
and the rendering module is used for rendering the visible scene object and presenting a scene picture corresponding to the target scene at the current rendering execution time.
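Read as software, the apparatus of claim 11 is three cooperating components. The skeleton below only mirrors that structure and reuses the visible_at / lookup_visible helpers sketched above; all class, method and attribute names are invented for illustration:

```python
class SceneRenderingDevice:
    """Skeleton mirroring the response / determining / rendering modules of claim 11."""

    def __init__(self, attr_set):
        self.attr_set = attr_set  # precomputed scene attribute information set

    def on_render_request(self, scene):
        # response module: collect object information to be rendered and the current position
        return scene.objects_to_render(), scene.camera_position()

    def determine_visible(self, object_ids, current_position):
        # determining module: filter with the precomputed scene attribute information set
        return [oid for oid in object_ids
                if visible_at(self.attr_set, oid, current_position, lookup_visible)]

    def render(self, visible_ids, scene):
        # rendering module: draw only the visible scene objects for this frame
        for oid in visible_ids:
            scene.draw(oid)
```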
12. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-10.
CN202310063943.8A 2023-01-12 2023-01-12 Scene rendering method, device, equipment and storage medium Pending CN116059632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310063943.8A CN116059632A (en) 2023-01-12 2023-01-12 Scene rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310063943.8A CN116059632A (en) 2023-01-12 2023-01-12 Scene rendering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116059632A true CN116059632A (en) 2023-05-05

Family

ID=86181621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310063943.8A Pending CN116059632A (en) 2023-01-12 2023-01-12 Scene rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116059632A (en)

Similar Documents

Publication Publication Date Title
TWI525303B (en) A method and apparatus for self-adaptively visualizing location based digital information
CN108668108B (en) Video monitoring method and device and electronic equipment
CN110728622B (en) Fisheye image processing method, device, electronic equipment and computer readable medium
WO2023020239A1 (en) Special effect generation method and apparatus, electronic device and storage medium
CN111127603B (en) Animation generation method and device, electronic equipment and computer readable storage medium
CN107835403B (en) Method and device for displaying with 3D parallax effect
CN113205601A (en) Roaming path generation method and device, storage medium and electronic equipment
CN111862349A (en) Virtual brush implementation method and device and computer readable storage medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
WO2023193639A1 (en) Image rendering method and apparatus, readable medium and electronic device
WO2023207354A1 (en) Special effect video determination method and apparatus, electronic device, and storage medium
CN116059632A (en) Scene rendering method, device, equipment and storage medium
CN115358958A (en) Special effect graph generation method, device and equipment and storage medium
WO2021143310A1 (en) Animation generation method and apparatus, electronic device, and computer-readable storage medium
CN109472873B (en) Three-dimensional model generation method, device and hardware device
CN111223105B (en) Image processing method and device
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN112884787B (en) Image clipping method and device, readable medium and electronic equipment
CN111200705A (en) Image processing method and device
EP4290469A1 (en) Mesh model processing method and apparatus, electronic device, and medium
CN115937383B (en) Method, device, electronic equipment and storage medium for rendering image
CN116301496A (en) Special effect information display method and device, electronic equipment and storage medium
CN117389502A (en) Spatial data transmission method, device, electronic equipment and storage medium
CN115994965A (en) Virtual object processing method, device, equipment and storage medium
CN117671476A (en) Image layering method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination