CN118052923B - Object rendering method, device and storage medium - Google Patents
- Publication number: CN118052923B (application CN202410452673.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The application discloses an object rendering method, device and storage medium. In the disclosed scheme, scene information is partitioned into blocks and managed with a dynamic LOD; the target blocks and their rendering parameters are determined from the relative position of each block to the virtual target object together with the motion parameters of the virtual target object, and the corresponding block resources are dynamically loaded and unloaded. This reduces the amount of rendering data to be processed and the resource consumption while preserving rendering quality, and improves rendering efficiency and speed. In particular, when large-scale three-dimensional scenes are processed, performance and resource use are effectively balanced, providing a seamless user experience.
Description
Technical Field
The present application relates to the field of computer graphics processing technologies, and in particular, to an object rendering method, apparatus, and storage medium.
Background
With the rapid development of computer graphics and virtual reality technology, large-scale virtual scenes are widely applied; in particular, medium- and large-scale games and simulations place growing demands on high-quality rendering of large three-dimensional scenes. Bringing a better visual experience to the user requires efficient rendering of complex scenes; with limited hardware resources, however, efficiently managing and rendering large amounts of data becomes a key technical challenge.
Disclosure of Invention
In view of the above problems, the present application provides a resource rendering scheme that aims to reduce the amount of rendering data to be processed and the consumption of resources, so as to improve rendering efficiency and response speed and improve the user experience.
In one aspect, the present application provides an object rendering method, including:
constructing a three-dimensional virtual scene model;
dividing the virtual scene into a plurality of blocks, and marking each block;
acquiring position information and speed vector information of a target virtual object in real time;
determining target blocks and corresponding rendering LOD values according to the target virtual object position information and speed vector information;
rendering each target block according to its corresponding rendering LOD value, and loading and storing the target blocks in a display buffer area;
and refreshing the rendering data of each target block in the display buffer.
Further, the determining the target block includes:
obtaining a current visual field boundary according to the position of the virtual object and visual field parameters of the virtual camera;
and judging whether each block intersects the current view boundary; if so, determining that block as a target block.
Further, the method further comprises:
judging whether the relative distance between each target block and the target virtual object is greater than a preset first threshold value;
and if so, unloading the LOD rendering resources of the corresponding block and releasing the memory occupied by that block.
Specifically, determining the rendering LOD value according to the target virtual object position information and the speed vector information includes:
obtaining the vector P_i of the current position of each target block relative to the target virtual object;
calculating the rendering LOD value corresponding to each target block, wherein the rendering LOD value of each target block satisfies:
LOD_i = min( L_max, floor( L_max * (1 - max(λ, |P_i| + ε * (V · P_i) / |P_i|) / D_max) ) )
wherein LOD_i is the LOD value of the i-th target block, D_max is the maximum visible distance of the current device, P_i is the vector of the current position of the i-th target block relative to the target virtual object, |P_i| is its modulus, V is the current velocity vector of the target virtual object, ε is an adjustment parameter, floor(·) denotes rounding down, L_max is the preset maximum LOD value, and λ is a preset second threshold value.
Further, the method further comprises:
judging whether the difference of rendering LOD values between the boundaries of adjacent target blocks exceeds a preset threshold value;
if yes, determining the boundary rendering LOD value by interpolation:
LOD_b = LOD_low + t * (LOD_high - LOD_low)
wherein LOD_b is the interpolation result for the boundary, LOD_low and LOD_high are the low and high LOD values on the two sides of the boundary, and t is an interpolation parameter.
Specifically, refreshing the rendering data of each target block in the display buffer further includes:
establishing a first cache region and a second cache region;
taking the first buffer area as a current display frame buffer area and the second buffer area as a next display frame buffer area;
storing each rendered target block of the current display frame into the first buffer area;
multiplexing rendering target blocks which are unchanged between the current display frame and the next display frame, and storing the rendering target blocks in the second buffer area;
And re-rendering and storing the target block with the changed next display frame in the second buffer area.
Further, the method further comprises:
When the display of the next frame is started, the second buffer area is used as a new current display frame buffer area, and the first buffer area is used as a new next display frame buffer area;
emptying the first buffer area, and multiplexing the rendering target blocks that are unchanged between the current display frame and the new next display frame into the first buffer area;
re-rendering the new and changed target blocks of the new next display frame and storing them into the first buffer area;
and alternately and circularly storing rendering target block data of the first buffer area and the second buffer area.
The second aspect of the present application also provides a rendering processing apparatus, characterized in that the apparatus includes:
the building module is used for building a three-dimensional virtual scene model;
the partitioning module is used for partitioning the virtual scene into a plurality of blocks and marking each block;
The processing module is used for acquiring the position information and the speed vector information of the target virtual object in real time, and determining a target block and a corresponding rendering LOD value according to the position information and the speed vector information of the target virtual object;
The rendering loading module is used for rendering each target block according to each corresponding rendering LOD value and loading and storing the target blocks in the display buffer area;
And the display module is used for refreshing rendering data of each target block in the display buffer.
A third aspect of the application provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of any of the methods described above.
A fourth aspect of the application provides a computer terminal device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of any of the methods described above.
According to this optimization scheme, which fuses a dynamic LOD system with block-wise load/unload rendering, scene information is partitioned and managed with dynamic LOD; the target blocks and their rendering parameters are determined from the relative position of each block to the virtual target object and from the motion parameters of the virtual target object, and the corresponding block resources are dynamically loaded and unloaded. The scheme thus reduces the amount of rendering data to be processed and the consumption of resources while preserving rendering quality, and improves rendering efficiency and speed. Especially when a large-scale three-dimensional scene is processed, performance and resource use can be effectively balanced, providing a seamless user experience.
Furthermore, during display the first buffer area and the second buffer area alternately multiplex part of the rendered blocks, which further reduces GPU load and improves rendering and display efficiency and response speed.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flow diagram of a method of rendering objects in one embodiment;
FIG. 2 is a block diagram of an object rendering processing apparatus in one embodiment;
FIG. 3 is a block diagram of a computer device in one embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is noted that the terms "comprising," "including," and "having," and any variations thereof, in the description and claims of the application and in the foregoing figures, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. In the claims, specification, and drawings of the present application, relational terms such as "first" and "second", and the like are used solely to distinguish one entity/operation/object from another entity/operation/object without necessarily requiring or implying any actual such relationship or order between such entities/operations/objects.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In one embodiment, as shown in fig. 1, a flowchart of an object rendering method of the present application is shown, the method includes:
S10, constructing a three-dimensional virtual scene model.
Specifically, the individual objects in a scene such as a game scene, including buildings, terrain, vegetation, and props, may be created with 3D modeling software (e.g., 3ds Max, Maya, Blender); the modeling should pay attention to the proportions, shapes, and structures of the objects to ensure that they look realistic and conform to the overall style of the scene. A texture editor may be used to create or modify textures, which are then applied to the surfaces of the models; the maps enhance the realism of a model, making it look more vivid and detailed.
S11, dividing the virtual scene into a plurality of blocks, and marking each block.
Specifically, the size of a block may be determined by the scale and the level of detail in the virtual scene. The blocks may be regular cubes or irregular shapes, e.g., according to the size and complexity of the scene, divided into a plurality of logically or physically independent blocks, which may be natural divisions based on terrain, buildings, or other features, or may be divided according to a predetermined size. For example, in a game scenario, a boundary or bounding box (Bounding Box) may be defined for each tile to quickly determine if a player has entered a new tile.
After dividing the scene into a plurality of blocks, each block is defined with its position and boundaries in the world coordinate system; one embodiment of the application marks each block with the world coordinates of its center.
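As an illustrative sketch (not part of the patent text; the `Block` and `partition_scene` names are hypothetical), the blocking and marking of S11 can be realized as a uniform grid over the scene's bounding volume, each block marked by its grid index and the world coordinates of its center:

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: tuple   # (ix, iy, iz) grid index used as the block's mark
    center: tuple     # world coordinates of the block center
    aabb_min: tuple   # minimum corner of the block's bounding box
    aabb_max: tuple   # maximum corner of the block's bounding box

def partition_scene(scene_min, scene_max, block_size):
    """Divide the scene's AABB into regular cubic blocks of `block_size`."""
    blocks = {}
    counts = [max(1, int((scene_max[a] - scene_min[a]) // block_size))
              for a in range(3)]
    for ix in range(counts[0]):
        for iy in range(counts[1]):
            for iz in range(counts[2]):
                lo = tuple(scene_min[a] + i * block_size
                           for a, i in zip(range(3), (ix, iy, iz)))
                hi = tuple(c + block_size for c in lo)
                center = tuple((l + h) / 2 for l, h in zip(lo, hi))
                blocks[(ix, iy, iz)] = Block((ix, iy, iz), center, lo, hi)
    return blocks
```

A scene of 20 × 20 × 10 units with a block size of 10 yields a 2 × 2 × 1 grid of marked blocks.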
S12, acquiring the position information and the speed vector information of the target virtual object in real time.
Specifically, in a game the target virtual object may be the virtual character currently selected by the player. Game engines such as Unity and Unreal Engine provide a series of tools and APIs that help the developer control and monitor game objects.
In most game engines, each game object has a Transform component that contains the object's position, rotation, and scale. By accessing this component, the developer can obtain the current location of the object and track the position and movement direction of the player object.
A velocity vector describes the change of an object's position per unit time, and includes both the magnitude and the direction of the velocity. In one embodiment of the application, the velocity vector is calculated from the difference in position between two consecutive points in time, divided by the time interval.
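A minimal sketch of this finite-difference estimate (the helper name is hypothetical):

```python
def velocity_vector(prev_pos, curr_pos, dt):
    """Estimate the velocity vector as the per-axis position difference
    between two consecutive time points divided by the time interval."""
    if dt <= 0:
        raise ValueError("time interval must be positive")
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))
```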
S13, determining a target block and a corresponding rendering LOD value according to the target virtual object position information and the speed vector information.
Specifically, the target blocks are the set of blocks that need to be displayed on the screen under the current virtual object's position and view. LOD stands for Level of Detail: the virtual scene is rendered at different detail levels, and the rendering detail level of the objects in each target block is dynamically adjusted according to the parameters of that block relative to the virtual object, so that rendering resources can be effectively managed. The larger the LOD value of a target block, the higher its rendering detail.
The method comprises the following steps: s131, determining a target block according to the target virtual object position information; and S132, determining a rendering LOD value corresponding to the target block according to the target virtual object position information and the speed vector information.
In order to determine the target area to be displayed on the screen according to the current target virtual object position, the concept of a virtual camera needs to be introduced.
Specifically, in virtual scenes or games a "virtual camera" is a virtual concept for simulating the player's perspective and for capturing and rendering the scene in the game world; the camera is typically provided by the game engine and is an important component of the rendering pipeline. The camera's position and other parameters (such as view angle, near plane, and far plane) determine which objects in the game world will be rendered onto the screen.
Wherein S131, determining a target block according to the target virtual object position information, includes:
S1311, obtaining a current visual field boundary according to the position of the virtual object and the visual field parameters of the virtual camera.
In one embodiment of the present application, the field of view of the virtual object is mapped to the field of view of the virtual camera; to obtain the current field-of-view boundary of the virtual camera, the position of the virtual camera is mapped to the position of the virtual object. Obtaining the current field-of-view boundary specifically includes:
obtaining the preset field-of-view parameters of the virtual camera, including the camera's field of view (FOV), near clipping plane, and far clipping plane; obtaining the three-dimensional coordinates of the player virtual object by accessing the Transform component of the player character's virtual object; and, taking the current virtual object position as the position center of the virtual camera, calculating the field-of-view boundary of the virtual camera:
F(W) = 2 * tan(FOV/2) * D;
F(H) = F(W) / R;
where F(W) is the width of the field-of-view boundary, F(H) is its height, D is the distance to the clipping plane, and R is the aspect ratio, i.e., the width of the camera viewport divided by its height, generally determined by the resolution of the game window or screen. The distance from the camera to the furthest point of the field-of-view boundary can be adjusted according to the game design requirements.
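The two formulas above can be evaluated directly (a sketch assuming FOV is given in degrees and R = width / height; the function name is hypothetical):

```python
import math

def view_bounds(fov_deg, distance, aspect_ratio):
    """F(W) = 2 * tan(FOV/2) * D and F(H) = F(W) / R: the width and height
    of the view frustum's cross-section at the given distance."""
    width = 2.0 * math.tan(math.radians(fov_deg) / 2.0) * distance
    return width, width / aspect_ratio
```

For a 90° FOV, tan(45°) = 1, so the visible width at distance D is simply 2 * D.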
S1312, judging whether the current view boundary intersects each block; if so, determining that block as a target block.
Specifically, after determining the view boundary, in order to determine which blocks enter the view, the embodiment of the present application determines whether the current view boundary intersects each block, which specifically includes:
The field-of-view boundary and the block boundaries are converted into a volumetric representation amenable to intersection testing, such as an AABB (axis-aligned bounding box) or an OBB (oriented bounding box). A suitable intersection algorithm is then used to test whether the volume of the field-of-view boundary intersects the volume of each block; for AABBs, a simple box-box intersection test (Box-Box Intersection Test) can be used.
Further, since there may be a large number of blocks in the three-dimensional virtual scene, performing the intersection test directly on every block may degrade performance; a space-partitioning structure (such as an octree or quadtree) may be used to reduce the number of blocks to test, or a coarse intersection test may be performed first, for example testing only the relationship between the center point of each block and the view boundary.
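A sketch of the box-box test and the block selection described above (names hypothetical; each block is given as an (id, aabb_min, aabb_max) triple):

```python
def aabb_intersect(min_a, max_a, min_b, max_b):
    """Box-Box intersection test: two AABBs overlap iff their
    intervals overlap on every axis."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
               for i in range(3))

def select_target_blocks(view_min, view_max, blocks):
    """Return the ids of all blocks whose AABB intersects the
    view-boundary AABB; these become the target blocks."""
    return [bid for bid, bmin, bmax in blocks
            if aabb_intersect(view_min, view_max, bmin, bmax)]
```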
If they intersect, the block is determined as a target block: when the field-of-view boundary is found to intersect a block, that block is considered a "target block", and its identifier or other relevant information can be recorded for subsequent block logic, such as loading the block's resources, triggering events, or updating status.
Further, to save resource overhead and improve resource utilization, rendering efficiency, and response speed, the scheme of the application adopts a dynamic load/unload mechanism that loads and unloads blocks according to the player's position and field of view; when the player's virtual object moves, the target blocks are re-determined and the corresponding block data loaded or unloaded. Unloading a target block specifically includes:
S1313, judging whether the relative distance between each target block and the virtual object is greater than a preset first threshold; if so, unloading the LOD rendering resources of the corresponding block and releasing its memory.
Specifically, unloading a block involves freeing its resources in memory, including textures, meshes, illumination data, and the like. In the embodiment of the application, the determined target blocks are rendered and stored in a preset buffer area; to avoid the performance overhead caused by frequent loading and unloading, the block resources in the buffer area are actually unloaded only when the relative distance between the target block and the virtual object is greater than a preset first threshold. The first threshold can be set according to actual needs, for example using the distance of the field-of-view boundary as a reference. For each block, the distance between the center point of the block and the current position of the virtual object is calculated with the Euclidean distance formula.
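A sketch of this unload check (names hypothetical; `math.dist` computes the Euclidean distance between two points):

```python
import math

def blocks_to_unload(cached_blocks, object_pos, first_threshold):
    """Return the ids of cached blocks whose center lies farther from the
    virtual object than the preset first threshold; their LOD rendering
    resources can then be unloaded and their memory released."""
    return [bid for bid, center in cached_blocks
            if math.dist(center, object_pos) > first_threshold]
```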
Further, S132 determines a rendering LOD value corresponding to the target block according to the target virtual object position information and the speed vector information, including:
S1321, obtaining the vector P_i of the current position of each target block relative to the target virtual object.
Specifically, after the three-dimensional position coordinates Q of the target virtual object are obtained in real time, the vector of each target block's current position relative to the target virtual object is formed by taking the center coordinate Pi' of each target block as the starting point and the obtained position Q of the target virtual object as the end point. The vector P_i can be represented by the coordinate method, or converted from a geometric representation.
S1322, calculating a rendering LOD value corresponding to each target block according to the target virtual object position information and the speed vector information, wherein the rendering LOD value of each target block satisfies:
LOD_i = min( L_max, floor( L_max * (1 - max(λ, |P_i| + ε * (V · P_i) / |P_i|) / D_max) ) )
wherein LOD_i is the LOD value of the i-th target block, D_max is the maximum visible distance of the current device, P_i is the vector of the current position of the i-th target block relative to the target virtual object, |P_i| is its modulus, V is the current velocity vector of the target virtual object, ε is an adjustment parameter, floor(·) denotes rounding down, L_max is the preset maximum LOD value, and λ is a preset second threshold value.
This way of determining the rendering LOD value of each target block considers not only the relative distance of each target block to the target virtual object but also the current velocity vector of the virtual object, i.e., its current speed and direction of movement. Target blocks lying in the movement direction of the virtual object are given relatively larger LOD values, while target blocks lying away from the movement direction are given relatively smaller LOD values, so that the blocks ahead of the movement can be rendered more finely. Compared with statically determining the LOD value of each block from the relative distance alone, this dynamically corrects the LOD values in advance according to the movement speed and direction of the virtual object, reducing the amplitude of subsequent LOD changes caused by movement; it thereby reduces the amount of data to be processed and better matches the visual changes caused by the virtual object's movement.
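The original formula images are not reproduced in this text, so the exact expression is uncertain; the sketch below implements one plausible reading, in which the velocity term shifts the effective distance before a distance-proportional LOD is taken (the function name, the clamp with λ, and the form of the ε term are all assumptions). Because P_i points from the block toward the object, movement toward a block makes V · P_i negative, shrinking the effective distance and raising that block's LOD:

```python
import math

def render_lod(block_center, object_pos, velocity, d_max, l_max, eps, lam):
    # P_i: vector from the block center to the target virtual object
    p = tuple(q - c for c, q in zip(block_center, object_pos))
    p_norm = math.sqrt(sum(x * x for x in p)) or 1e-9
    dot = sum(v * x for v, x in zip(velocity, p))
    d_eff = max(lam, p_norm + eps * dot / p_norm)  # velocity-adjusted distance
    d_eff = min(d_eff, d_max)                      # cap at max visible distance
    return min(l_max, math.floor(l_max * (1.0 - d_eff / d_max)))
```

With the object stationary, a block at half the maximum visible distance gets half the maximum LOD; moving toward the same block raises its LOD.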
Further, in one embodiment, to solve the problem of visually unnatural transitions between target blocks caused by differences in rendering LOD values, a smoothing step is applied to ensure visual consistency between blocks of different LOD levels; the method further includes:
S1323, judging whether the difference of the rendering LOD values between the boundaries of the target blocks exceeds a preset threshold value;
S1324, if yes, determining the boundary rendering LOD value by interpolation:
LOD_b = LOD_low + t * (LOD_high - LOD_low)
wherein LOD_b is the interpolation result for the boundary, LOD_low and LOD_high are the low and high LOD values on the two sides of the boundary, and t is an interpolation parameter with t ∈ (0, 1).
With the above embodiments, the system dynamically adjusts the LOD according to the distance between each block and the virtual object and the block's state within the field of view. For target blocks about to enter the view, the system gradually increases the LOD to provide finer detail; for exiting target blocks, it gradually decreases the LOD to reduce resource consumption. Interpolation is used to transition LOD values smoothly, avoiding visually abrupt changes. The scheme thus provides a rich and smooth experience for the user while maintaining high rendering efficiency and well-optimized resource use.
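The boundary smoothing of S1323–S1324 reduces to standard linear interpolation; a minimal sketch (the function name is hypothetical):

```python
def boundary_lod(lod_low, lod_high, t):
    """Interpolated boundary LOD: LOD_b = LOD_low + t * (LOD_high - LOD_low),
    with the interpolation parameter t restricted to (0, 1)."""
    if not 0.0 < t < 1.0:
        raise ValueError("interpolation parameter t must lie in (0, 1)")
    return lod_low + t * (lod_high - lod_low)
```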
And S14, rendering each target block according to each corresponding rendering LOD value, and loading and storing the target blocks in a display buffer area.
Specifically, in one embodiment of the present application, when rendering the target blocks, a rendering queue is first created and ordered according to block priority (such as distance from the camera and importance); the rendering queue is guaranteed to contain only the blocks that currently need to be rendered, and illumination is calculated per target block, which saves rendering resources. When rendering is executed, each target block is rendered in queue order, the block's 3D coordinates are converted to screen space using the view and projection matrices, and the rendered target block data is loaded and stored into the corresponding display buffer area. This specifically includes the following steps:
S141, establishing a first buffer area and a second buffer area.
The scheme of the application adopts a multi-buffer strategy to adapt to the display position of each target block on the screen, determined from the current virtual object's movement direction, and multiplexes rendered blocks that have not changed; this saves rendering work, reduces unnecessary resource consumption, and improves response speed.
S142, taking the first buffer area as a current display frame buffer area and the second buffer area as a next display frame buffer area.
S143, storing each rendered target block of the current display frame into the first buffer.
S144, multiplexing rendering target blocks which are unchanged between the current display frame and the next display frame, and storing the rendering target blocks in the second buffer area.
And S145, re-rendering and storing the target block with the changed next display frame in the second buffer area.
Specifically, steps S142–S145 are implemented as follows: at initial execution, the first and second buffer areas are emptied; the first buffer area serves as the current display-frame buffer and the second buffer area as the next display-frame buffer. After the target blocks of the current display frame are determined and rendered, their rendering results are stored in the first buffer area; the target blocks to be displayed in the next frame and their rendering information are then determined, and for target blocks whose rendering is unchanged, the rendering results can be multiplexed directly from the first buffer area into the second buffer area without re-rendering. New target blocks of the next display frame, and cached target blocks whose rendering information has changed, are re-rendered and stored into the second buffer area.
Then, when the next frame's display starts, the second buffer area becomes the new current display-frame buffer, the first buffer area becomes the new next display-frame buffer, and the first buffer area is emptied; rendering target blocks that are unchanged between the current display frame and the new next display frame are multiplexed and stored in the first buffer area, while new and changed target blocks of the new next display frame are re-rendered and stored into the first buffer area. The first and second buffer areas thus store rendering target block data alternately and cyclically.
In this way, one buffer area multiplexes the target blocks of the previous frame and holds the latest rendering results, while the other buffer area supplies the rendering results of the target blocks of the current display frame. This adapts well to determining the display position of each target block on screen according to the movement direction of the current virtual object: already-rendered unchanged blocks are fully multiplexed, which greatly reduces the amount of rendering data processed, avoids unnecessary resource consumption, and improves response speed.
S15, refreshing rendering data of each target block in the display buffer.
Specifically, the method of the present application displays the rendering result data of the target blocks in the first buffer area and the second buffer area by refreshing them alternately. After the rendering results of the target blocks of each frame to be displayed are stored in the corresponding buffer area, the display can be refreshed by invoking the refresh function, or the developer can enable vertical synchronization (VSync) so that buffer updates match the refresh rate of the display. The rendering target block data is read alternately from the first buffer area and the second buffer area and refreshed for display.
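The effect of pacing refreshes to the display can be illustrated with a crude software stand-in for VSync (the real mechanism would be the platform's swap/VSync API; `refresh_hz` and the frame payloads are assumptions, not values from the application):

```python
import time

def run_display_loop(frames, refresh_hz=60.0):
    """Refresh each prepared frame at most once per display interval —
    a crude software stand-in for enabling VSync; not a real driver call."""
    interval = 1.0 / refresh_hz
    shown = []
    deadline = time.monotonic()
    for frame in frames:
        now = time.monotonic()
        if now < deadline:
            time.sleep(deadline - now)   # wait for the next "vblank"
        shown.append(frame)              # read the buffer and refresh the display
        deadline += interval
    return shown
```

In practice one would set the swap interval (e.g. via the windowing or graphics API) rather than sleeping manually; the sketch only shows why matching the display's refresh rate avoids presenting a buffer mid-update.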
According to the scheme of the embodiments of the application, scene information is managed in blocks and adjusted through dynamic LOD: rendering parameters are determined dynamically by combining the motion parameters of the virtual target object with the relative positional relationship between each target block and the virtual target object, and block resources are loaded and unloaded dynamically. This reduces the amount of rendered data to be processed while preserving rendering quality, lowers resource consumption, and improves rendering efficiency and speed. In particular, when processing large-scale three-dimensional scenes, performance and resource usage are effectively balanced, providing a seamless user experience.
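One plausible shape for a distance- and speed-dependent LOD rule of the kind described above is sketched below. This exact formula is an illustrative assumption, not the application's equation; `d_max`, `lod_max`, and `eps` are hypothetical parameters standing in for the maximum visual distance, the preset maximum LOD value, and the adjustment parameter.

```python
import math

def lod_for_block(block_center, obj_pos, velocity,
                  d_max=1000.0, lod_max=5, eps=0.05):
    """Illustrative dynamic-LOD rule: detail falls off linearly with the
    block's distance from the virtual object and drops further when the
    object is moving fast (fast motion tolerates coarser detail)."""
    d = [b - o for b, o in zip(block_center, obj_pos)]
    dist = math.sqrt(sum(c * c for c in d))            # |d_i|
    speed = math.sqrt(sum(c * c for c in velocity))    # |v|
    if dist >= d_max:
        return 0                                       # beyond the visual range
    raw = lod_max * (1.0 - dist / d_max) - eps * speed
    return max(0, min(lod_max, math.floor(raw)))       # round down and clamp
```

For example, a block at the object's position gets the maximum LOD, a block at half the visual distance gets roughly half, and blocks beyond the visual distance get LOD 0 (candidates for unloading, as in the first-threshold check described later).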
In one embodiment, as shown in fig. 2, a second aspect of the present application provides a rendering processing apparatus, the apparatus comprising:
the building module is used for building a three-dimensional virtual scene model;
the partitioning module is used for partitioning the virtual scene into a plurality of blocks and marking each block;
The processing module is used for acquiring the position information and the speed vector information of the target virtual object in real time, and determining a target block and a corresponding rendering LOD value according to the position information and the speed vector information of the target virtual object;
The rendering loading module is used for rendering each target block according to each corresponding rendering LOD value and loading and storing the target blocks in the display buffer area;
And the display module is used for refreshing rendering data of each target block in the display buffer.
In one embodiment, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
constructing a three-dimensional virtual scene model;
dividing the virtual scene into a plurality of blocks, and marking each block;
Acquiring position information and speed vector information of a target virtual object in real time;
determining a target block and a corresponding rendering LOD value according to the target virtual object position information and the speed vector information;
rendering each target block according to each corresponding rendering LOD value, and loading and storing rendering results to a display buffer area;
and refreshing the rendering data of each target block in the display buffer.
In one embodiment, the application proposes a computer device as shown in fig. 3, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
constructing a three-dimensional virtual scene model;
dividing the virtual scene into a plurality of blocks, and marking each block;
Acquiring position information and speed vector information of a target virtual object in real time;
determining a target block and a corresponding rendering LOD value according to the target virtual object position information and the speed vector information;
rendering each target block according to each corresponding rendering LOD value, and loading and storing rendering results to a display buffer area;
and refreshing the rendering data of each target block in the display buffer.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they should not thereby be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (8)
1. An object rendering method, the method comprising:
constructing a three-dimensional virtual scene model;
dividing the virtual scene into a plurality of blocks, and marking each block;
Acquiring position information and speed vector information of a target virtual object in real time;
determining a target block and a corresponding rendering LOD value according to the target virtual object position information and the speed vector information;
rendering each target block according to each corresponding rendering LOD value, and loading and storing rendering results to a display buffer area;
refreshing rendering data of each target block in the display buffer;
Specifically, the determining the target block and the corresponding rendering LOD value according to the target virtual object position information and the speed vector information includes:
obtaining the vector d_i of the current position of each target block relative to the target virtual object;
calculating a rendering LOD value corresponding to each target block, wherein the rendering LOD value corresponding to each target block satisfies a relation in which: LOD_i is the LOD value of the i-th target block, D_max is the maximum visual distance of the current device, d_i is the vector of the current position of the i-th target block relative to the target virtual object, |d_i| is the modulus of d_i, v is the current velocity vector of the target virtual object, ε is an adjustment parameter, ⌊·⌋ denotes rounding down, LOD_max is a preset maximum LOD value, and λ is a preset second threshold;
Specifically, the loading and storing the rendering result in the display buffer area includes: establishing a first cache region and a second cache region; taking the first buffer area as a current display frame buffer area and the second buffer area as a next display frame buffer area; storing each rendered target block of the current display frame into the first buffer area; multiplexing rendering target blocks which are unchanged between the current display frame and the next display frame, and storing the rendering target blocks in the second buffer area; and re-rendering and storing the target block with the changed next display frame in the second buffer area.
2. The method of claim 1, wherein the determining the target block comprises:
obtaining a current visual field boundary according to the position of the virtual object and visual field parameters of the virtual camera;
and judging whether each block is intersected with the current view boundary, and if so, determining the block as a target block.
3. The method according to claim 2, wherein the method further comprises:
Judging whether the relative distance between each target block and the target virtual object is greater than a preset first threshold value or not;
and if so, unloading the LOD rendering resources of the corresponding block and releasing the memory of the corresponding block.
4. The method according to claim 1, wherein the method further comprises:
Judging whether the difference of rendering LOD values between the boundaries of each target block exceeds a preset threshold value;
If yes, determining the boundary rendering LOD value by interpolation:
LOD_boundary = (1 − t) · LOD_low + t · LOD_high
wherein LOD_boundary is the boundary interpolation result, LOD_low and LOD_high are respectively the low LOD value and the high LOD value corresponding to the boundary, and t is an interpolation parameter.
5. The method according to claim 1, wherein the method further comprises:
When the display of the next frame is started, the second buffer area is used as a new current display frame buffer area, and the first buffer area is used as a new next display frame buffer area;
The first buffer area is emptied, and the rendering target block which is unchanged between the current display frame and the new next display frame is multiplexed and stored in the first buffer area;
re-rendering and storing a new target block of a new next display frame and a changed target block into the first buffer area;
and alternately and circularly storing rendering target block data of the first buffer area and the second buffer area.
6. A rendering processing device for applying the method of any one of claims 1 to 5, the device comprising:
the building module is used for building a three-dimensional virtual scene model;
the partitioning module is used for partitioning the virtual scene into a plurality of blocks and marking each block;
The processing module is used for acquiring the position information and the speed vector information of the target virtual object in real time, and determining a target block and a corresponding rendering LOD value according to the position information and the speed vector information of the target virtual object;
The rendering loading module is used for rendering each target block according to each corresponding rendering LOD value and loading and storing the target blocks in the display buffer area;
And the display module is used for refreshing rendering data of each target block in the display buffer.
7. A computer readable storage medium, characterized in that a computer program is stored, which, when being executed by a processor, causes the processor to perform the steps of the method according to any of claims 1 to 5.
8. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410452673.4A CN118052923B (en) | 2024-04-16 | 2024-04-16 | Object rendering method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118052923A CN118052923A (en) | 2024-05-17 |
CN118052923B true CN118052923B (en) | 2024-07-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |