CN114627221B - Scene rendering method and device, runtime device and readable storage medium - Google Patents

Scene rendering method and device, runtime device and readable storage medium

Info

Publication number: CN114627221B
Application number: CN202111487651.4A
Authority: CN (China)
Prior art keywords: dynamic, scene, rendering, determining, grid
Legal status: Active (granted)
Original language: Chinese (zh)
Other versions: CN114627221A
Inventors: 谢成鸿, 王亚伟, 胡高, 马裕凯, 李嵘
Assignee: Beijing Lanya Box Technology Co., Ltd.
Priority/filing date: 2021-12-08
Publication of CN114627221A: 2022-06-14
Publication of CN114627221B (grant): 2023-11-10


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation

Abstract

The embodiment of the application discloses a scene rendering method, which comprises the following steps: determining, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, the dynamic grid region being movable as the camera view frustum moves in the scene; determining, according to the position of the protagonist in the scene and the dynamic grid region, the corresponding model files for rendering the models within the dynamic grid region; and rendering the models in the dynamic grid region of the scene according to the model files. The technical scheme optimizes view frustum culling, reduces computation pressure, and improves rendering efficiency.

Description

Scene rendering method and device, runtime device and readable storage medium
Technical Field
The present application relates to the field of scene rendering technologies, and in particular, to a scene rendering method and apparatus, a runtime device, and a readable storage medium.
Background
In graphics rendering for games, maps, and the like, the rendering of large scenes is frequently encountered. A large scene generally contains a great number of models, and if all the models in the scene are submitted to the GPU for rendering, the pressure on the graphics card increases and performance drops, so this approach cannot be applied to rendering with high real-time requirements.
In this regard, one approach to scene rendering introduces the concept of "scene management": only the models within the camera view frustum (see FIG. 1; in 3D rendering the human eye is likened to a camera, and the camera's visual range forms a cone of sight, i.e., the protagonist camera view frustum) are submitted to the GPU for rendering. This greatly reduces the number of objects rendered in the scene and helps improve rendering performance. However, the approach adds a new step: the relationship between each model in the scene and the view frustum must be determined, i.e., the bounding boxes of all models in the scene are traversed to determine whether each lies inside or outside the frustum. If a model is within the camera view frustum, it is added to the rendering queue and submitted to the graphics card for rendering; if outside, it is discarded in the current rendering pass. Thus, although this approach greatly reduces the number of rendered objects, its way of determining the relationship between every model in the scene and the frustum is quite "rough": when the scene contains many models, it obviously places considerable resource-utilization pressure on the GPU.
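By way of illustration only (the following sketch is not part of the patent text, and every type and helper name in it is an assumption), the conventional whole-scene traversal described above might look like this:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };        // axis-aligned bounding box of one model
struct Plane { Vec3 n; float d; };     // plane dot(n, p) + d = 0, n pointing into the frustum
struct Frustum { Plane planes[6]; };   // six inward-facing planes

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Conservative AABB-vs-frustum test ("positive vertex" method): the box is
// culled only if it lies entirely behind at least one frustum plane.
bool intersectsFrustum(const AABB& box, const Frustum& fr) {
    for (const Plane& p : fr.planes) {
        Vec3 v{ p.n.x >= 0 ? box.max.x : box.min.x,
                p.n.y >= 0 ? box.max.y : box.min.y,
                p.n.z >= 0 ? box.max.z : box.min.z };
        if (dot(p.n, v) + p.d < 0) return false;   // fully outside this plane
    }
    return true;                                    // inside or straddling: keep it
}

// The "rough" prior-art step: every model in the scene is tested every frame.
std::vector<int> buildRenderQueue(const std::vector<AABB>& allModels, const Frustum& fr) {
    std::vector<int> queue;
    for (int i = 0; i < (int)allModels.size(); ++i)
        if (intersectsFrustum(allModels[i], fr))
            queue.push_back(i);                     // later submitted to the graphics card
    return queue;
}
```

The cost of this loop grows linearly with the total number of models, regardless of how few are actually visible, which is exactly the pressure the embodiments below aim to remove.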
Disclosure of Invention
The embodiments of the application provide a scene rendering method and device, a runtime device, and a readable storage medium, which are used to solve or mitigate the above problems in prior-art scene rendering.
In one aspect, the scene rendering method provided by an embodiment of the application comprises the following steps:
determining, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, the dynamic grid region being movable as the camera view frustum moves in the scene;
determining, according to the position of the protagonist in the scene and the dynamic grid region, the corresponding model files for rendering the models within the dynamic grid region;
and rendering the models in the dynamic grid region of the scene according to the model files.
Preferably, the scene is divided into a number of static cells, each static cell is associated with a corresponding model file, and the static cells are the same size as the dynamic cells; the determining of the corresponding model files for rendering the models within the dynamic grid region according to the position of the protagonist in the scene and the dynamic grid region specifically comprises:
determining the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region;
and determining the respective model files corresponding to those static cells.
Preferably, each pose of the protagonist camera view frustum corresponding to a horizontal angle at a predetermined first interval and/or a vertical angle at a predetermined second interval is pre-associated with the corresponding dynamic cells in the dynamic grid region; the determining of the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region specifically comprises:
determining the current pose, i.e., the horizontal angle and/or vertical angle, of the current protagonist camera view frustum according to the position of the protagonist in the scene;
determining, according to the current pose and the corresponding dynamic cells pre-associated with that pose, the dynamic cells within the camera view frustum in the dynamic grid region;
and determining the static cells covered by those dynamic cells in the scene according to the dynamic cells within the camera view frustum.
Preferably, the method further comprises: setting a pose key value for each pose corresponding to a horizontal angle at the predetermined first interval and a vertical angle at the predetermined second interval, associating the corresponding dynamic cells with each pose, and caching them in an array; the determining of the dynamic cells within the camera view frustum in the dynamic grid region according to the current pose and the corresponding dynamic cells pre-associated with that pose specifically comprises:
determining the pose key value corresponding to the current pose;
and querying the array according to the pose key value to determine the dynamic cells within the camera view frustum in the dynamic grid region.
Preferably, the method further comprises:
when the dynamic grid region containing the protagonist camera view frustum is determined in the scene, determining a rendering level for each dynamic cell according to the distance between the ray origin of the protagonist camera view frustum and that dynamic cell in the dynamic grid region, different rendering levels having different rendering fineness; the rendering of the models in the dynamic grid region of the scene according to the model files then specifically comprises:
rendering the models in the dynamic grid region of the scene according to the model files and the rendering levels.
In another aspect, an embodiment of the present application further provides a scene rendering device, the device comprising: a grid region determining unit, a model file determining unit, and a rendering unit, wherein:
the grid region determining unit is used for determining, in a scene, a dynamic grid region containing the protagonist camera view frustum, the dynamic grid region being movable as the camera view frustum moves in the scene;
the model file determining unit is used for determining, according to the position of the protagonist in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region;
and the rendering unit is used for rendering the models in the dynamic grid region of the scene according to the model files.
Preferably, the scene is divided into a number of static cells, each static cell is associated with a corresponding model file, and the static cells are the same size as the dynamic cells; the model file determining unit comprises a static cell determining subunit and a model file determining subunit, wherein:
the static cell determining subunit is used for determining the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region;
and the model file determining subunit is used for determining the respective model files corresponding to those static cells.
Preferably, each pose of the protagonist camera view frustum corresponding to a horizontal angle at a predetermined first interval and/or a vertical angle at a predetermined second interval is pre-associated with the corresponding dynamic cells in the dynamic grid region, and
the static cell determining subunit is used for determining the current pose, i.e., the horizontal angle and/or vertical angle, of the current protagonist camera view frustum according to the position of the protagonist in the scene; and
determining, according to the current pose and the corresponding dynamic cells pre-associated with that pose, the dynamic cells within the camera view frustum in the dynamic grid region; and
determining the static cells covered by those dynamic cells in the scene according to the dynamic cells within the camera view frustum.
In still another aspect, an embodiment of the present application provides a runtime device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method described above.
In yet another aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method described above.
The above at least one technical scheme adopted by the embodiments of the application can achieve the following beneficial effects: view frustum culling is greatly optimized, since only the dynamic magnetic-field grid region needs to be traversed and only the bounding boxes in that region are tested against the camera view frustum to judge what needs to be rendered. If a dynamic cell is outside the frustum's field of view, none of the static-scene cells corresponding to it undergo frustum detection, and they are not rendered; if a dynamic cell is within the frustum's field of view, whether to perform frustum detection on the cells in the static scene can be decided as needed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of the protagonist camera view frustum during rendering;
FIG. 2 is a schematic diagram of the relationship between the camera view frustum and the models in a scene;
FIG. 3 is a schematic flow chart of a rendering method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a "dynamic magnetic-field grid" according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the relationship between the dynamic magnetic-field grid and the camera view frustum according to an embodiment of the present application;
FIG. 6 is a schematic diagram of rendering levels in an embodiment of the application;
FIG. 7 is a schematic diagram of an example structure of a rendering device according to the present application;
FIG. 8 is a schematic structural diagram of a runtime device according to an embodiment of the present application.
Detailed Description
In application scenes such as game pictures, the number of objects in a picture is often large, and the objects are better presented after being rendered. To achieve rendering, the models in the scene need to be identified. As described in the background above, in order to reduce the number of rendered objects, only the model objects within the view frustum may be rendered; this, however, adds the judgment of the relationship between the frustum and the model objects. The area of the frustum may not be large, but the number of models in the scene may be very large, and a computer can detect whether a given model lies within the frustum only by "calculation", so all the models in the scene must still be compared against the frustum. With a large number of models, this still places resource-utilization pressure on the GPU.
In order to better explain the problems in the prior art, and so present the technical scheme of the embodiments of the application clearly, a detailed description is given below with reference to the accompanying drawings. Referring to FIG. 2, the figure shows the region of the camera view frustum and the models in the scene. A scene contains various regular or irregular objects, called models. Because of how computers process geometry, each model is enclosed by a "bounding box" that covers it and presents its outline; depending on the case, the bounding volume may appear as a sphere, cylinder, capsule, or other shape. The region where the frustum lies is the intersection plane of the frustum on the scene plane, and within the region of this intersection plane three relationships are possible: first, the bounding box of a model is entirely inside the plane; second, the bounding box is entirely outside the plane; third, the model straddles the frustum plane. The first and third kinds are incorporated into the rendering sequence, while the second kind need not be rendered. Determining the relationship between the view plane and the models in the scene is therefore the key to solving the problem of the large rendering quantity. As described above, in the prior art all models are traversed when the objects to be rendered are loaded, so that the objects to be rendered are "screened" out for rendering. This approach still consumes significant computer resources, and in particular, the necessity of testing models that are very far from the frustum plane is doubtful.
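Purely as an illustration (reusing the assumed Vec3/AABB/Plane/Frustum types and dot helper from the sketch above; none of this is the patent's own code), the three relationships can be computed with the classic positive-vertex/negative-vertex pair:

```cpp
enum class FrustumRelation { Inside, Outside, Intersecting };

// Outside: fully behind some plane. Intersecting: some plane cuts the box.
// Inside: in front of all six planes. Inside and Intersecting get rendered.
FrustumRelation classify(const AABB& box, const Frustum& fr) {
    bool straddles = false;
    for (const Plane& p : fr.planes) {
        Vec3 pos{ p.n.x >= 0 ? box.max.x : box.min.x,   // vertex most "inside"
                  p.n.y >= 0 ? box.max.y : box.min.y,
                  p.n.z >= 0 ? box.max.z : box.min.z };
        Vec3 neg{ p.n.x >= 0 ? box.min.x : box.max.x,   // vertex most "outside"
                  p.n.y >= 0 ? box.min.y : box.max.y,
                  p.n.z >= 0 ? box.min.z : box.max.z };
        if (dot(p.n, pos) + p.d < 0) return FrustumRelation::Outside;
        if (dot(p.n, neg) + p.d < 0) straddles = true;  // plane cuts the box
    }
    return straddles ? FrustumRelation::Intersecting : FrustumRelation::Inside;
}
```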
The dilemma faced by the prior art is precisely where the present application directs its effort. Referring to FIG. 3 and FIG. 4, FIG. 3 shows the flow of a technical solution according to an embodiment of the present application, and FIG. 4 shows a concept important to that solution: the "dynamic magnetic-field grid". To achieve fine-grained management, in the embodiments of the present application a scene is divided into a number of small "cells" of equal or unequal size, and the bounding box of each model may occupy one or more cells. Of course, the "grid" here may consist of actual grid lines, or may be a virtual grid characterized only by grid data. A scene serves "activities". In a scene there must be some "activity", and these activities have a "protagonist". The protagonist can take several forms. One possibility is a "real" character in a game: the character's line of sight is the protagonist's line of sight, the player's attention follows the character's movement, and the in-game protagonist's line of sight coincides with the player's line of sight. Another possibility is that there is no real protagonist character, and the models in the scene are observed from the view angle of the person operating the computer; for example, a scene always has a viewing angle onto the game, and the direction of that viewing angle is the direction of the protagonist's line of sight.
After the protagonist's line of sight (the camera view frustum) is determined, step S301 is performed: a grid region of a predetermined size, centered on the protagonist's position within the camera view frustum, is constructed in the scene. From a two-dimensional perspective, the extent of the grid region is generally preferably equal to or greater than the area of the current camera view frustum on the scene plane; from a three-dimensional perspective, the grid region preferably contains the furthest distance the camera view frustum extends into the scene. The size of the grid region can thus be adjusted dynamically; for example, the grid region may consist of 64×64 cells in the current rendering operation and of 32×32 cells in the next rendering operation. The "grid region" here differs from an ordinary static grid, which merely partitions the entire scene. Its main characteristic is that it is dynamic: it is always centered on the protagonist, and in whichever direction the protagonist moves, the grid region moves as well, forming the protagonist's "sphere of influence". Like a magnet, it clings tightly to the protagonist, and it may therefore be called a "dynamic magnetic-field grid". The word "dynamic" in this name marks the difference from a static grid: a static grid is a fixed grid division of one scene (it may change in the next scene) and, during one rendering activity, generally does not change as the protagonist moves about that scene, whereas the dynamic grid follows the protagonist like a shadow, moving as the protagonist moves. The words "magnetic field" in this name indicate its attachment to the protagonist: like two magnets, the two do not separate during the current scene rendering. That the dynamic magnetic-field grid moves with the protagonist does not, however, mean it is exactly the same at every position of the protagonist in the scene; in practice the dynamic magnetic-field grid (grid region) may be the same or different. If the models in the scene are numerous and densely packed, the grid region can be smaller; conversely, it can be larger. Of course, keeping the dynamic magnetic-field grid the same for every protagonist position is beneficial to the computer's information processing.
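A minimal sketch of constructing such a region (assuming, for illustration, square cells of edge length cellSize on the scene plane and a square region of (2*radius+1)×(2*radius+1) cells; the names are not from the patent):

```cpp
#include <cmath>
#include <vector>

struct GridCoord { int x, y; };

// Collect the coordinates of the cells forming the dynamic magnetic-field
// grid centred on the protagonist. radius = 1 yields the 3x3 region of the
// FIG. 5 example; rebuilding with a different radius models the
// 64x64 -> 32x32 resizing mentioned above.
std::vector<GridCoord> dynamicGridRegion(float protagonistX, float protagonistZ,
                                         float cellSize, int radius) {
    const int cx = (int)std::floor(protagonistX / cellSize);
    const int cy = (int)std::floor(protagonistZ / cellSize);
    std::vector<GridCoord> region;
    region.reserve((2 * radius + 1) * (2 * radius + 1));
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx)
            region.push_back({ cx + dx, cy + dy });   // always protagonist-centred
    return region;
}
```

Recomputing this set whenever the protagonist crosses a cell boundary is what makes the region "follow the protagonist like a shadow".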
After the protagonist's dynamic magnetic-field grid has been determined and model rendering is required, step S302 may be carried out: according to the position of the protagonist in the scene and the dynamic grid region, determine the corresponding scene block files (model files) of the models covered by the dynamic magnetic-field grid; and step S303: load those block files to render the models in the dynamic grid region of the scene. While the protagonist is moving, the scene block files at the protagonist's original position can be unloaded at the same time. As the protagonist keeps moving, this dynamic loading and unloading proceeds continuously, so seamless loading and unloading of an oversized scene can be realized. To illustrate this process, refer again to the figures. FIG. 5 shows how the dynamic magnetic-field grid occupies the static cells into which the scene was divided in advance. In the figure, an oversized scene is divided into 19×19 cells; in this divided scene, the current protagonist has moved to cell (7, 6), and a grid region of 3×3 cells is formed around the protagonist, containing 9 static cells in total, D1, D2, ..., D8, and D9. If the protagonist moves elsewhere, the grid region centered on it moves along with it. Within this grid region, the cells intersected by the camera view frustum are mainly D1, D2, D4, and D5. During rendering, then, first, the cells outside the grid region can be excluded; these form the large majority of the scene, and the models in the excluded cells are not within the visible range and can be ignored, since they do not influence the presentation of models in the present protagonist view, which is equivalent to their not being seen at all. Second, even the cells within the dynamic magnetic-field grid region do not all need to be rendered; only the four cells located within the view frustum (D1, D2, D4, D5) must necessarily be rendered. The embodiment of the application thus reduces the "huge project" of globally traversing the 19×19 cells of the scene to dynamically detect whether each model to be rendered lies within the frustum, down to the 3×3 cells of the grid region, and finally determines that only the models in 4 cells are rendered, greatly reducing the number of rendered models and markedly improving rendering efficiency.
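A hedged sketch of the seamless load/unload described above (reusing the assumed GridCoord type; loadChunk/unloadChunk stand in for whatever file I/O the engine actually uses, and the naming is an assumption, not the patent's API):

```cpp
#include <set>
#include <utility>
#include <vector>

class ChunkStreamer {
public:
    // Called whenever the dynamic magnetic-field grid moves: load the scene
    // block files newly covered by the region, unload the ones left behind.
    void update(const std::vector<GridCoord>& region) {
        std::set<std::pair<int, int>> wanted;
        for (const GridCoord& c : region) wanted.insert({ c.x, c.y });

        for (auto it = loaded_.begin(); it != loaded_.end(); ) {
            if (!wanted.count(*it)) {
                unloadChunk(it->first, it->second);   // protagonist moved away
                it = loaded_.erase(it);
            } else ++it;
        }
        for (const auto& c : wanted) {
            if (loaded_.insert(c).second)
                loadChunk(c.first, c.second);         // newly covered cell
        }
    }

private:
    std::set<std::pair<int, int>> loaded_;            // cells whose files are resident
    void loadChunk(int x, int y)   { /* read the cell's model file, create its models */ }
    void unloadChunk(int x, int y) { /* destroy the cell's models, release the file */ }
};
```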
As described above, the embodiment of the application greatly optimizes frustum culling: only the dynamic magnetic-field grid region needs to be traversed, and only the bounding boxes in that region are tested against the camera view frustum to judge what needs to be rendered. If a dynamic cell is outside the frustum's field of view, none of the static-scene cells corresponding to it undergo frustum detection, and they are not rendered; if a dynamic cell is within the frustum's field of view, whether to perform frustum detection on the cells in the static scene can be decided as needed. "As needed" depends on the actual efficiency being pursued: if performing frustum detection would greatly reduce the rendering quantity, the work is worth doing, especially when the number of models within the frustum is small; if performing frustum detection would contribute little to reducing the rendering quantity, the necessity of doing that work is weak, and the whole grid-region range can be rendered directly.
The above solution fully expresses the core content of the application, but on this basis further optimization is entirely feasible in order to raise efficiency even more significantly. The above embodiments consider that, in practical applications, the size of the dynamic magnetic-field grid is fixed and its position relative to the protagonist is also fixed; for example, as described above, a 3×3-cell dynamic magnetic-field grid is constructed centered on the protagonist's position, and wherever the protagonist moves, the magnetic-field grid region is unchanged relative to the protagonist. However, even though the protagonist's "position" relative to the magnetic-field grid region may be unchanged, the protagonist's camera view frustum within the region may change, so the cells rendered in each pass may differ, although they remain limited to the "dynamic magnetic-field grid" range. To optimize further, based on the relation between the dynamic magnetic-field grid and the camera view frustum being determined at initialization, a KEY value may be set at every interval (for example, every 10°) of the horizontal angle (0-360°) and the vertical angle (0-180°) of the camera view frustum, and the dynamic magnetic-field cells within the camera frustum corresponding to each KEY value are all cached into an array. When the camera is operated and the protagonist's view angle changes, the current pose of the camera frustum is determined, the corresponding KEY value is determined, and the covered dynamic magnetic-field cells are looked up in the pre-stored array by that KEY value. If they are cached, the models in the stored dynamic cells can be rendered directly; if not, the cells covered by the current frustum are determined by calculation, and the models in those cells are then rendered. Because every state of the camera view frustum can be cached at the preset interval, the corresponding array entry can basically always be found, the cells (i.e., the models within them) to be rendered are located quickly, and rendering efficiency is greatly improved.
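One way such a pose-key cache could be sketched (the key layout, the 10° step, and the helper names are illustrative assumptions, again reusing the GridCoord type from above):

```cpp
#include <unordered_map>
#include <vector>

struct PoseCache {
    static constexpr int kStepDeg = 10;               // quantisation interval

    // Horizontal angle 0-360 deg and vertical angle 0-180 deg collapse into
    // 36 x 18 pose buckets, each with a unique integer KEY.
    static int key(float horizDeg, float vertDeg) {
        const int h = (int)(horizDeg / kStepDeg) % 36;
        const int v = (int)(vertDeg  / kStepDeg) % 18;
        return h * 18 + v;
    }

    // Return the dynamic cells covered by the frustum in this pose, computing
    // and memoising them on a cache miss.
    const std::vector<GridCoord>& coveredCells(float horizDeg, float vertDeg) {
        const int k = key(horizDeg, vertDeg);
        auto it = table_.find(k);
        if (it == table_.end())
            it = table_.emplace(k, computeCoveredCells(horizDeg, vertDeg)).first;
        return it->second;
    }

private:
    std::unordered_map<int, std::vector<GridCoord>> table_;
    std::vector<GridCoord> computeCoveredCells(float, float) {
        return {};   // placeholder: the real frustum-vs-cell test goes here
    }
};
```

Because the bucket count is small (648 poses at a 10° step), the whole table can equally be filled eagerly at initialization, as the paragraph above suggests.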
In the foregoing optimization directions, the models within the grid region are not differentiated. In fact, in practical applications, relative to the camera view frustum some models are inevitably far from the frustum's ray origin and some are close to it. Admittedly, models at different distances produce different "images" in the protagonist's line of sight, and since the images differ, their rendering may differ as well. This is a new direction of optimization: determining the rendering level based on how near or far a model is from the camera view frustum. Referring to FIG. 6, of three identical sheep, the nearer one is naturally seen more clearly, while the farther one is distinctly blurred. The near one should be rendered carefully for higher definition, while for the far one only a rough outline needs to be rendered. For example, in the figure, when the ray origin of the camera view frustum is 5 meters from the sheep, the sheep is rendered at the leftmost model fineness; at 30 meters, at the middle model fineness; and at 100 meters, at the rightmost model fineness. Thus, when the dynamic magnetic-field grid is generated, the rendering level of each dynamic magnetic-field cell (which in effect characterizes the models in the cell) can be computed once according to the distance between the models and the ray origin of the camera view frustum (for example, determining the rendering level according to the bounding-box size), and each model in the grid region is processed in layers when loaded: when a cell's rendering level in the "dynamic magnetic-field grid" is 1, models whose bounding-box volume is XXX m³ or more use LOD level XX; when a cell's rendering level in the "dynamic magnetic-field grid" is 2, models whose bounding-box volume is several times XX m³ use LOD level XX. In this way, the operation of computing the rendering level is completed once, according to the camera frustum distance, when the dynamic magnetic-field grid is determined, instead of dynamically computing the rendering level from the camera frustum distance and the model's bounding box for every frame in which a model enters the visible range. The rendering level does not change during the rendering state, resource loss is greatly reduced, the CPU need not compute dynamically at run time, and the rendering pressure on the GPU is reduced.
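A minimal sketch of this one-time level assignment (the 5 m / 30 m / 100 m thresholds merely echo the sheep example; real thresholds and the bounding-box-volume rule would come from the project's LOD configuration):

```cpp
// Computed once when the dynamic magnetic-field grid is built, not per frame:
// the cell's level stands in for the LOD of every model inside it.
int renderLevelForCell(float distanceToFrustumRayOrigin) {
    if (distanceToFrustumRayOrigin <= 5.0f)  return 1;   // near: finest model
    if (distanceToFrustumRayOrigin <= 30.0f) return 2;   // middle fineness
    return 3;                                            // far: coarse outline only
}
```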
The foregoing details various embodiments of the scene rendering method provided by the present application. In the same vein, the method described above can be virtualized as a scene rendering device. Referring to FIG. 7, which shows one embodiment of a scene rendering device, the device comprises: a grid region determining unit U71, a model file determining unit U72, and a rendering unit U73, wherein:
the grid region determining unit U71 is used for determining, in a scene, a dynamic grid region containing the protagonist camera view frustum, the dynamic grid region being movable as the camera view frustum moves in the scene;
the model file determining unit U72 is used for determining, according to the position of the protagonist in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region;
and the rendering unit U73 is used for rendering the models in the dynamic grid region of the scene according to the model files.
This device embodiment can achieve the same technical effects as the method described above; to avoid repetition, the description is omitted here. The device embodiment can be further optimized in various ways, for example in the model file determining unit U72, whose composition and structure may vary across implementations. Of course, before optimizing it, certain conditions should be satisfied so that the optimization is realized more reliably. One possible way is for the model file determining unit to rely on a static grid: the scene is then divided into a number of static cells, each static cell is associated with a corresponding model file, and the static cells are the same size as the dynamic cells, so that the model file determining unit U72 may comprise the following subunits: a static cell determining subunit U721 and a model file determining subunit U722, wherein:
the static cell determining subunit U721 is configured to determine the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region;
and the model file determining subunit U722 is configured to determine the respective model files corresponding to those static cells.
The device embodiment described above may be optimized further still, for example by refining the static cell determining subunit. In the case where each pose of the protagonist camera view frustum corresponding to a horizontal angle at a predetermined first interval and/or a vertical angle at a predetermined second interval is pre-associated with the corresponding dynamic cells in the dynamic grid region, the static cell determining subunit may functionally be configured to determine the current pose, i.e., the horizontal angle and/or vertical angle, of the current protagonist camera view frustum according to the position of the protagonist in the scene; determine, according to the current pose and the corresponding dynamic cells pre-associated with that pose, the dynamic cells within the camera view frustum in the dynamic grid region; and determine the static cells covered by those dynamic cells in the scene according to the dynamic cells within the camera view frustum. This functional requirement corresponds to a corresponding structure, which can be realized by the necessary components.
In addition to the device of the above subject matter, embodiments of the present application also provide a runtime device. Referring to FIG. 8, which shows a schematic diagram of an embodiment of a runtime device, the runtime device 80 comprises a memory 81, a processor 82, and a computer program stored on the memory 81 and executable on the processor 82; when executed by the processor 82, the computer program implements the steps of the scene rendering method described above. Similarly, embodiments of the present application also provide a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the steps of the above method.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely an embodiment of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the application shall be included in the scope of the claims of the present application.

Claims (10)

1. A method of scene rendering, the method comprising:
determining, in a scene plane, a dynamic grid region containing the protagonist camera view frustum, the dynamic grid region being centered on the protagonist and being movable as the camera view frustum moves in the scene plane, the dynamic grid region being greater than the region of the protagonist camera view frustum on the scene plane or containing the furthest distance the camera view frustum extends in the scene plane; the region where the frustum lies being the intersection plane of the frustum on the scene plane; the scene being divided into a number of static cells, each static cell being associated with a corresponding model file, and the static cells being the same size as the dynamic cells; each pose of the protagonist camera view frustum corresponding to a horizontal angle at a predetermined first interval and/or a vertical angle at a predetermined second interval being pre-associated with the corresponding dynamic cells in the dynamic grid region;
determining, according to the position of the protagonist in the scene plane and the dynamic grid region, the corresponding model files for rendering the models within the dynamic grid region;
and rendering the models in the dynamic grid region of the scene plane according to the model files.
2. The method according to claim 1, wherein the determining of the corresponding model files for rendering the models within the dynamic grid region according to the position of the protagonist in the scene and the dynamic grid region specifically comprises:
determining the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region;
and determining the respective model files corresponding to those static cells.
3. The method according to claim 2, wherein the determining of the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region specifically comprises:
determining the current pose, i.e., the horizontal angle and/or vertical angle, of the current protagonist camera view frustum according to the position of the protagonist in the scene;
determining, according to the current pose and the corresponding dynamic cells pre-associated with that pose, the dynamic cells within the camera view frustum in the dynamic grid region;
and determining the static cells covered by those dynamic cells in the scene according to the dynamic cells within the camera view frustum.
4. The method according to claim 3, wherein the method further comprises: setting a pose key value for each pose corresponding to a horizontal angle at the predetermined first interval and a vertical angle at the predetermined second interval, associating the corresponding dynamic cells with each pose, and caching them in an array; the determining of the dynamic cells within the camera view frustum in the dynamic grid region according to the current pose and the corresponding dynamic cells pre-associated with that pose specifically comprising:
determining the pose key value corresponding to the current pose;
and querying the array according to the pose key value to determine the dynamic cells within the camera view frustum in the dynamic grid region.
5. The method according to claim 1, wherein the method further comprises:
when the dynamic grid region containing the protagonist camera view frustum is determined in the scene, determining a rendering level for each dynamic cell according to the distance between the ray origin of the protagonist camera view frustum and that dynamic cell in the dynamic grid region, different rendering levels having different rendering fineness; the rendering of the models in the dynamic grid region of the scene according to the model files then specifically comprising:
rendering the models in the dynamic grid region of the scene according to the model files and the rendering levels.
6. A scene rendering device, the device comprising: a grid region determining unit, a model file determining unit, and a rendering unit, wherein:
the grid region determining unit is used for determining, in a scene plane, a dynamic grid region containing the protagonist camera view frustum, the dynamic grid region being centered on the protagonist and being movable as the camera view frustum moves in the scene, the dynamic grid region being greater than the region of the protagonist camera view frustum on the scene plane or containing the furthest distance the camera view frustum extends in the scene plane; the region where the frustum lies being the intersection plane of the frustum on the scene plane; the scene being divided into a number of static cells, each static cell being associated with a corresponding model file, and the static cells being the same size as the dynamic cells; each pose of the protagonist camera view frustum corresponding to a horizontal angle at a predetermined first interval and/or a vertical angle at a predetermined second interval being pre-associated with the corresponding dynamic cells in the dynamic grid region;
the model file determining unit is used for determining, according to the position of the protagonist in the scene plane and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region;
and the rendering unit is used for rendering the models in the dynamic grid region of the scene plane according to the model files.
7. The device of claim 6, wherein the model file determining unit comprises a static cell determining subunit and a model file determining subunit, wherein:
the static cell determining subunit is used for determining the static cells currently covered by the dynamic grid region in the scene according to the position of the protagonist in the scene and the dynamic grid region;
and the model file determining subunit is used for determining the respective model files corresponding to those static cells.
8. The device of claim 7, wherein the static cell determining subunit is configured to determine the current pose, i.e., the horizontal angle and/or vertical angle, of the current protagonist camera view frustum according to the position of the protagonist in the scene; and
determine, according to the current pose and the corresponding dynamic cells pre-associated with that pose, the dynamic cells within the camera view frustum in the dynamic grid region; and
determine the static cells covered by those dynamic cells in the scene according to the dynamic cells within the camera view frustum.
9. A runtime device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN202111487651.4A 2021-12-08 2021-12-08 Scene rendering method and device, runtime device and readable storage medium Active CN114627221B (en)

Priority Applications (1)

Application Number: CN202111487651.4A
Priority Date / Filing Date: 2021-12-08
Title: Scene rendering method and device, runtime device and readable storage medium

Publications (2)

CN114627221A, published 2022-06-14
CN114627221B (grant), published 2023-11-10

Family ID: 81898904




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant