CN114627221A - Scene rendering method and device, runner and readable storage medium - Google Patents

Publication number
CN114627221A
Authority
CN
China
Prior art keywords: dynamic, scene, determining, rendering, grid
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111487651.4A
Other languages: Chinese (zh)
Other versions: CN114627221B (en)
Inventors: 谢成鸿, 王亚伟, 胡高, 马裕凯, 李嵘
Current assignee: Beijing Lanya Box Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Beijing Lanya Box Technology Co., Ltd.
Application filed by Beijing Lanya Box Technology Co., Ltd.
Priority to CN202111487651.4A
Publication of CN114627221A; application granted; publication of CN114627221B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation


Abstract

The embodiments of the present application disclose a scene rendering method comprising the following steps: determining, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, wherein the dynamic grid region moves as the camera view frustum moves in the scene; determining, according to the protagonist's position in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region; and rendering the models in the dynamic grid region of the scene according to the model files. This technical scheme optimizes view frustum culling, reduces computational pressure, and improves rendering efficiency.

Description

Scene rendering method and device, runner and readable storage medium
Technical Field
The present application relates to the field of scene rendering technologies, and in particular to a scene rendering method, a scene rendering apparatus, a runner, and a readable storage medium.
Background
In games, maps, and other graphics applications, large scenes must often be rendered. The number of models in a large scene is generally large; submitting all models in the scene to the GPU for rendering increases the load on the graphics card and degrades performance, so the approach cannot be applied to rendering with strict real-time requirements.
To address this, one scene rendering method introduces the concept of "scene management", in which only the models within the camera view frustum (see FIG. 1; in 3D rendering the human eye is modeled as a camera, whose field of view forms a frustum, i.e., the protagonist camera view frustum) are submitted to the GPU for rendering. This greatly reduces the number of rendered objects and helps improve rendering performance. However, the method adds a step: the relationship between every model in the scene and the frustum must be judged, i.e., the bounding boxes of all models in the scene are traversed to determine whether each lies inside or outside the frustum. If a model is within the camera frustum, it is added to the rendering queue and submitted to the graphics card; if outside, it is discarded in the current pass. So although the approach greatly reduces the number of rendered objects, determining the relationship between every model in the scene and the frustum is rather "brute-force": when the number of models in a scene is large, it puts great resource pressure on the GPU.
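To make the cost of this step concrete, the following is a minimal Python sketch of the brute-force culling described above; the plane representation and all helper names are illustrative assumptions, not the patent's actual implementation:

```python
# Sketch of per-model frustum culling: every model's bounding box is tested
# against the view-frustum planes, every frame, for the whole scene.

def aabb_outside_plane(box_min, box_max, normal, d):
    """True if the whole AABB lies on the negative side of plane n.x + d = 0."""
    # Pick the AABB corner farthest along the plane normal ("positive vertex").
    px = [box_max[i] if normal[i] >= 0 else box_min[i] for i in range(3)]
    return sum(normal[i] * px[i] for i in range(3)) + d < 0

def cull_scene(models, frustum_planes):
    """Traverse ALL models (the 'brute-force' step) and keep those that
    intersect or lie inside the frustum."""
    render_queue = []
    for name, (box_min, box_max) in models.items():
        if not any(aabb_outside_plane(box_min, box_max, n, d)
                   for n, d in frustum_planes):
            render_queue.append(name)
    return render_queue

# A toy "frustum": the box 0..10 on each axis, expressed as 6 planes with
# inward-pointing normals (n.x + d >= 0 means inside).
planes = [((1, 0, 0), 0), ((-1, 0, 0), 10),
          ((0, 1, 0), 0), ((0, -1, 0), 10),
          ((0, 0, 1), 0), ((0, 0, -1), 10)]
models = {"tree":  ((2, 2, 2), (3, 3, 3)),       # fully inside
          "rock":  ((20, 20, 20), (21, 21, 21)), # fully outside
          "fence": ((8, 8, 8), (12, 9, 9))}      # straddles a plane
print(cull_scene(models, planes))  # tree and fence are kept, rock is culled
```

Note that `cull_scene` visits every model regardless of where it is, which is exactly the cost the embodiments below avoid.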
Disclosure of Invention
Embodiments of the present application provide a scene rendering method and apparatus, a runner, and a readable storage medium, to solve or mitigate the problems of prior-art scene rendering.
In one aspect, the scene rendering method provided by the embodiments of the present application comprises the following steps:
determining, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, wherein the dynamic grid region moves as the camera view frustum moves in the scene;
determining, according to the protagonist's position in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region; and
rendering the models in the dynamic grid region of the scene according to the model files.
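The three steps above can be sketched as follows, assuming a uniform cell grid and a per-cell model-file mapping; all names, the 3 x 3 region size, and the file naming are illustrative assumptions:

```python
# Minimal sketch of the three claimed steps.

GRID = 3  # dynamic grid region is GRID x GRID cells centred on the protagonist

def dynamic_grid_region(protagonist_cell):
    """Step 1: the dynamic grid region containing the camera view frustum,
    centred on (and moving with) the protagonist."""
    cx, cy = protagonist_cell
    half = GRID // 2
    return [(cx + dx, cy + dy)
            for dx in range(-half, half + 1)
            for dy in range(-half, half + 1)]

def model_files_for(region, scene_files):
    """Step 2: look up the model file associated with each covered cell."""
    return [scene_files[c] for c in region if c in scene_files]

def render(files):
    """Step 3: placeholder for submitting the models to the GPU."""
    return [f"rendered:{f}" for f in files]

scene_files = {(6, 5): "D1.mdl", (7, 5): "D2.mdl", (6, 6): "D4.mdl"}
region = dynamic_grid_region((7, 6))
print(render(model_files_for(region, scene_files)))
```

Only the cells of the region are ever consulted, so cells (and models) outside it never enter the pipeline.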
Preferably, the scene is divided into a plurality of static grid cells, each associated with a corresponding model file, the static cells being the same size as the dynamic cells; determining the corresponding model files for rendering the models in the dynamic grid region according to the protagonist's position in the scene and the dynamic grid region then specifically comprises:
determining, according to the protagonist's position in the scene and the dynamic grid region, the static cells currently covered by the dynamic grid region in the scene; and
determining the model file corresponding to each of those static cells.
Preferably, each pose of the protagonist camera view frustum at horizontal angles of a predetermined first interval and/or vertical angles of a predetermined second interval is associated in advance with the corresponding dynamic cells of the dynamic grid region; determining, according to the protagonist's position in the scene and the dynamic grid region, the static cells currently covered by the dynamic grid region then specifically comprises:
determining, according to the protagonist's position in the scene, the current pose of the protagonist camera view frustum at the horizontal and/or vertical angle;
determining the dynamic cells within the camera view frustum in the dynamic grid region according to the current pose and the dynamic cells pre-associated with that pose; and
determining the static cells covered in the scene by those dynamic cells within the camera view frustum.
Preferably, the method further comprises: setting a pose key value for each pose at the predetermined first-interval horizontal angles and the predetermined second-interval vertical angles and for the dynamic cells associated with each pose, and caching the pose key values in an array; determining the dynamic cells within the camera view frustum in the dynamic grid region according to the current pose and its pre-associated dynamic cells then specifically comprises:
determining the pose key value corresponding to the current pose; and
querying the array according to the pose key value to determine the dynamic cells within the camera view frustum in the dynamic grid region.
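The pose-key cache can be sketched as follows; the interval sizes, key encoding, and cache layout are assumptions for illustration only:

```python
# Frustum poses are quantised at fixed angular intervals; the dynamic cells
# covered by each quantised pose are precomputed and keyed by that pose.

H_STEP, V_STEP = 10, 10  # predetermined first/second intervals, in degrees

def pose_key(h_angle, v_angle):
    """Quantise the current frustum pose to a cache key."""
    return (int(h_angle) // H_STEP, int(v_angle) // V_STEP)

# Precomputed cache (the "array"): pose key -> dynamic cells in the frustum.
pose_cache = {
    (0, 9): ["D1", "D2", "D4", "D5"],
    (9, 9): ["D2", "D3", "D5", "D6"],
}

def frustum_cells(h_angle, v_angle):
    """Query the cache for the cells within the frustum at this pose."""
    return pose_cache.get(pose_key(h_angle, v_angle))

print(frustum_cells(3.5, 95.0))  # pose falls in key (0, 9)
```

At render time the lookup replaces a geometric frustum-coverage calculation with a constant-time query.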
Preferably, the method further comprises:
when determining the dynamic grid region containing the protagonist camera view frustum in the scene, determining the rendering level of each dynamic cell according to the distance between the origin of the protagonist camera frustum ray and that cell, different rendering levels having different degrees of rendering fineness; rendering the models in the dynamic grid region of the scene according to the model files then specifically comprises:
rendering the models in the dynamic grid region of the scene according to the model files and the rendering levels.
In another aspect, an embodiment of the present application further provides a scene rendering apparatus, comprising a grid region determining unit, a model file determining unit, and a rendering unit, wherein:
the grid region determining unit is configured to determine, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, the dynamic grid region moving as the camera view frustum moves in the scene;
the model file determining unit is configured to determine, according to the protagonist's position in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region; and
the rendering unit is configured to render the models in the dynamic grid region of the scene according to the model files.
Preferably, the scene is divided into a plurality of static cells, each associated with a corresponding model file, the static cells being the same size as the dynamic cells, and the model file determining unit comprises a static cell determining subunit and a model file determining subunit, wherein:
the static cell determining subunit is configured to determine, according to the protagonist's position in the scene and the dynamic grid region, the static cells currently covered by the dynamic grid region in the scene; and
the model file determining subunit is configured to determine the model file corresponding to each static cell.
Preferably, each pose of the protagonist camera view frustum at horizontal angles of a predetermined first interval and/or vertical angles of a predetermined second interval is associated in advance with the corresponding dynamic cells of the dynamic grid region, and
the static cell determining subunit is configured to determine, according to the protagonist's position in the scene, the current pose of the protagonist camera view frustum at the horizontal and/or vertical angle;
determine the dynamic cells within the camera view frustum in the dynamic grid region according to the current pose and its pre-associated dynamic cells; and
determine the static cells covered in the scene by those dynamic cells.
In another aspect, an embodiment of the present application provides a runner, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the method described above.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method described above.
The embodiments of the present application adopt at least one technical scheme that achieves the following beneficial effects: view frustum culling is greatly optimized. It suffices to traverse the dynamic magnetic-field grid region and test the bounding boxes in that region against the camera view frustum to judge what needs rendering: if a dynamic cell is outside the frustum's field of view, none of the static scene content corresponding to it undergoes frustum testing, and it is not rendered; if a dynamic cell is within the frustum's field of view, frustum testing of that cell's static scene content can be decided as required.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic view of the protagonist camera view frustum during rendering;
FIG. 2 is a schematic diagram of the relationship between the camera view frustum and the models in the scene;
FIG. 3 is a schematic flowchart of a rendering method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the "dynamic magnetic-field grid" referred to in the embodiments of the present application;
FIG. 5 is a schematic view of the relationship between the dynamic magnetic-field grid and the camera view frustum according to an embodiment of the present application;
FIG. 6 is a schematic diagram of rendering levels in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an embodiment of the rendering apparatus provided in the present application;
FIG. 8 is a schematic structural diagram of an embodiment of the runner of the present application.
Detailed Description
In application scenarios such as game screens, the number of objects in a frame is often large, and the objects are better presented after rendering. To render, the models in the scene must be identified. As described in the background, the number of rendered objects can be reduced by rendering only the model objects within the view frustum, but this adds the judgment of the relationship between the frustum and the model objects. The frustum's footprint may not be large, yet the number of models in the scene may be huge, and a computer can determine whether a given model lies within the frustum only by calculation, so every model in the scene must still be compared against the frustum. With a large number of models, this creates resource pressure on the GPU.
To better explain the problems in the prior art and present the technical solutions of the embodiments clearly, a detailed description follows with reference to the accompanying drawings. FIG. 2 shows the region of the camera view frustum and the models in the scene. Various regular or irregular objects, called models, exist in the scene; around each model, owing to how computers represent geometry, there is a "bounding box" enclosing it, which represents the model's outer contour and may be a sphere, cylinder, capsule, or other shape as the case requires. Where the frustum intersects the scene plane, three relationships are possible: first, the bounding box of a model lies completely inside the intersection region; second, it lies completely outside; third, the model straddles the frustum boundary. The first and third cases are included in the rendering sequence; the second is unnecessary. Judging the relationship between the frustum cross-section and the models in the scene is therefore the "key move" for reducing the rendering load. As mentioned, the prior art loads objects to be rendered by traversing all models and "screening" out those to render. In terms of computer resources, however, the necessity of testing every model remains questionable, especially for models very far from the frustum.
The dilemma faced by the prior art points the way for the solution of the present application. FIG. 3 shows the flow of the technical solution of the embodiments, and FIG. 4 shows a concept important to it: the "dynamic magnetic-field grid". To achieve refined management, in the embodiments the scene is divided into several small cells of equal or unequal size, and the bounding box of each model may occupy one or more cells. The "cells" here may be actual grid lines or virtual ones represented only by grid data. A scene serves an "action": in any scene there is activity, and the activity has a protagonist. The protagonist takes several forms. It may be a "real" character in the game, whose line of sight is the protagonist's line of sight; the player's attention follows this focus as the character moves, so the character's line of sight coincides with the player's. Alternatively, there may be no real protagonist character, and the protagonist's line of sight is the observation perspective from which the operator views the models in the scene; for example, a scene always has a game observation perspective, whose direction is the direction of the protagonist's line of sight.
After the protagonist's line of sight (the camera view frustum) is determined, step S301 is performed: construct a grid region of a predetermined size in the scene, centred on the protagonist's position within the camera frustum. In two dimensions, the extent of the grid region is preferably equal to or greater than the area the current camera frustum covers on the scene plane; in three dimensions, the grid region preferably contains the farthest distance the frustum extends into the scene. The size of the grid region can therefore be adjusted dynamically: it might be a region of 64 x 64 cells in the current rendering activity and 32 x 32 cells in the next. This "grid region" differs from an ordinary static grid, which merely partitions the whole scene. Its main characteristic is that it is dynamic: it always surrounds the protagonist, and wherever the protagonist moves, the region moves with it, forming the protagonist's sphere of influence. Like a magnet, it clings to the protagonist, hence the name "dynamic magnetic-field grid". "Dynamic" marks the contrast with the static grid: the static grid is fixed once the scene is divided (it may change in the next scene) and normally does not change as the protagonist moves during a rendering activity, whereas the dynamic grid follows the protagonist like a shadow: the protagonist moves, and the region moves. "Magnetic field" marks its attachment to the protagonist: like two magnets, they do not separate during the current scene rendering.
That the dynamic magnetic-field grid moves with the protagonist does not mean the grid region must be the same size at every position of the protagonist in the scene; in practice the sizes may be identical or different. Where a scene has many densely packed models, the grid region can be made smaller; conversely, it can be made larger. Of course, keeping the dynamic magnetic-field grid the same for every protagonist position makes it easier for the computer to process.
After the protagonist's dynamic magnetic-field grid is determined, when model rendering is required, step S302 can be carried out: determine, according to the protagonist's position in the scene and the dynamic grid region, the scene block files (model files) corresponding to the models covered by the dynamic magnetic-field grid; then step S303: load the block files and render the models in the dynamic grid region of the scene. If the protagonist is moving, the block files for the protagonist's previous position can be unloaded at the same time. As the protagonist keeps moving, this dynamic loading and unloading continues, achieving seamless loading and unloading of a very large scene. To illustrate, refer to FIG. 5, which shows the dynamic magnetic-field grid occupying static cells pre-divided in the scene. In the figure, a very large scene is divided into a 19 x 19 grid of cells; the cell to which the protagonist has currently moved is (7, 6), and a 3 x 3 grid region is formed around the protagonist, comprising nine static cells D1 through D9. If the protagonist moves elsewhere, the grid region centred on the protagonist moves along. Within the grid region, the cells occupied by the camera view frustum are mainly D1, D2, D4, and D5. When rendering, the cells outside the grid region, which form most of the scene, can be excluded first: the models in those cells are out of sight and do not affect the current protagonist's view, i.e. they are simply not seen, so they need no attention.
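The FIG. 5 example can be sketched as follows; the coordinates and the D1..D9 labelling follow the figure, while the helper names are illustrative assumptions:

```python
# A 19 x 19 static grid, protagonist at cell (7, 6), and a 3 x 3 dynamic
# magnetic-field grid around it covering static cells D1..D9.

def covered_static_cells(protagonist, half=1):
    """Static cells covered by a (2*half+1)-wide region around the protagonist."""
    cx, cy = protagonist
    return [(x, y)
            for y in range(cy - half, cy + half + 1)
            for x in range(cx - half, cx + half + 1)]

cells = covered_static_cells((7, 6))
labels = {cell: f"D{i + 1}" for i, cell in enumerate(cells)}  # row-major D1..D9
print(len(cells))                      # 9 static cells
print(labels[(6, 5)], labels[(8, 7)])  # opposite corners D1 and D9
```

If the protagonist moves to another cell, calling `covered_static_cells` with the new position yields the new coverage; nothing outside it is ever considered.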
Second, even within the dynamic magnetic-field grid region not every cell needs rendering, only the four cells (D1, D2, D4, D5) within the view frustum. Thus, instead of the huge task of traversing all 19 x 19 = 361 cells of the scene to dynamically detect whether each model to be rendered lies within the frustum, the embodiment narrows the work to the 3 x 3 cells of the grid region and finally renders only the models in 4 cells, greatly reducing the number of rendered models and markedly improving rendering efficiency.
As described above, view frustum culling is greatly optimized in the embodiments: it suffices to traverse the dynamic magnetic-field grid region and test the bounding boxes in that region against the camera frustum to judge what needs rendering. If a dynamic cell is outside the frustum's field of view, none of the static scene content corresponding to it undergoes frustum testing, and it is not rendered; if a dynamic cell is within the field of view, frustum testing of its static scene content can be decided as required. "As required" means: if true efficiency is pursued, performing frustum testing can further reduce the rendering count, which pays off especially when few models lie within the frustum; if frustum testing would contribute little to reducing the count, it is not strongly necessary, and everything within the grid region can be rendered directly.
The scheme above fully expresses the core of the present application, but further optimization on this basis is entirely feasible and can improve efficiency more markedly. In practice, the size of the dynamic magnetic-field grid is considered fixed, and its position relative to the protagonist is also fixed; for example, a 3 x 3-cell dynamic magnetic-field grid is constructed centred on the protagonist's position, and the region does not change relative to the protagonist wherever the protagonist moves. Yet even though the protagonist's "position" relative to the region does not change, the protagonist's camera frustum within the region can change, so the cells rendered each time may differ even while confined to the "dynamic magnetic-field grid" region. To optimize further, based on the relationship between the dynamic magnetic-field grid and the camera frustum determined at initialization, a KEY value can be set at every interval (for example, every 10°) of the frustum's horizontal angle (0 to 360°), vertical angle (0 to 180°), and so on, and the dynamic magnetic-field cells within the frustum corresponding to each KEY value can all be cached in an array.
When the camera is operated and the protagonist's viewing angle changes, the current frustum pose is determined first, then the corresponding KEY value; the dynamic magnetic-field cells covered by the current camera are looked up in the pre-stored array by that KEY. If cached, the models in the stored dynamic cells can be rendered directly; if not found, the cells covered by the current frustum are determined by calculation, and the models in them are then rendered. Since every frustum state can be cached at the predetermined interval, the corresponding array entry can almost always be found, the cells (and the models in them) to be rendered can be located quickly, and rendering efficiency improves greatly.
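The lookup-with-fallback just described can be sketched as follows; the memoising structure is an assumption about one reasonable implementation, not the patent's code:

```python
# Try the cached KEY first; compute the covered cells only on a cache miss,
# then store the result so the calculation never repeats for that pose.

def cells_for_pose(key, cache, compute):
    """Return cached frustum cells for a pose key, computing on a miss."""
    if key not in cache:
        cache[key] = compute(key)  # fall back to geometric calculation
    return cache[key]

calls = []
def compute(key):
    """Stand-in for the frustum-coverage calculation; records each invocation."""
    calls.append(key)
    return ["D1", "D2"]

cache = {}
print(cells_for_pose((0, 9), cache, compute))  # miss: computes and caches
print(cells_for_pose((0, 9), cache, compute))  # hit: served from the cache
print(len(calls))  # the geometric calculation ran only once
```

The design choice here is the usual cache trade-off: a bounded amount of memory (one entry per quantised pose) buys back per-frame CPU work.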
In the optimization above, no distinction is made among the models in the grid region. In practice, relative to the camera frustum, some models are necessarily far from the origin of the frustum ray and some are near. Undeniably, models at different distances present different "images" in the protagonist's view, and since the images differ, the rendering may differ too. This is a new optimization direction: determine the degree of rendering according to the distance between the model and the camera frustum. Referring to FIG. 6, of three identical sheep, the near one is naturally seen more clearly, and the far one is clearly blurred. The near one should be rendered in detail for higher definition; for the far one, rendering a rough outline suffices. For example, in the figure, when the origin of the camera frustum ray is 5 meters from the sheep, the sheep is rendered at the leftmost model fineness; at 30 meters, at the middle fineness; at 100 meters, at the rightmost fineness.
Thus, when the dynamic magnetic-field grid is generated, the rendering level of each cell (in effect, of the models in it) can be computed once according to the distance between the cell and the origin of the camera frustum ray (the level may, for example, also take the bounding-box size into account), and each model in the grid region is layered when loaded: when a cell's rendering level in the "dynamic magnetic-field grid" is 1, the LOD for bounding-box volumes above XXXm3 is several times the XX scale; when the level is 2, the LOD for bounding-box volumes above XXXm3 is several times the XX level. In this way, the rendering levels of the dynamic magnetic-field grid are computed once from the camera frustum distance when the grid is determined, rather than dynamically computing, every frame, the level of each model entering the field of view from the frustum distance and the model's bounding box. The rendering level does not change during rendering, resource consumption drops greatly, no CPU dynamic calculation is needed at runtime, and the GPU's rendering pressure is reduced.
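A minimal sketch of the distance-based rendering levels, using the 5 m / 30 m / 100 m thresholds from the FIG. 6 sheep example; the threshold mapping and cell distances are illustrative assumptions:

```python
# The rendering level is fixed once when the dynamic magnetic-field grid is
# generated, not recomputed per frame per model.

def rendering_level(distance_m):
    """Map distance from the frustum-ray origin to a fineness level."""
    if distance_m <= 5:
        return 1   # finest model (the leftmost sheep in FIG. 6)
    if distance_m <= 30:
        return 2   # medium fineness
    return 3       # coarse outline only

# Levels are computed once for all cells when the region is generated...
levels = {cell: rendering_level(d)
          for cell, d in {"D1": 4.0, "D5": 25.0, "D9": 120.0}.items()}
print(levels)  # {'D1': 1, 'D5': 2, 'D9': 3}
```

...and then simply looked up during rendering, so the CPU does no per-frame distance work.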
The foregoing details various embodiments of the scene rendering method provided herein. The method can likewise be embodied as a scene rendering apparatus. Referring to FIG. 7, an embodiment of the scene rendering apparatus comprises a grid region determining unit U71, a model file determining unit U72, and a rendering unit U73, wherein:
the grid region determining unit U71 is configured to determine, in a scene, a dynamic grid region containing the view frustum of the protagonist camera, the dynamic grid region moving as the camera view frustum moves in the scene;
the model file determining unit U72 is configured to determine, according to the protagonist's position in the scene and the dynamic grid region, the corresponding model files for rendering the models in the dynamic grid region; and
the rendering unit U73 is configured to render the models in the dynamic grid region of the scene according to the model files.
The apparatus embodiments above achieve the same technical effects as the methods above; to avoid repetition, they are not described again here. The apparatus embodiment can be further optimized, for instance in the model file determining unit U72, whose composition may vary across implementations. Before optimizing it, any conditions the optimization requires should be stated so that it can be realised reliably. One possibility is that the model file determining unit relies on a static grid: the scene is divided into a plurality of static cells, each associated with a corresponding model file, the static cells being the same size as the dynamic cells. The model file determining unit U72 may then comprise a static cell determining subunit U721 and a model file determining subunit U722, wherein:
the static cell determining subunit U721 is configured to determine, according to the protagonist's position in the scene and the dynamic grid region, the static cells currently covered by the dynamic grid region in the scene; and
the model file determining subunit U722 is configured to determine the model file corresponding to each static cell.
In addition, the apparatus embodiment can be further refined, for example by refining the static grid determining subunit. Functionally, in the case where each pose of the protagonist camera view frustum at a predetermined first interval of horizontal angles and/or a predetermined second interval of vertical angles is pre-associated with corresponding dynamic grids in the dynamic grid region, the static grid determining subunit may be configured to: determine, according to the position of the protagonist in the scene, the current pose corresponding to the horizontal angle and/or vertical angle of the current protagonist camera view frustum; determine, according to the current pose and the dynamic grids pre-associated with that pose, the dynamic grids within the camera view frustum in the dynamic grid region; and determine, according to the dynamic grids within the camera view frustum, the static grids covered by those dynamic grids in the scene. Each of these functions corresponds to a corresponding structure, which can be implemented with the necessary components.
In addition to the above apparatus embodiments, embodiments of the present application also provide a runner. Referring to fig. 8, which shows a schematic structural diagram of an embodiment of a runner, the runner 80 includes a memory 81, a processor 82, and a computer program stored in the memory 81 and executable on the processor 82; the computer program, when executed by the processor 82, implements the steps of the above-described scene rendering method. Similarly, embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the above method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method of scene rendering, the method comprising:
determining a dynamic grid region in the scene, wherein the dynamic grid region contains the protagonist camera view frustum, and the dynamic grid region is movable as the camera view frustum moves in the scene;
determining, according to the position of the protagonist in the scene and the dynamic grid region, a corresponding model file for rendering a model in the dynamic grid region;
and rendering the model in the dynamic grid region in the scene according to the model file.
2. The method according to claim 1, wherein the scene is divided into a plurality of static grids, each static grid is associated with a corresponding model file, and the static grids are the same size as the dynamic grids, and wherein the determining, according to the position of the protagonist in the scene and the dynamic grid region, the corresponding model file for rendering the model in the dynamic grid region specifically comprises:
determining, according to the position of the protagonist in the scene and the dynamic grid region, the static grids currently covered by the dynamic grid region in the scene;
and determining respective model files corresponding to the static grids.
3. The method according to claim 2, wherein each pose of the protagonist camera view frustum corresponding to a predetermined first interval of horizontal angles and/or a predetermined second interval of vertical angles is pre-associated with corresponding dynamic grids in the dynamic grid region, and wherein the determining, according to the position of the protagonist in the scene and the dynamic grid region, the static grids currently covered by the dynamic grid region in the scene specifically comprises:
determining, according to the position of the protagonist in the scene, the current pose of the current protagonist camera view frustum at the horizontal angle and/or the vertical angle;
determining, according to the current pose and the dynamic grids pre-associated with that pose, the dynamic grids within the camera view frustum in the dynamic grid region;
and determining, according to the dynamic grids within the camera view frustum, the static grids covered by those dynamic grids in the scene.
4. The method of claim 3, further comprising: setting a pose key value for each pose that corresponds to the predetermined first interval of horizontal angles and the predetermined second interval of vertical angles and is associated with corresponding dynamic grids, and caching the pose key values in an array; wherein the determining, according to the current pose and the dynamic grids pre-associated with that pose, the dynamic grids within the camera view frustum in the dynamic grid region specifically comprises:
determining, according to the current pose, the pose key value corresponding to the current pose;
and querying the array according to the pose key value to determine the dynamic grids within the camera view frustum in the dynamic grid region.
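The cached lookup of claim 4 may be sketched as follows. The interval angles, the key layout, and the pose-to-grid table contents are illustrative assumptions; the point is that the current pose is quantized to a precomputed key, so frustum membership becomes a cache query rather than a per-frame culling computation:

```python
H_STEP, V_STEP = 45, 30   # assumed first/second interval angles (degrees)

def pose_key(h_angle, v_angle):
    """Quantize a camera pose to the nearest precomputed pose key."""
    h = int(round(h_angle / H_STEP)) * H_STEP % 360
    v = int(round(v_angle / V_STEP)) * V_STEP
    return (h, v)

# Precomputed cache: pose key -> dynamic grids inside the view frustum
# for that pose (the grid sets here are assumed data for demonstration).
POSE_TO_GRIDS = {
    (0, 0):  [(0, 1), (0, 2), (1, 2)],
    (45, 0): [(1, 1), (2, 2)],
}

def grids_in_frustum(h_angle, v_angle):
    # Query the cached table instead of recomputing frustum coverage.
    return POSE_TO_GRIDS.get(pose_key(h_angle, v_angle), [])
```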
5. The method of claim 1, further comprising:
when the dynamic grid region containing the protagonist camera view frustum is determined in the scene, determining a rendering level for each dynamic grid according to the distance between the origin of the protagonist camera frustum ray and each dynamic grid in the dynamic grid region, wherein different rendering levels differ in rendering fineness; and the rendering the model in the dynamic grid region in the scene according to the model file specifically comprises:
rendering the model in the dynamic grid region in the scene according to the model file and the rendering level.
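The distance-based rendering levels of claim 5 can be sketched as below. The distance thresholds and the level count are illustrative assumptions; a lower level number here means finer rendering, in the spirit of conventional level-of-detail schemes:

```python
import math

def render_level(ray_origin, grid_center, thresholds=(10.0, 30.0, 60.0)):
    """Return 0 (finest) .. len(thresholds) (coarsest) by distance
    from the frustum ray origin to the dynamic grid's center."""
    dist = math.dist(ray_origin, grid_center)
    for level, limit in enumerate(thresholds):
        if dist <= limit:
            return level
    return len(thresholds)

def render(model_file, level):
    # Stand-in for rendering: detail decreases as the level number grows.
    return f"{model_file}@LOD{level}"
```

Grids near the camera thus receive level 0 (full detail) while distant grids fall into coarser levels, which limits per-frame rendering cost.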
6. An apparatus for scene rendering, the apparatus comprising: a grid region determining unit, a model file determining unit, and a rendering unit, wherein:
the grid region determining unit is configured to determine a dynamic grid region in the scene containing the protagonist camera view frustum, the dynamic grid region being movable as the camera view frustum moves in the scene;
the model file determining unit is configured to determine, according to the position of the protagonist in the scene and the dynamic grid region, a corresponding model file for rendering the model in the dynamic grid region;
and the rendering unit is configured to render the model in the dynamic grid region in the scene according to the model file.
7. The apparatus of claim 6, wherein the scene is divided into a plurality of static grids, each static grid being associated with a corresponding model file, the static grids being the same size as the dynamic grids, and the model file determining unit comprises a static grid determining subunit and a model file determining subunit, wherein:
the static grid determining subunit is configured to determine, according to the position of the protagonist in the scene and the dynamic grid region, the static grids currently covered by the dynamic grid region in the scene;
and the model file determining subunit is configured to determine the respective model file corresponding to each static grid.
8. The apparatus of claim 7, wherein each pose of the protagonist camera view frustum corresponding to a predetermined first interval of horizontal angles and/or a predetermined second interval of vertical angles is pre-associated with corresponding dynamic grids in the dynamic grid region, and
the static grid determining subunit is configured to: determine, according to the position of the protagonist in the scene, the current pose of the current protagonist camera view frustum at the horizontal angle and/or the vertical angle;
determine, according to the current pose and the dynamic grids pre-associated with that pose, the dynamic grids within the camera view frustum in the dynamic grid region;
and determine, according to the dynamic grids within the camera view frustum, the static grids covered by those dynamic grids in the scene.
9. A runner, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN202111487651.4A 2021-12-08 2021-12-08 Scene rendering method and device, operator and readable storage medium Active CN114627221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111487651.4A CN114627221B (en) 2021-12-08 2021-12-08 Scene rendering method and device, operator and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111487651.4A CN114627221B (en) 2021-12-08 2021-12-08 Scene rendering method and device, operator and readable storage medium

Publications (2)

Publication Number Publication Date
CN114627221A true CN114627221A (en) 2022-06-14
CN114627221B CN114627221B (en) 2023-11-10

Family

ID=81898904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111487651.4A Active CN114627221B (en) 2021-12-08 2021-12-08 Scene rendering method and device, operator and readable storage medium

Country Status (1)

Country Link
CN (1) CN114627221B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080309676A1 (en) * 2007-06-14 2008-12-18 Microsoft Corporation Random-access vector graphics
KR20130015320A (en) * 2011-08-03 2013-02-14 (주)마이어스게임즈 Gaem engine, game authoring tool, game processing method, record media of the same
US9519986B1 (en) * 2013-06-20 2016-12-13 Pixar Using stand-in camera to determine grid for rendering an image from a virtual camera
CN106569816A (en) * 2016-10-26 2017-04-19 搜游网络科技(北京)有限公司 Rendering method and apparatus
CN107705364A (en) * 2016-08-08 2018-02-16 国网新疆电力公司 A kind of immersion virtual display system based on three-dimensional geographic information
CN109011559A (en) * 2018-07-26 2018-12-18 网易(杭州)网络有限公司 Control method, device, equipment and the storage medium of virtual carrier in game
JP6513241B1 (en) * 2018-01-30 2019-05-15 株式会社コロプラ PROGRAM, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD
US20190260976A1 (en) * 2018-02-20 2019-08-22 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20190355325A1 (en) * 2016-11-14 2019-11-21 Huawei Technologies Co., Ltd. Image Rendering Method and Apparatus, and VR Device
CN111127615A (en) * 2019-12-26 2020-05-08 四川航天神坤科技有限公司 Data scheduling method and device of three-dimensional model and electronic equipment
CN111354067A (en) * 2020-03-02 2020-06-30 成都偶邦智能科技有限公司 Multi-model same-screen rendering method based on Unity3D engine
CN112057849A (en) * 2020-09-15 2020-12-11 网易(杭州)网络有限公司 Game scene rendering method and device and electronic equipment
US20200408558A1 (en) * 2018-08-24 2020-12-31 Tencent Technolgy (Shenzhen) Company Limited Map rendering method and apparatus, computer device, and storage medium
US20210029340A1 (en) * 2019-07-28 2021-01-28 Google Llc Methods, systems, and media for rendering immersive video content with foveated meshes
CN112396682A (en) * 2020-11-17 2021-02-23 重庆市地理信息和遥感应用中心 Visual progressive model browsing method in three-dimensional scene
CN112691381A (en) * 2021-01-13 2021-04-23 腾讯科技(深圳)有限公司 Rendering method, device and equipment of virtual scene and computer readable storage medium
CN113457137A (en) * 2021-06-30 2021-10-01 完美世界(北京)软件科技发展有限公司 Game scene generation method and device, computer equipment and readable storage medium
CN113509721A (en) * 2020-06-18 2021-10-19 完美世界(北京)软件科技发展有限公司 Shadow data determination method, device, equipment and readable medium
CN113570691A (en) * 2020-04-27 2021-10-29 北京蓝亚盒子科技有限公司 Storage optimization method and device for voxel model and electronic equipment


Also Published As

Publication number Publication date
CN114627221B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
US8970580B2 (en) Method, apparatus and computer-readable medium rendering three-dimensional (3D) graphics
CN110136082B (en) Occlusion rejection method and device and computer equipment
US11069124B2 (en) Systems and methods for reducing rendering latency
US9569885B2 (en) Technique for pre-computing ambient obscurance
US7737974B2 (en) Reallocation of spatial index traversal between processing elements in response to changes in ray tracing graphics workload
US10748332B2 (en) Hybrid frustum traced shadows systems and methods
US20200043219A1 (en) Systems and Methods for Rendering Optical Distortion Effects
US9013479B2 (en) Apparatus and method for tile-based rendering
JP4977712B2 (en) Computer graphics processor and method for rendering stereoscopic images on a display screen
JP5634104B2 (en) Tile-based rendering apparatus and method
CN105096375B (en) Image processing method and apparatus
JP4284285B2 (en) Image processing apparatus, image processing method, and image processing program
CN109377552B (en) Image occlusion calculating method, device, calculating equipment and storage medium
US20210327122A1 (en) Methods and apparatus for efficient multi-view rasterization
US9858709B2 (en) Apparatus and method for processing primitive in three-dimensional (3D) graphics rendering system
CN116261740A (en) Compressing texture data on a per channel basis
JP2018032938A (en) Image processing apparatus, image processing method and program
CN114627221B (en) Scene rendering method and device, operator and readable storage medium
CN114596195A (en) Topographic data processing method, system, device and computer storage medium
WO2019042272A2 (en) System and method for multi-view rendering
GB2605360A (en) Method, Apparatus and Storage Medium for Realizing Geometric Viewing Frustum of OCC Tree in Smart City
CN116670719A (en) Graphic processing method and device and electronic equipment
CN110738719A (en) Web3D model rendering method based on visual range hierarchical optimization
US10255717B2 (en) Geometry shadow maps with per-fragment atomics
CN116188552B (en) Region-based depth test method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant