CN112184864B - Real-time drawing method of million-magnitude three-dimensional situation target - Google Patents


Info

Publication number
CN112184864B
CN112184864B CN202010933917.2A
Authority
CN
China
Prior art keywords
situation
longitude
latitude
target
visible
Prior art date
Legal status
Active
Application number
CN202010933917.2A
Other languages
Chinese (zh)
Other versions
CN112184864A (en)
Inventor
占伟伟
李佳祺
蒉露超
李坪泽
马宁
袁思佳
王辉
Current Assignee
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 28 Research Institute filed Critical CETC 28 Research Institute
Priority to CN202010933917.2A priority Critical patent/CN112184864B/en
Publication of CN112184864A publication Critical patent/CN112184864A/en
Application granted granted Critical
Publication of CN112184864B publication Critical patent/CN112184864B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of computer graphics and provides a real-time drawing method for million-magnitude three-dimensional situation targets, comprising the following steps: establishing a regular longitude and latitude grid index for the million-magnitude situation targets in an auxiliary thread, updating the index at regular intervals according to returned data of the situation targets, and returning the index result to the main thread; incrementally scheduling the visible situation targets in the scene in the main thread, namely determining the situation targets to be displayed when the roaming height is smaller than a preset height; acquiring and recycling graphic objects for the situation targets during roaming; and efficiently rendering the graphic objects on the graphics processor to obtain the three-dimensional situation drawing result. The method achieves smooth rendering of million-magnitude three-dimensional situation targets, with no obvious stuttering when the scene is zoomed or roamed, and significantly improves real-time rendering efficiency and smoothness of user operation compared with drawing directly with a rendering engine.

Description

Real-time drawing method of million-magnitude three-dimensional situation target
Technical Field
The invention relates to the field of computer graphics, in particular to a real-time drawing method of a million-level three-dimensional situation target.
Background
With the advancement of the national maritime-power strategy, China's maritime activities are going global, which places new requirements on the situation display capability and capacity for maritime targets in the three-dimensional digital earth. According to statistics, the number of Chinese civil ships, military ships, and international ships above 300 gross tons exceeds one million, and 500,000 to 600,000 of them are active at sea. Three-dimensional display of targets at this magnitude is a challenging task: on one hand, a large number of graphics objects incurs huge computational resource overhead; on the other hand, the performance of the Graphics Processing Unit (GPU) can hardly meet the real-time rendering requirement of massive primitives.
In a conventional drawing method, all objects are created in memory, each object's visibility is judged every frame, and the visible objects are drawn directly with a rendering engine. Whether a target is in the viewport or not, a group of graphic objects corresponds to it one-to-one, which occupies a large amount of memory when the count reaches the millions. Meanwhile, visibility judgment, graph construction, and frame-by-frame updating of targets are computation-intensive Central Processing Unit (CPU) work; the computation for millions of targets imposes a heavy load on the CPU and seriously affects the smoothness of system operation. In addition, GPU performance also becomes a bottleneck in real-time rendering of massive three-dimensional graphics: frequent, fragmented rendering requests generate a large number of draw batches, and excessive data transmission prevents the GPU's performance from being fully exploited, resulting in a low rendering frame rate.
For three-dimensional visual representation of situation targets, point-icon-based and density-map-based methods are commonly used. In the point-icon method, each target is represented by a dot, and the target's situation information is expressed through the dot's size and color; however, when the number of dots reaches the millions within a limited scene extent, the icons overlap and crowd, and the visual expression becomes cluttered. The density-map method encodes the geographic density of targets with color, but it relies on statistical preprocessing, cannot meet the real-time requirement of million-magnitude data, and loses the spatial information of individual target entities. Therefore, at the global scale, million-magnitude data needs a comprehensive, simplified expression that both reduces the number of displayed points and still reflects the spatial distribution characteristics of the targets.
On the other hand, when the camera height decreases and the observation range narrows, the specific information and corresponding graphics of individual targets need to be displayed, and a key problem in this process is how to quickly filter and create the target graphic objects visible in the viewport. When the camera view angle changes, traversing all target points for a visibility judgment in every frame consumes a large amount of computing resources and makes scene roaming not smooth. Furthermore, frequent creation and deletion of target graphic objects should be avoided by reusing existing graphics as much as possible, to reduce unnecessary performance overhead.
In summary, a solution capable of breaking through the performance bottleneck is urgently needed to meet the real-time drawing and smooth scene scheduling of the million-magnitude three-dimensional situation target.
Disclosure of Invention
The invention aims to provide a drawing strategy for large batches of three-dimensional objects that renders in real time and keeps scene operation smooth, aimed at situation data of million-magnitude maritime moving targets across the global ocean.
The technical solution for realizing the purpose of the invention is as follows:
a real-time drawing method of a million-magnitude three-dimensional situation target comprises the following steps:
step 1, under a global scale, establishing a regular longitude and latitude grid index for a million-magnitude situation target in an auxiliary thread, simultaneously updating the regular longitude and latitude grid index at regular time according to returned data of the situation target, and returning an index result to a main thread;
step 2, under a local scale, incrementally scheduling visible situation targets in a scene in the main thread, namely determining the situation targets needing to be displayed when the roaming height is smaller than a preset height;
step 3, applying and recovering a graphic object of the situation target in the roaming process, wherein the graphic object is used for expressing the situation target;
and 4, performing efficient rendering on the graphic object through a graphics processor to obtain a three-dimensional situation drawing result of the situation target.
Further, in one implementation, the step 1 includes: and under the global scale, namely when the roaming height is greater than or equal to a preset height, comprehensively expressing a plurality of situation targets in each regular longitude and latitude grid by using a point symbol.
Further, in one implementation, the step 1 includes:
1-1, dividing a global range into longitude and latitude grids of 1 degree multiplied by 1 degree under the global scale, dividing all situation targets into corresponding longitude and latitude grids according to the geographic positions of the situation targets, respectively calculating the longitude and latitude of all situation targets in each longitude and latitude grid to obtain an average value, calculating to obtain a new longitude and latitude as the positions of the point symbols so as to represent all situation targets in the longitude and latitude grids, and setting the radius of the point symbols according to the number of the situation targets in a grading manner;
step 1-2, constructing a bidirectional mapping relation between the longitude and latitude grid and a situation target, namely a regular longitude and latitude grid index of the situation target;
step 1-3, collecting the positions and radii of all the point symbols according to a preset rule to generate a counting result, constructing a vertex buffer of the graph according to the counting result, and putting the point symbols into a rendering queue to wait for the graphics processor to render;
and 1-4, updating the regular longitude and latitude grid index of the situation target at regular intervals in the auxiliary thread according to returned data of the situation target, and returning the updated regular longitude and latitude grid index of the situation target to the main thread for scene scheduling in the subsequent steps, wherein the regular time is the average time interval of returned situation target data.
Further, in one implementation, the step 2 includes: under the local scale, when the camera posture changes, firstly, acquiring a maximized visible longitude and latitude range through the current camera posture in a three-dimensional scene, and quickly traversing a regular longitude and latitude grid falling within the maximized visible longitude and latitude range; further screening the regular longitude and latitude grids falling within the maximized visible longitude and latitude range by utilizing a view cone to obtain a visible situation target; and finally, comparing the target set of the visible situation targets with the target set of the visible situation targets before the posture of the camera changes, and determining the situation targets moving into and out of the viewport of the camera.
Further, in an implementation manner, the step 2 includes:
step 2-1, acquiring a maximized visible longitude and latitude range according to the current posture of the camera;
step 2-2, judging grid by grid whether each regular longitude and latitude grid is within the maximized visible longitude and latitude range, and passing all situation targets contained in grids that are completely or partially within the maximized visible longitude and latitude range to the next judgment;
step 2-3, judging whether the situation targets are in a view cone of a camera one by one, if so, determining that the situation targets are visible situation targets, and adding the visible situation targets into a target set of the visible situation targets;
and 2-4, comparing the target set of the visible situation targets with the target set before the posture of the camera is changed, and acquiring the situation targets newly moved into the viewport and the situation targets moved out of the viewport.
Further, in one implementation, the step 2-1 includes:
step 2-1-1, calculating direction vectors of four edges of the view cone body by utilizing the Cartesian coordinates, the vertical field angle, the view port width-height ratio and the sight line direction of the camera;
step 2-1-2, calculating the four direction vectors of the viewport, namely the straight-up, straight-down, straight-left, and straight-right direction vectors, according to the directions of the four edges;
step 2-1-3, intersecting the 8 directions obtained in the step 2-1-1 and the step 2-1-2 with the three-dimensional earth by using a cosine law to obtain 8 intersection points, wherein if a certain direction does not intersect with the three-dimensional earth, a vector corresponding to the direction is rotated by a certain angle around a camera point to a sight line direction, so that the vector after rotation adjustment is tangent to the earth, and the tangent point is used as the intersection point;
2-1-4, converting the acquired Cartesian coordinates of the 8 intersection points into longitude and latitude coordinates, and solving the maximum value and the minimum value of the longitude and the latitude, wherein a rectangular range enclosed by the maximum value and the minimum value of the longitude and the latitude is a maximized visible longitude and latitude range;
step 2-1-5, if the north pole or the south pole is visible in the maximized visible longitude and latitude range, correcting the maximized visible longitude and latitude range, and correcting the longitude range of the maximized visible longitude and latitude range to be-180 degrees to 180 degrees;
if the north pole is visible in the maximized visible longitude and latitude range, correcting the latitude range of the maximized visible longitude and latitude range into a minimum latitude value to 90 degrees; and if the south pole is visible in the maximized visible latitude and longitude range, modifying the latitude range of the maximized visible latitude and longitude range to be-90 degrees to the maximum latitude value.
Further, in one implementation, the step 3 includes:
step 3-1, constructing a corresponding object pool for each graphic object, wherein the graphic objects comprise icon graphics, label graphics and model graphics;
step 3-2, applying for an available graphic object from an object pool corresponding to the current roaming height for each situation target moved into the viewport, setting situation information of the situation target, and adding the graphic object into a scene for rendering;
and 3-3, returning the graphic object corresponding to the situation target to an object pool for each situation target moved out of the viewport, and setting the state of the graphic object to be an available state.
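The pool-based acquire-and-return cycle of steps 3-1 to 3-3 can be sketched as below. This is a minimal illustration, not the patent's implementation; the class and method names (GraphicObjectPool, acquire, release, Icon) are assumed for the example.

```python
class GraphicObjectPool:
    """One pool per graphic-object kind (icon, label, model), per step 3-1."""

    def __init__(self, factory):
        self.factory = factory     # creates a new graphic object only when the pool is empty
        self.available = []        # released objects waiting for reuse

    def acquire(self, situation_info):
        """Step 3-2: reuse an available object if possible, then set situation info."""
        obj = self.available.pop() if self.available else self.factory()
        obj.update(situation_info)
        return obj

    def release(self, obj):
        """Step 3-3: return the object to the pool and mark it available."""
        obj.clear()
        self.available.append(obj)


class Icon:
    """Minimal stand-in for an icon graphic object."""

    def __init__(self):
        self.info = None

    def update(self, info):
        self.info = info           # batch number, speed, heading, ...

    def clear(self):
        self.info = None           # back to the available state
```

Reusing a released object avoids repeated allocation and deallocation as targets move in and out of the viewport, which is the stated purpose of the object pool.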
Further, in an implementation manner, the situation information of the situation target in step 3 includes a batch number, a speed, and a heading, and different situation information of each situation target is expressed by using different graphical objects respectively.
Further, in one implementation, the step 4 includes: merging a plurality of graphic objects in each object pool into the same batch by using an instant technology and rendering; if the drawing quantity of the primitives in a certain batch reaches the upper limit, a new batch of loading graphic objects is created, and meanwhile, complex triangular surface construction, horizontal line clipping calculation and normal vector calculation are transplanted into a graphic processor by utilizing a graphic processor programming technology, and finally the three-dimensional situation of the target is drawn on a screen.
Further, in an implementation manner, the step 4 includes:
step 4-1, merging all the graphic objects to be rendered in each object pool into the same batch;
step 4-2, judging in the vertex shader whether the position of the graphic object to be rendered is occluded by the earth, namely, drawing a tangent from the camera position to the earth and taking the distance from the camera to the tangent point as the farthest visible distance; if the distance from the camera to the graphic object is smaller than the farthest visible distance, the position of the graphic object is not occluded by the earth;
4-3, performing triangular patch expansion in a geometric shader aiming at icon graphs and label graphs in the graphic objects to form a rectangle which is composed of 2 triangles and has a specified pixel width and a specified pixel height; calculating a normal vector of a triangular surface in a geometric shader for illumination calculation aiming at a model graph in the graph object;
and 4-4, multiplying the vertex of each graphic object by a world matrix and a viewport projection matrix to obtain a final rendered screen coordinate, and coloring pixels in a pixel shader according to textures to obtain the three-dimensional situation drawing result.
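The earth-occlusion test of step 4-2 runs in the vertex shader in the patent; the same geometry can be sketched on the CPU as below. The earth is assumed to be a sphere centered at the origin, and the function name is illustrative.

```python
import math


def horizon_visible(camera, obj, earth_radius):
    """Step 4-2 sketch: the farthest visible distance is the length of the tangent
    from the camera to the earth, sqrt(d^2 - R^2), where d is the camera's distance
    to the earth center; an object farther from the camera than that is occluded."""
    d2 = sum(c * c for c in camera)                 # squared camera distance to earth center
    max_visible = math.sqrt(d2 - earth_radius ** 2)  # tangent length
    return math.dist(camera, obj) < max_visible
```

For a camera at twice the earth radius, surface points on the near side pass the test while antipodal points fail, matching the intent of clipping graphics hidden behind the globe.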
Compared with the prior art, the invention has the following remarkable advantages: (1) million-level targets are partitioned with regular longitude and latitude grids and comprehensively simplified into point symbols, which greatly reduces the number of symbols to be displayed at the global scale, improves rendering efficiency, and enhances the visual transmission of information; (2) an algorithm for acquiring the visible longitude and latitude range from the camera pose is provided; combined with the regular longitude and latitude grid index, the grids falling within that range can be traversed quickly, further reducing the computation consumed by view-cone visibility judgment; (3) scene scheduling is optimized by incrementally creating and deleting the target graphic objects in the viewport and reusing graphic objects as much as possible via the object-pool technique, thereby reducing the time overhead of frequently allocating and freeing memory; (4) all graphics in the same object pool are merged into one batch by means of the instancing technique and rendered together, while complex operations such as triangular-surface construction and horizon clipping are computed in parallel in the GPU stage, greatly improving the rendering efficiency of three-dimensional graphics.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic workflow diagram of a method for real-time rendering of a million-scale three-dimensional situation target according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a workflow of updating a grid index in a method for real-time rendering a million-scale three-dimensional situation target according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a calculation of vectors of edges of a visual cone in the method for real-time rendering a million-scale three-dimensional situation target according to the embodiment of the present invention;
fig. 4 is a schematic diagram of calculating intersection between a vector emitted from a viewpoint and the earth in the real-time rendering method for a million-scale three-dimensional situation target according to the embodiment of the present invention;
fig. 5 is a schematic calculation diagram of a GPU in the real-time rendering method for a million-scale three-dimensional situation target according to the embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The embodiment of the invention discloses a method for drawing a million-magnitude three-dimensional situation target in real time.
As shown in fig. 1, the method for real-time rendering a million-scale three-dimensional situation target according to this embodiment includes the following steps:
step 1, under a global scale, establishing a regular longitude and latitude grid index for a million-magnitude situation target in an auxiliary thread, simultaneously updating the regular longitude and latitude grid index at regular time according to returned data of the situation target, and returning an index result to a main thread; specifically, in this embodiment, the preset height is a height preset by the user as needed, and the global scale is the scale at which the roaming height is greater than or equal to the preset height.
Step 2, under a local scale, incrementally scheduling visible situation targets in a scene in the main thread, namely determining the situation targets needing to be displayed when the roaming height is smaller than a preset height;
step 3, applying and recovering a graphic object of the situation target in the roaming process, wherein the graphic object is used for expressing the situation target;
and 4, performing efficient rendering on the graphic object through a graphics processor to obtain a three-dimensional situation drawing result of the situation target.
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 1 includes: and under the global scale, namely when the roaming height is greater than or equal to a preset height, comprehensively expressing a plurality of situation targets in each regular longitude and latitude grid by using a point symbol.
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 1 includes:
1-1, dividing a global range into longitude and latitude grids of 1 degree multiplied by 1 degree under the global scale, dividing all situation targets into corresponding longitude and latitude grids according to the geographic positions of the situation targets, respectively calculating the longitude and latitude of all situation targets in each longitude and latitude grid to obtain an average value, calculating to obtain a new longitude and latitude as the positions of the point symbols so as to represent all situation targets in the longitude and latitude grids, and setting the radius of the point symbols according to the number of the situation targets in a grading manner;
specifically, in this embodiment, the global scope is divided into regular 1° × 1° longitude and latitude grids, 360 × 180 grids in total; all targets are assigned to corresponding grids according to their geographic positions, and the average longitude and latitude of all targets in each grid is taken as the representative point position; the radius of the point symbol is graded according to the number of targets: 1 pixel when a grid contains 1 to 5 targets, 2 pixels when it contains 6 to 10 targets, and so on.
Step 1-2, constructing a bidirectional mapping relation between the longitude and latitude grid and the situation targets, namely the regular longitude and latitude grid index of the situation targets; in this embodiment, through this step, each grid records the targets it contains, that is, a mapping from grid to targets is established for efficient visibility determination in the subsequent scene-scheduling process; meanwhile, each target records the grid in which it lies, that is, a mapping from target to grid is established for rapidly updating the spatial grid index structure.
Step 1-3, counting the positions of all the point symbols and the radius of the point symbols according to a preset rule, generating a counting result, constructing a vertex cache of a graph according to the counting result, and putting the point symbols into a rendering queue to wait for a graph processor to render; in this embodiment, the preset rule is, for example, counted in the order of X-Y-Z-radius, so as to construct a vertex buffer area of the graph.
And 1-4, updating the regular longitude and latitude grid index of the situation target at regular intervals in the auxiliary thread according to returned data of the situation target, and returning the updated regular longitude and latitude grid index of the situation target to the main thread for scene scheduling in the subsequent steps, wherein the regular interval is the average time interval of returned situation target data. In this embodiment, this interval is 5 minutes; the number of targets included in each update varies with the actual situation and is only a small part of the total, and the positions and sizes of the point symbols need to be updated accordingly. As shown in fig. 2, for each target whose position is updated, if it is still in the original grid, the position of the point symbol is recalculated according to the mapping relationship and the size is unchanged; if it has moved to an adjacent grid, the target is removed from the original grid (whose target count decreases by one) and added to the new grid (whose target count increases by one), and both grids recalculate their representative point positions. The grid index is updated in the auxiliary thread at this regular interval, and the index result is returned to the main thread.
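The grid-index construction and timed update of steps 1-1 to 1-4 can be sketched as below: 1° × 1° binning, the bidirectional grid-to-target mapping, the graded point-symbol radius, and the incremental move between cells. All names (GridIndex, graded_radius, cell_of) are illustrative assumptions, not from the patent.

```python
from collections import defaultdict


def cell_of(lon, lat):
    """Index of the 1-degree x 1-degree grid cell containing a position."""
    return (int(lon // 1), int(lat // 1))


def graded_radius(n):
    """Graded point-symbol radius: 1-5 targets -> 1 px, 6-10 -> 2 px, and so on."""
    return (n + 4) // 5


class GridIndex:
    def __init__(self):
        self.cell_to_targets = defaultdict(set)  # grid -> targets it contains
        self.target_to_cell = {}                 # target -> grid it lies in
        self.positions = {}                      # target -> (lon, lat)

    def insert(self, tid, lon, lat):
        c = cell_of(lon, lat)
        self.cell_to_targets[c].add(tid)
        self.target_to_cell[tid] = c
        self.positions[tid] = (lon, lat)

    def update(self, tid, lon, lat):
        """Timed incremental update from returned target data (step 1-4)."""
        old, new = self.target_to_cell[tid], cell_of(lon, lat)
        self.positions[tid] = (lon, lat)
        if new != old:                           # moved to another grid cell
            self.cell_to_targets[old].discard(tid)
            self.cell_to_targets[new].add(tid)
            self.target_to_cell[tid] = new

    def symbol(self, cell):
        """Representative point (mean lon/lat) and graded radius of one cell."""
        ts = self.cell_to_targets[cell]
        lon = sum(self.positions[t][0] for t in ts) / len(ts)
        lat = sum(self.positions[t][1] for t in ts) / len(ts)
        return (lon, lat), graded_radius(len(ts))
```

In the patent this structure lives in the auxiliary thread; only the resulting index is handed back to the main thread for scheduling.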
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 2 includes: under the local scale, when the camera posture changes, firstly, acquiring a maximized visible longitude and latitude range through the current camera posture in a three-dimensional scene, and quickly traversing a regular longitude and latitude grid falling within the maximized visible longitude and latitude range; further screening the regular longitude and latitude grids falling within the maximized visible longitude and latitude range by utilizing a view cone to obtain a visible situation target; and finally, comparing the target set of the visible situation targets with the target set of the visible situation targets before the posture of the camera changes, and determining the situation targets moving into and out of the viewport of the camera.
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 2 includes:
step 2-1, acquiring a maximized visible longitude and latitude range according to the current posture of the camera;
step 2-2, judging grid by grid whether each regular longitude and latitude grid is within the maximized visible longitude and latitude range, and passing all situation targets contained in grids that are completely or partially within the maximized visible longitude and latitude range to the next judgment; in this embodiment, the user's operation of the three-dimensional scene causes the camera pose to change, and depending on the camera pose, many targets may not be in the view cone, so the result needs further judgment in the next step.
Step 2-3, judging whether the situation targets are in a view cone of a camera one by one, if so, determining that the situation targets are visible situation targets, and adding the visible situation targets into a target set of visible situation targets;
and 2-4, comparing the target set of the visible situation targets with the target set before the posture of the camera is changed, and acquiring the situation targets newly moved into the viewport and the situation targets moved out of the viewport.
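The coarse-to-fine filtering and set comparison of steps 2-2 to 2-4 can be sketched as below. The per-target frustum test is taken as a caller-supplied callback, and all function names are assumptions for the example.

```python
def visible_targets(cell_to_targets, lonlat_range, in_frustum):
    """Steps 2-2/2-3: coarse filter by 1-degree cells, then per-target frustum test."""
    (lon_min, lon_max), (lat_min, lat_max) = lonlat_range
    visible = set()
    for (clon, clat), targets in cell_to_targets.items():
        # keep cells completely or partially inside the maximized visible range
        if clon + 1 < lon_min or clon > lon_max or clat + 1 < lat_min or clat > lat_max:
            continue
        visible.update(t for t in targets if in_frustum(t))
    return visible


def incremental_schedule(prev_visible, now_visible):
    """Step 2-4: diff against the visible set from before the camera pose changed."""
    moved_in = now_visible - prev_visible    # create/acquire graphics for these
    moved_out = prev_visible - now_visible   # return their graphics to the pool
    return moved_in, moved_out
```

Only the two difference sets are acted upon each frame, which is what makes the scheduling incremental rather than a full rebuild.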
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 2-1 includes:
step 2-1-1, calculating direction vectors of four edges of the view cone body by utilizing the Cartesian coordinates, the vertical field angle, the view port width-height ratio and the sight line direction of the camera;
in this step, the direction vectors of the four edges of the view cone are calculated by using the cartesian coordinates of the camera, the vertical field angle FovY and the view port width-to-height ratio Aspect, and the specific calculation mode is shown in fig. 3. Wherein Eye is the viewpoint, i.e. the camera described in this embodiment,
Figure GDA0002725998900000081
respectively the direction vectors of the upper direction, the right direction, the front direction and the upper right edge of the observation coordinate system
Figure GDA0002725998900000082
The calculation formula of (a) is as follows:
Figure GDA0002725998900000091
direction vector of upper left edge
Figure GDA0002725998900000092
The calculation formula of (a) is as follows:
Figure GDA0002725998900000093
by analogy, the direction vectors of other two edges can be calculated
Figure GDA0002725998900000094
And
Figure GDA0002725998900000095
Step 2-1-2, calculating the four direction vectors of the viewport, namely the straight-up, straight-down, straight-left, and straight-right direction vectors, according to the directions of the four edges. In this step, the viewport's straight-up vector d_up, straight-down vector d_down, straight-left vector d_left, and straight-right vector d_right are calculated from the four edge directions; for example, the viewport's straight-up vector d_up is the angle-bisecting vector of e_ur and e_ul.
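The edge-vector and bisector computations of steps 2-1-1 and 2-1-2 can be sketched as below. This is a reconstruction assuming a standard symmetric pinhole frustum; the vector names (e_ur, d_up, etc.) follow the reconstructed formulas above and are not from the original text.

```python
import math


def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)


def add(*vs):
    return tuple(sum(cs) for cs in zip(*vs))


def scale(v, s):
    return tuple(c * s for c in v)


def frustum_edge_dirs(front, up, right, fovy_rad, aspect):
    """Step 2-1-1: unit direction vectors of the four view-cone edges
    (upper-right, upper-left, lower-right, lower-left)."""
    ty = math.tan(fovy_rad / 2)   # half-height of the image plane at unit depth
    tx = aspect * ty              # half-width at unit depth
    e_ur = unit(add(front, scale(up, ty), scale(right, tx)))
    e_ul = unit(add(front, scale(up, ty), scale(right, -tx)))
    e_lr = unit(add(front, scale(up, -ty), scale(right, tx)))
    e_ll = unit(add(front, scale(up, -ty), scale(right, -tx)))
    return e_ur, e_ul, e_lr, e_ll


def bisector(a, b):
    """Step 2-1-2: angle-bisecting vector of two unit vectors,
    e.g. the viewport's straight-up direction d_up = bisector(e_ur, e_ul)."""
    return unit(add(a, b))
```

With a 90° vertical field angle and aspect 1, the upper-right edge points along (1, 1, -1) normalized, and the bisector of the two upper edges gives the viewport's straight-up direction.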
Step 2-1-3, intersecting the 8 directions obtained in step 2-1-1 and step 2-1-2 with the three-dimensional Earth by using the law of cosines to obtain 8 intersection points, wherein if a certain direction does not intersect the three-dimensional Earth, the vector corresponding to that direction is rotated by a certain angle around the camera point toward the sight line direction, so that the adjusted vector is tangent to the Earth, and the tangent point is used as the intersection point. In this step, as shown in Fig. 4, assume a direction d intersects the Earth. The distance L from the viewpoint Eye to the sphere center Center, the Earth radius R, and the angle θ between d and the vector from Eye to Center are known. By the law of cosines, R² = L² + t² - 2·L·t·cosθ, so the distance from the viewpoint to the intersection point is

t = L·cosθ - √(R² - L²·sin²θ)

from which the coordinates of the intersection point P1 = Eye + t·d are obtained. If a certain direction d' does not intersect the Earth, the vector d' is rotated around the viewpoint toward the sight line direction by a certain angle so that the adjusted vector is tangent to the Earth, and the tangent point P2 is used as the intersection point.
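The law-of-cosines intersection of step 2-1-3 can be sketched as follows. The helper name is hypothetical, a spherical Earth centred at `center` is assumed, and the direction vector is assumed to be unit length:

```python
import math

def ray_earth_intersection(eye, d, center, R):
    """Intersect the ray Eye + t*d with a sphere via the law of cosines.

    Returns the near intersection point, or None when the direction misses
    the Earth (the caller then rotates d toward the sight line until the
    adjusted vector is tangent, as described in step 2-1-3).
    """
    to_center = tuple(c - e for c, e in zip(center, eye))
    L = math.sqrt(sum(c * c for c in to_center))          # |Eye - Center|
    cos_t = sum(a * b for a, b in zip(d, to_center)) / L  # cos(theta), d unit
    # Law of cosines: R^2 = L^2 + t^2 - 2*L*t*cos(theta)
    disc = R * R - L * L * (1.0 - cos_t * cos_t)          # R^2 - L^2*sin^2(theta)
    if disc < 0.0 or cos_t <= 0.0:
        return None                                       # ray misses the Earth
    t = L * cos_t - math.sqrt(disc)                       # near root of quadratic
    return tuple(e + t * di for e, di in zip(eye, d))
```

Taking the smaller root of the quadratic yields the intersection point on the near side of the sphere, which is the visible one.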
Step 2-1-4, converting the acquired Cartesian coordinates of the 8 intersection points into longitude and latitude coordinates, and finding the maximum and minimum values of longitude and latitude, wherein the rectangular range enclosed by these maximum and minimum values is the maximized visible longitude and latitude range;
step 2-1-5, if the north pole or the south pole is visible in the maximized visible longitude and latitude range, correcting the maximized visible longitude and latitude range, and correcting the longitude range of the maximized visible longitude and latitude range to be-180 degrees to 180 degrees;
if the north pole is visible in the maximized visible longitude and latitude range, correcting the latitude range of the maximized visible longitude and latitude range into a minimum latitude value to 90 degrees; and if the south pole is visible in the maximized visible latitude and longitude range, correcting the latitude range of the maximized visible latitude and longitude range to be-90 degrees to the maximum latitude value.
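Steps 2-1-4 and 2-1-5 can be sketched as follows. The conversion assumes a sphere centred at the origin with the z axis through the poles (my convention, not stated in the embodiment), and the pole-visibility flags are supplied by the caller:

```python
import math

def visible_latlon_range(points, north_pole_visible, south_pole_visible):
    """Maximized visible lat/lon range from the 8 intersection points."""
    lons, lats = [], []
    for x, y, z in points:
        lons.append(math.degrees(math.atan2(y, x)))       # longitude in degrees
        lats.append(math.degrees(math.asin(
            z / math.sqrt(x * x + y * y + z * z))))       # latitude in degrees
    lon_min, lon_max = min(lons), max(lons)
    lat_min, lat_max = min(lats), max(lats)
    if north_pole_visible or south_pole_visible:
        lon_min, lon_max = -180.0, 180.0                  # full longitude band
        if north_pole_visible:
            lat_max = 90.0                                # extend up to the pole
        if south_pole_visible:
            lat_min = -90.0                               # extend down to the pole
    return (lon_min, lon_max), (lat_min, lat_max)
```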
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 3 includes:
step 3-1, constructing a corresponding object pool for each graphic object, wherein the graphic objects comprise icon graphics, label graphics and model graphics;
step 3-2, applying for an available graphic object from an object pool corresponding to the current roaming height for each situation target moved into the viewport, setting situation information of the situation target, and adding the graphic object into a scene for rendering;
Step 3-3, returning the graphic object corresponding to each situation target moved out of the viewport to the object pool, and setting the state of the graphic object to the available state. In this embodiment, when the object pool is empty, 1000 objects are created at a time and added to the object pool, so as to avoid frequently allocating memory space.
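The object-pool scheme of steps 3-1 to 3-3 can be sketched as follows. This is an illustrative Python sketch; the dict-based stand-in for a renderable graphic object and the method names are assumptions:

```python
class GraphicObjectPool:
    """Object pool for one graphic type (icon, label or model).

    When the pool runs empty, CHUNK objects are created in one go, mirroring
    the embodiment's batch of 1000, to avoid frequent memory allocation.
    """
    CHUNK = 1000

    def __init__(self, factory):
        self._factory = factory       # callable producing a fresh graphic object
        self._free = []

    def acquire(self, situation_info):
        if not self._free:            # pool exhausted: allocate a whole chunk
            self._free.extend(self._factory() for _ in range(self.CHUNK))
        obj = self._free.pop()
        obj['info'] = situation_info  # set batch number, speed, heading, ...
        obj['in_use'] = True
        return obj                    # caller adds it to the scene for rendering

    def release(self, obj):
        obj['in_use'] = False         # back to the available state
        self._free.append(obj)

# the factory here just builds a dict standing in for a real renderable object
pool = GraphicObjectPool(lambda: {'info': None, 'in_use': False})
```

Targets moving into the viewport call `acquire`, targets moving out call `release`, so graphic objects are recycled rather than repeatedly constructed and destroyed.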
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the situation information of the situation target in step 3 includes a batch number, a speed and a heading, and the different situation information of each situation target is expressed by different graphic objects respectively.
In the method for real-time rendering of a million-scale three-dimensional situation target according to this embodiment, the step 4 includes: merging a plurality of graphic objects in each object pool into the same batch by using an instancing technique and rendering them; if the number of primitives drawn in a certain batch reaches the upper limit, a new batch is created to hold further graphic objects. Meanwhile, complex triangular surface construction, horizon line clipping calculation and normal vector calculation are moved onto the graphics processor by means of GPU programming techniques, and the three-dimensional situation of the target is finally drawn on the screen.
As shown in fig. 5, in the real-time rendering method for a million-scale three-dimensional situation target according to this embodiment, the step 4 includes:
step 4-1, merging all the graphic objects to be rendered in each object pool into the same batch;
Step 4-2, judging in a vertex shader whether the position of the graphic object to be rendered is occluded by the Earth: a tangent line is drawn from the camera position to the Earth, and the distance from the camera to the tangent point is taken as the farthest visible distance; if the distance from the camera to the graphic object is smaller than the farthest visible distance, the position of the graphic object is considered not occluded by the Earth;
Step 4-3, performing triangular patch expansion in a geometry shader for the icon graphics and label graphics among the graphic objects, to form a rectangle composed of 2 triangles with a specified pixel width and pixel height; for the model graphics among the graphic objects, calculating the normal vector of each triangular surface in the geometry shader for illumination calculation;
Step 4-4, multiplying the vertices of each graphic object by the world matrix and the view-projection matrix to obtain the final rendered screen coordinates, and coloring the pixels in a pixel shader according to the texture to obtain the three-dimensional situation drawing result.
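The horizon-occlusion test of step 4-2, which the embodiment performs per vertex in the vertex shader, can be sketched on the CPU as follows (assuming a spherical Earth of radius R centred at the origin; the function name is illustrative):

```python
import math

def occluded_by_earth(cam, obj, R):
    """Distance-based horizon test from step 4-2.

    The farthest visible distance is the length of the tangent line from the
    camera to the sphere, sqrt(|cam|^2 - R^2); any point closer to the camera
    than that is on the near side of the horizon and therefore visible.
    """
    cam_dist2 = sum(c * c for c in cam)            # camera assumed outside Earth
    max_visible = math.sqrt(cam_dist2 - R * R)     # camera-to-tangent-point distance
    obj_dist = math.sqrt(sum((c - o) ** 2 for c, o in zip(cam, obj)))
    return obj_dist >= max_visible                 # True: hidden behind the Earth
```

This is a conservative distance comparison rather than an exact ray test, which is what makes it cheap enough to evaluate for every vertex in the shader.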
In this embodiment, about one million targets are displayed. At the macro scale, the number of point symbols after grid division is about sixty thousand; at the local scale, the maximum visible number of the three kinds of graphic objects is about ten thousand. The average frame rate during whole-scene browsing is about 50 FPS, which satisfies fast rendering and smooth operation of the scene.
Compared with the prior art, the invention has the following remarkable advantages: (1) million-scale targets are partitioned by regular longitude and latitude grids and expressed in simplified, aggregated form by point symbols, which greatly reduces the number of symbols to be displayed at the global scale, improves rendering efficiency and strengthens the visual transmission of information; (2) an algorithm for acquiring the visible longitude and latitude range from the camera attitude is provided, with which the grids falling within that range can be traversed quickly in combination with the regular longitude and latitude grid index, further reducing the computation consumed by view-cone visibility testing; (3) scene scheduling is optimized: target graphic objects in the viewport are created and deleted incrementally, and graphic objects are reused as far as possible by means of an object-pool technique, reducing the time overhead of frequently allocating and destroying memory space; (4) all graphics in the same object pool are merged into one batch and rendered together by means of instancing, while complex operations such as triangular surface construction and horizon line clipping are computed in parallel on the GPU, greatly improving the rendering efficiency of the three-dimensional graphics.
In specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when the program is executed, the program may include some or all of the steps in each embodiment of the method for real-time rendering of a million-scale three-dimensional situation object provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (9)

1. A real-time drawing method of a million-magnitude three-dimensional situation target is characterized by comprising the following steps:
step 1, under a global scale, establishing a regular longitude and latitude grid index for a million-magnitude situation target in an auxiliary thread, simultaneously updating the regular longitude and latitude grid index at regular time according to returned data of the situation target, and returning an index result to a main thread;
step 2, under a local scale, incrementally scheduling visible situation targets in a scene in the main thread, namely determining the situation targets needing to be displayed when the roaming height is smaller than a preset height;
step 3, applying and recovering a graphic object of the situation target in the roaming process, wherein the graphic object is used for expressing the situation target;
step 4, performing efficient rendering on the graphic object through a graphics processor to obtain a three-dimensional situation drawing result of the situation target;
the step 3 comprises the following steps:
step 3-1, constructing a corresponding object pool for each graphic object, wherein the graphic objects comprise icon graphics, label graphics and model graphics;
step 3-2, applying for an available graphic object from an object pool corresponding to the current roaming height for each situation target moved into the viewport, setting situation information of the situation target, and adding the graphic object into a scene for rendering;
and 3-3, returning the graphic object corresponding to each situation target moved out of the viewport to an object pool, and setting the state of the graphic object to be an available state.
2. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 1, wherein the step 1 comprises: and under the global scale, namely when the roaming height is greater than or equal to a preset height, comprehensively expressing a plurality of situation targets in each regular longitude and latitude grid by using a point symbol.
3. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 2, wherein the step 1 comprises:
1-1, dividing a global range into longitude and latitude grids of 1 degree multiplied by 1 degree under the global scale, dividing all the situation targets into corresponding longitude and latitude grids according to the geographic positions of the situation targets, respectively calculating the longitude and latitude of all the situation targets in each longitude and latitude grid to obtain an average value, calculating to obtain new longitude and latitude as the positions of the point symbols so as to represent all the situation targets in the longitude and latitude grids, and setting the radius of the point symbols according to the number of the situation targets in a grading manner;
step 1-2, constructing a bidirectional mapping relation between the longitude and latitude grids and a situation target, namely a regular longitude and latitude grid index of the situation target;
step 1-3, counting the positions of all the point symbols and the radius of the point symbols according to a preset rule, generating a statistical result, constructing a vertex cache of a graph according to the statistical result, and putting the point symbols into a rendering queue to wait for rendering of a graph processor;
and 1-4, updating the regular longitude and latitude grid index of the situation target according to returned data of the situation target at set time intervals in the auxiliary thread, and returning the updated regular longitude and latitude grid index of the situation target to the main thread for scene scheduling in subsequent steps, wherein the set time is the average time interval for returning the situation target data.
4. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 1, wherein the step 2 comprises: under the local scale, when the camera posture changes, firstly, acquiring a maximized visible longitude and latitude range through the current camera posture in a three-dimensional scene, and quickly traversing a regular longitude and latitude grid falling within the maximized visible longitude and latitude range; further screening the regular longitude and latitude grids falling within the maximized visible longitude and latitude range by utilizing a view cone to obtain a visible situation target; and finally, comparing the target set of the visible situation targets with the target set of the visible situation targets before the posture of the camera changes, and determining the situation targets moving into and out of the viewport of the camera.
5. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 4, wherein the step 2 comprises:
step 2-1, acquiring a maximized visible longitude and latitude range according to the current posture of the camera;
step 2-2, judging whether the regular longitude and latitude grids are within the maximized visible longitude and latitude range one by one, and judging all situation targets contained in the regular longitude and latitude grids aiming at the regular longitude and latitude grids which are completely within the maximized visible longitude and latitude range and partially within the maximized visible longitude and latitude range;
step 2-3, judging whether the situation targets are in a view cone of a camera one by one, if so, determining that the situation targets are visible situation targets, and adding the visible situation targets into a target set of visible situation targets;
and 2-4, comparing the target set of the visible situation targets with the target set before the posture of the camera is changed, and acquiring the situation targets newly moved into the viewport and the situation targets moved out of the viewport.
6. The method for real-time rendering of the million-magnitude three-dimensional situation target according to claim 5, wherein the step 2-1 comprises:
step 2-1-1, calculating direction vectors of four edges of the view cone body by utilizing the Cartesian coordinates, the vertical field angle, the view port width-height ratio and the sight line direction of the camera;
step 2-1-2, calculating four direction vectors of a view port, namely a right upper direction vector, a right lower direction vector, a right left direction vector and a right direction vector according to the directions of the four edges;
step 2-1-3, intersecting the 8 directions obtained in the step 2-1-1 and the step 2-1-2 with the three-dimensional earth by using a cosine law to obtain 8 intersection points, wherein if a certain direction does not intersect with the three-dimensional earth, a vector corresponding to the direction is rotated around a camera point to a sight line direction, so that the vector after rotation adjustment is tangent to the earth, and the tangent point is used as the intersection point;
2-1-4, converting the acquired Cartesian coordinates of the 8 intersection points into longitude and latitude coordinates, and solving the maximum value and the minimum value of the longitude and the latitude, wherein a rectangular range enclosed by the maximum value and the minimum value of the longitude and the latitude is a maximized visible longitude and latitude range;
step 2-1-5, if the north pole or the south pole is visible in the maximized visible longitude and latitude range, correcting the maximized visible longitude and latitude range, and correcting the longitude range of the maximized visible longitude and latitude range to be-180 degrees to 180 degrees;
if the north pole is visible in the maximized visible longitude and latitude range, correcting the latitude range of the maximized visible longitude and latitude range into a minimum latitude value to 90 degrees; and if the south pole is visible in the maximized visible latitude and longitude range, modifying the latitude range of the maximized visible latitude and longitude range to be-90 degrees to the maximum latitude value.
7. The method as claimed in claim 6, wherein the situation information of the situation target in step 3 includes batch number, speed and heading, and different situation information of each situation target is expressed by different graphic objects respectively.
8. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 1, wherein the step 4 comprises: merging a plurality of graphic objects in each object pool into the same batch by using an instancing technique and rendering them; if the number of primitives drawn in a certain batch reaches the upper limit, a new batch is created to hold further graphic objects; meanwhile, complex triangular surface construction, horizon line clipping calculation and normal vector calculation are moved onto the graphics processor by means of GPU programming techniques, and the three-dimensional situation of the target is finally drawn on the screen.
9. The method for real-time rendering of the million-scale three-dimensional situation target according to claim 1, wherein the step 4 comprises:
step 4-1, merging all the graphic objects to be rendered in each object pool into the same batch;
step 4-2, judging whether the position of the graphic object to be rendered is shielded by the earth in a vertex shader, namely, leading a tangent line from the position of a camera to the earth, calculating the distance from the camera to the tangent point to be the farthest visible distance, and if the distance from the camera to the graphic object is smaller than the farthest visible distance, determining that the position of the graphic object is not shielded by the earth;
4-3, performing triangular patch expansion in a geometric shader aiming at icon graphs and label graphs in the graphic objects to form a rectangle which is composed of 2 triangles and has a specified pixel width and a specified pixel height; calculating a normal vector of a triangular surface in a geometric shader for illumination calculation aiming at a model graph in the graph object;
and 4-4, multiplying the vertex of each graphic object by a world matrix and a viewport projection matrix to obtain a final rendered screen coordinate, and coloring the pixel in a pixel shader according to the texture to obtain the three-dimensional situation drawing result.
CN202010933917.2A 2020-09-08 2020-09-08 Real-time drawing method of million-magnitude three-dimensional situation target Active CN112184864B (en)

Publications (2)

Publication Number Publication Date
CN112184864A CN112184864A (en) 2021-01-05
CN112184864B true CN112184864B (en) 2022-09-13




