CN110838161B - Method for aggregating large-batch graph nodes of OSG in three-dimensional scene

Method for aggregating large-batch graph nodes of OSG in three-dimensional scene

Info

Publication number
CN110838161B
CN110838161B (application CN201911047745.2A)
Authority
CN
China
Prior art keywords
aggregation
data structure
traversing
nodes
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911047745.2A
Other languages
Chinese (zh)
Other versions
CN110838161A (en)
Inventor
王茂元
何振
甘双喜
关童
张旭
高润民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Hengge Digital Technology Co ltd
Original Assignee
Xi'an Hengge Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Hengge Digital Technology Co ltd filed Critical Xi'an Hengge Digital Technology Co ltd
Priority to CN201911047745.2A priority Critical patent/CN110838161B/en
Publication of CN110838161A publication Critical patent/CN110838161A/en
Application granted granted Critical
Publication of CN110838161B publication Critical patent/CN110838161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/32 Image data format

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method for aggregating large batches of OSG graph nodes in a three-dimensional scene comprises the following steps: step 1, defining and declaring the data structure types; step 2, gridding the three-dimensional projection coordinate range into regions; step 3, partitioning all nodes into regions according to the projection coordinate positions of the graphic nodes and computing the screen coordinates of the corresponding graphic nodes; step 4, traversing the regional grids; step 5, repeating step 4 until every node in each regional grid is in an aggregated state; step 6, computing the position of the graph nodes in each aggregation set to obtain the aggregation points; and step 7, controlling whether the graphic nodes in each aggregation set are rendered and rendering the aggregated graphic nodes.

Description

Method for aggregating large-batch graph nodes of OSG in three-dimensional scene
Technical Field
The invention belongs to the technical field of geographic information systems, and particularly relates to a method for aggregating large-scale graph nodes of an OSG in a three-dimensional scene.
Background
In the geographic information system (GIS) industry, massive data volumes have always been a defining feature of geographic data: both raster/vector data and model business data easily reach the terabyte level. To view such huge data sets intuitively and conveniently, three-dimensional visualization technology has developed rapidly, and OpenSceneGraph (OSG) has been widely adopted as an efficient open-source three-dimensional engine. In a three-dimensional scene, however, the rendering frame rate drops as the number of rendered graphic nodes grows, and once that number exceeds a certain threshold (determined by machine performance) the scene stutters or even freezes. Moreover, when the viewpoint is pulled far away, an excessive number of graphic nodes is rendered: rendering efficiency becomes extremely low, many graphic nodes overlap one another, and the human eye sees only a cluttered jumble, so the display quality is very poor.
To address the problem of large numbers of graphic nodes being displayed in an overlapping manner in a scene, the usual approach has been to improve rendering efficiency, in two main ways. The first is to control rendering with view clipping: as the name suggests, graphic nodes outside the screen display area are hidden from rendering; for example, in a scene containing a three-dimensional digital earth, the graphic nodes on the far side of the earth are hidden. Although this can improve three-dimensional rendering efficiency under certain conditions, it has strong limitations: if the number of graphic nodes still displayed on screen after clipping exceeds the threshold, rendering efficiency in the scene remains very low and the scene may still stall. The second is to use Levels of Detail (LOD), displaying or hiding graphic nodes at different levels, usually combined with view clipping to cull the graphic nodes outside the view. Both methods are applicable only in certain situations; neither solves the overlapping-display problem, nor the requirement in some large-scale target display scenes that the target graphics must always remain visible and be displayed attractively. A new scheme is therefore needed.
Disclosure of Invention
The invention aims to provide a method for aggregating large batches of OSG graph nodes in a three-dimensional scene, so as to solve the above problems.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a method for aggregation of OSG mass graph nodes in a three-dimensional scene, comprising the steps of:
step 1, defining and declaring a data structure type;
step 2, gridding the three-dimensional projection coordinate range into regions;
step 3, partitioning all nodes into regions according to the projection coordinate positions of the graphic nodes and computing the screen coordinates of the corresponding graphic nodes;
step 4, traversing the regional grids according to the following rule: take the screen coordinates of a non-aggregated graphic node in the current grid as a reference point, traverse all graphic nodes in the current grid and its adjacent grids, compute the pixel distance between each node's screen coordinates and the reference point, compare it with the set pixel range value to decide whether the node is aggregated, add aggregated nodes to an aggregation set, and mark them as aggregated;
step 5, repeating step 4 until every node in each regional grid is in an aggregated state;
step 6, computing the position of the graph nodes in each aggregation set to obtain the aggregation points;
and step 7, controlling whether the graphic nodes in each aggregation set are rendered and rendering the aggregated graphic nodes.
Further, step 1 specifically comprises:
A. defining a graphic node data structure type containing a graphic node unique number, projection coordinates, screen coordinates and an initial aggregation state, initialized to not aggregated;
B. defining an aggregation node data structure type containing an aggregation unique number, an aggregation center point, an aggregation node mapping table and an aggregation state;
C. defining a gridded region data structure type containing a region index number and a set of original graph node data structures;
D. declaring a container whose elements are of the aggregation node data structure type, referred to as the aggregation data set;
E. declaring a container whose elements are of the graph node data structure type, referred to as the graph original data set;
F. declaring a container whose elements are of the gridded region data structure type, referred to as the grid region data set.
Further, step 2 is specifically to initialize the length of the grid region data set according to the set numbers of rows and columns, the length of the data set being the product of the row count and the column count.
Further, step 3 specifically comprises:
A. acquiring all node information from the graph node manager;
B. traversing the graph node information one by one, extracting each node's unique number and position information, and converting the position information of the graph node into projection coordinates and screen coordinates;
C. judging whether the screen coordinates are within the current screen display range; if so, creating a graphic node data structure and writing the extracted unique number and the computed projection and screen coordinate values into it;
D. judging whether the x and y values of the projection coordinates are within the projection coordinate range [-1, 1]; if so, computing the row and column index values from the set row and column counts:
column index value = (projection coordinate x value + 1.0) / 2.0 × total column count;
row index value = (projection coordinate y value + 1.0) / 2.0 × total row count;
if the computed column index value is greater than or equal to the total column count, the column index value is set to the total column count minus 1; otherwise the computed value is used. Likewise, if the computed row index value is greater than or equal to the total row count, the row index value is set to the total row count minus 1; otherwise the computed value is used. The interval index value is then computed as:
interval index value = row index value × total column count + column index value;
E. accessing the corresponding gridded region data structure according to the interval index value and adding the newly created graphic node data structure to the set of original graph node data structures in that gridded region data structure.
Further, step 4 specifically comprises: traversing the updated grid region data set one by one and, for each region, computing the subscripts of the surrounding adjacent regions from the subscript of the current region to form a subscript set;
A. taking out the set of original graph node data structures in the gridded region data structure for the current subscript and traversing the set element by element;
B. when traversing an element, checking the aggregation state in its graph node data structure. If it is not aggregated, an aggregation node data structure is created and the screen coordinates in the current graph node data structure are taken as the reference aggregation point. All elements of the grid region are then traversed against this reference point: for each traversed data structure, the squared pixel distance between the reference aggregation point and that structure's screen coordinate point is computed and compared with the square of the set aggregation pixel value. If the squared distance is smaller, the node lies within the aggregation range of the reference aggregation point, so its aggregation state is changed to aggregated, and its unique number and screen coordinates are extracted and added to the aggregation node mapping table of the current aggregation node data structure, with the unique number as the key and the screen coordinates as the value. After the current grid region has been traversed, the surrounding adjacent regions are traversed in turn against the same reference aggregation point, taking out their sets of original graph node data structures and applying the same test;
C. steps A and B are repeated until all grid regions have been traversed.
Further, step 5 specifically comprises: traversing the aggregation data set obtained in step 4 again, obtaining one by one the aggregation node mapping table in each aggregation node data structure of the aggregation data set, traversing the mapping table, computing the aggregation center point as the centroid of the region's points, and updating the aggregation center point value in the current aggregation node data structure.
Further, step 6 specifically comprises: traversing the processed aggregation data, obtaining each aggregation center point, converting it into scene coordinates, and rendering the corresponding node into the scene.
Compared with the prior art, the invention has the following technical effects:
1. The limitation of improving three-dimensional rendering efficiency purely by clipping is broken. In the traditional method, with a large number of targets loaded and clipping applied, the rendering frame rate is 8.56 frames per second when the viewpoint is zoomed out to show the whole earth and 21.18 frames per second when it is zoomed in to a certain height; the frame rate improves somewhat, but not by a large margin. With the present invention and the same large number of targets loaded, the frame rate is 60.49 frames per second when zoomed out to show the whole earth and 60.23 frames per second when zoomed in to a certain height, i.e. rendering runs at close to full frame rate whether zoomed in or out.
2. The problem of overlapping display of graph nodes is solved. When a large number of targets are displayed in a scene, the graph nodes overlap severely; the method resolves this directly, so the scene is displayed cleanly.
3. Rendering efficiency is improved when a large number of graphic nodes are loaded.
Drawings
FIG. 1 is a schematic diagram before aggregation;
FIG. 2 is a schematic diagram before aggregation;
FIG. 3 is an enlarged schematic view of area A of FIG. 2;
FIG. 4 is a schematic diagram of the aggregation;
FIG. 5 is a schematic diagram of the aggregation;
FIG. 6 is a schematic diagram of the aggregation;
FIG. 7 is a flow chart of the method of the present invention;
fig. 8 is an example display diagram.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
Referring to fig. 1, a method for aggregating large batches of OSG graph nodes in a three-dimensional scene comprises the following steps:
step 1, defining and declaring a data structure type;
step 2, gridding the three-dimensional projection coordinate range into regions;
step 3, partitioning all nodes into regions according to the projection coordinate positions of the graphic nodes and computing the screen coordinates of the corresponding graphic nodes;
step 4, traversing the regional grids according to the following rule: take the screen coordinates of a non-aggregated graphic node in the current grid as a reference point, traverse all graphic nodes in the current grid and its adjacent grids, compute the pixel distance between each node's screen coordinates and the reference point, compare it with the set pixel range value to decide whether the node is aggregated, add aggregated nodes to an aggregation set, and mark them as aggregated;
step 5, repeating step 4 until every node in each regional grid is in an aggregated state;
step 6, computing the position of the graph nodes in each aggregation set to obtain the aggregation points;
and step 7, controlling whether the graphic nodes in each aggregation set are rendered and rendering the aggregated graphic nodes.
Step 1 specifically comprises the following (a minimal C++ sketch of these types follows the list):
A. defining a graphic node data structure type containing a graphic node unique number, projection coordinates, screen coordinates and an initial aggregation state, initialized to not aggregated;
B. defining an aggregation node data structure type containing an aggregation unique number, an aggregation center point, an aggregation node mapping table and an aggregation state;
C. defining a gridded region data structure type containing a region index number and a set of original graph node data structures;
D. declaring a container whose elements are of the aggregation node data structure type, referred to as the aggregation data set;
E. declaring a container whose elements are of the graph node data structure type, referred to as the graph original data set;
F. declaring a container whose elements are of the gridded region data structure type, referred to as the grid region data set.
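The sketch below gives one possible C++ rendering of these data structure types and containers. All identifiers (GraphNode, AggregationNode, GridRegion and the three container variables) are illustrative assumptions, not names defined by the patent or by OSG.

```cpp
// A minimal sketch of the step 1 data structures (illustrative names).
#include <map>
#include <string>
#include <utility>
#include <vector>

struct GraphNode {
    std::string id;           // graphic node unique number
    double projX = 0.0;       // projection coordinates, expected in [-1, 1]
    double projY = 0.0;
    double screenX = 0.0;     // screen (pixel) coordinates
    double screenY = 0.0;
    bool aggregated = false;  // initial aggregation state: not aggregated
};

struct AggregationNode {
    std::string id;                       // aggregation unique number
    double centerX = 0.0, centerY = 0.0;  // aggregation center point
    std::map<std::string, std::pair<double, double>> members;  // node id -> screen coords
    bool aggregated = false;              // aggregation state
};

struct GridRegion {
    int index = 0;                 // region index number
    std::vector<GraphNode> nodes;  // set of original graph node data structures
};

// Containers declared in steps D-F.
std::vector<AggregationNode> aggregationDataSet;  // aggregation data set
std::vector<GraphNode>       graphDataSet;        // graph original data set
std::vector<GridRegion>      gridRegionDataSet;   // grid region data set
```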
Step 2 is specifically to initialize the length of the grid region data set according to the set numbers of rows and columns, the length of the data set being the product of the row count and the column count (see the sketch below).
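Under the same assumed types, step 2 amounts to sizing the grid region data set to rows × columns. The row and column counts below are assumed configuration constants, not values stated in the patent.

```cpp
// Assumed configuration: the set numbers of grid rows and columns.
const int kTotalRows = 10;
const int kTotalCols = 20;

// Step 2: the grid region data set length is the product of rows and columns.
void initGridRegions(std::vector<GridRegion>& grid) {
    grid.assign(kTotalRows * kTotalCols, GridRegion{});
    for (std::size_t i = 0; i < grid.size(); ++i)
        grid[i].index = static_cast<int>(i);  // region index number
}
```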
Step 3 is specifically as follows (a sketch of the index computation follows the list):
A. acquiring all node information from the graph node manager;
B. traversing the graph node information one by one, extracting each node's unique number and position information, and converting the position information of the graph node into projection coordinates and screen coordinates;
C. judging whether the screen coordinates are within the current screen display range; if so, creating a graphic node data structure and writing the extracted unique number and the computed projection and screen coordinate values into it;
D. judging whether the x and y values of the projection coordinates are within the projection coordinate range [-1, 1]; if so, computing the row and column index values from the set row and column counts:
column index value = (projection coordinate x value + 1.0) / 2.0 × total column count;
row index value = (projection coordinate y value + 1.0) / 2.0 × total row count;
if the computed column index value is greater than or equal to the total column count, the column index value is set to the total column count minus 1; otherwise the computed value is used. Likewise, if the computed row index value is greater than or equal to the total row count, the row index value is set to the total row count minus 1; otherwise the computed value is used. The interval index value is then computed as:
interval index value = row index value × total column count + column index value;
E. accessing the corresponding gridded region data structure according to the interval index value and adding the newly created graphic node data structure to the set of original graph node data structures in that gridded region data structure.
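A sketch of the index computation in steps D and E, reusing the GraphNode/GridRegion types and the kTotalRows/kTotalCols constants assumed above; the function names are illustrative.

```cpp
// Step 3.D: map projection coordinates in [-1, 1] to a grid cell, clamping the
// row/column index to the last row/column when it reaches the total count.
int intervalIndexFor(double projX, double projY) {
    int col = static_cast<int>((projX + 1.0) / 2.0 * kTotalCols);
    int row = static_cast<int>((projY + 1.0) / 2.0 * kTotalRows);
    if (col >= kTotalCols) col = kTotalCols - 1;
    if (row >= kTotalRows) row = kTotalRows - 1;
    return row * kTotalCols + col;  // interval index value
}

// Step 3.E: add the newly created graphic node structure to the matching region.
void binNode(const GraphNode& node, std::vector<GridRegion>& grid) {
    if (node.projX < -1.0 || node.projX > 1.0 ||
        node.projY < -1.0 || node.projY > 1.0)
        return;  // outside the projection coordinate range, skip
    grid[intervalIndexFor(node.projX, node.projY)].nodes.push_back(node);
}
```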
Step 4 is specifically as follows (a C++ sketch of this traversal follows the list): traversing the updated grid region data set one by one and, for each region, computing the subscripts of the surrounding adjacent regions from the subscript of the current region to form a subscript set;
A. taking out the set of original graph node data structures in the gridded region data structure for the current subscript and traversing the set element by element;
B. when traversing an element, checking the aggregation state in its graph node data structure. If it is not aggregated, an aggregation node data structure is created and the screen coordinates in the current graph node data structure are taken as the reference aggregation point. All elements of the grid region are then traversed against this reference point: for each traversed data structure, the squared pixel distance between the reference aggregation point and that structure's screen coordinate point is computed and compared with the square of the set aggregation pixel value. If the squared distance is smaller, the node lies within the aggregation range of the reference aggregation point, so its aggregation state is changed to aggregated, and its unique number and screen coordinates are extracted and added to the aggregation node mapping table of the current aggregation node data structure, with the unique number as the key and the screen coordinates as the value. After the current grid region has been traversed, the surrounding adjacent regions are traversed in turn against the same reference aggregation point, taking out their sets of original graph node data structures and applying the same test;
C. steps A and B are repeated until all grid regions have been traversed.
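A sketch of this traversal under the same assumed types; the neighbourIndices() helper, the 8-neighbourhood it returns and the aggregation pixel radius parameter are illustrative assumptions, and the pass would be repeated (step 5) until no un-aggregated node remains.

```cpp
// Assumed helper: indices of the up-to-8 grid cells adjacent to a cell.
std::vector<int> neighbourIndices(int cell) {
    std::vector<int> result;
    int row = cell / kTotalCols, col = cell % kTotalCols;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            int r = row + dr, c = col + dc;
            if ((dr != 0 || dc != 0) && r >= 0 && r < kTotalRows && c >= 0 && c < kTotalCols)
                result.push_back(r * kTotalCols + c);
        }
    return result;
}

// Step 4: every not-yet-aggregated node seeds an aggregation; nodes in the same
// cell and its neighbours whose squared pixel distance to the reference point is
// below the squared aggregation radius join that aggregation and are marked.
void aggregateCells(std::vector<GridRegion>& grid,
                    std::vector<AggregationNode>& aggregations,
                    double aggregationPixelRadius) {
    const double r2 = aggregationPixelRadius * aggregationPixelRadius;
    for (GridRegion& region : grid) {
        for (GraphNode& seed : region.nodes) {
            if (seed.aggregated) continue;
            AggregationNode agg;
            agg.id = "agg_" + seed.id;                               // aggregation unique number
            const double refX = seed.screenX, refY = seed.screenY;  // reference aggregation point

            std::vector<int> cells = neighbourIndices(region.index);
            cells.push_back(region.index);                           // current cell plus neighbours
            for (int ci : cells) {
                for (GraphNode& cand : grid[ci].nodes) {
                    if (cand.aggregated) continue;
                    const double dx = cand.screenX - refX, dy = cand.screenY - refY;
                    if (dx * dx + dy * dy < r2) {                    // within aggregation range
                        cand.aggregated = true;
                        agg.members[cand.id] = {cand.screenX, cand.screenY};
                    }
                }
            }
            aggregations.push_back(agg);
        }
    }
}
```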
Step 5 is specifically as follows (see the centroid sketch below): traversing the aggregation data set obtained in step 4 again, obtaining one by one the aggregation node mapping table in each aggregation node data structure of the aggregation data set, traversing the mapping table, computing the aggregation center point as the centroid of the recorded points, and updating the aggregation center point value in the current aggregation node data structure.
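A sketch of the centroid computation under the same assumed types: the aggregation center point is taken as the arithmetic mean of the screen coordinates stored in each aggregation node's mapping table.

```cpp
// Step 5 (centroid): average the member screen coordinates of each aggregation.
void computeAggregationCenters(std::vector<AggregationNode>& aggregations) {
    for (AggregationNode& agg : aggregations) {
        if (agg.members.empty()) continue;
        double sumX = 0.0, sumY = 0.0;
        for (const auto& entry : agg.members) {
            sumX += entry.second.first;   // member screen x
            sumY += entry.second.second;  // member screen y
        }
        agg.centerX = sumX / agg.members.size();
        agg.centerY = sumY / agg.members.size();
        agg.aggregated = true;            // this set is now fully aggregated
    }
}
```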
Step 6 is specifically as follows (see the rendering sketch below): traversing the processed aggregation data, obtaining each aggregation center point, converting it into scene coordinates, and rendering the corresponding node into the scene.
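A minimal sketch of this final step, assuming the types above and ordinary OSG scene-graph calls: the aggregation center (a screen coordinate) is converted back to scene coordinates by inverting the camera's view, projection and window matrices, and a small sphere stands in for whatever icon or billboard the application actually uses to depict an aggregation point.

```cpp
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Group>
#include <osg/MatrixTransform>
#include <osg/Shape>
#include <osg/ShapeDrawable>

// Convert a screen (window) coordinate back to scene coordinates by inverting
// the camera's view * projection * window matrix.
osg::Vec3d screenToScene(const osg::Camera* camera, double sx, double sy, double depth = 1.0) {
    osg::Matrixd vpw = camera->getViewMatrix() *
                       camera->getProjectionMatrix() *
                       camera->getViewport()->computeWindowMatrix();
    return osg::Vec3d(sx, sy, depth) * osg::Matrixd::inverse(vpw);
}

// Step 6: attach one aggregated graphic node per aggregation set to the scene;
// the sphere drawable is purely a placeholder.
void renderAggregations(osg::Group* root, const osg::Camera* camera,
                        const std::vector<AggregationNode>& aggregations) {
    for (const AggregationNode& agg : aggregations) {
        const osg::Vec3d pos = screenToScene(camera, agg.centerX, agg.centerY);
        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        geode->addDrawable(new osg::ShapeDrawable(new osg::Sphere(osg::Vec3(), 5.0f)));
        osg::ref_ptr<osg::MatrixTransform> xform = new osg::MatrixTransform;
        xform->setMatrix(osg::Matrixd::translate(pos));
        xform->addChild(geode.get());
        root->addChild(xform.get());  // this one node replaces its aggregated members
    }
}
```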
As shown in fig. 1, many nodes are rendered on top of one another. Area A in fig. 2 is selected and enlarged in fig. 3, and the area is gridded to obtain fig. 4. An aggregation pixel value R is set; assuming point a is the aggregation reference point, the pixel distances between the reference point and the screen coordinates of all graph nodes in the current grid and its adjacent grids are traversed and compared with the set pixel range value to decide whether each node is aggregated, giving an aggregation set around point a that contains B, C, D and so on. All aggregation sets in the other grid regions are obtained in the same way, and the centroid of each aggregation set is computed to obtain all aggregation points; the effect after aggregation is shown in fig. 6. Comparing fig. 1 before aggregation with fig. 6 after aggregation, the aggregated result is visually much cleaner and the number of rendered graphic nodes is reduced.
1) For the same batch of graph nodes, the display effect with and without aggregation is shown in the figure. The aggregated data reduces the number of nodes rendered in the scene to a certain extent, lightening the rendering load; the display effect is better and the congestion of the original rendering is gone. Fig. 8 shows the nodes rendered after aggregation together with the rendering of the original data.
2) On the same machine, a batch of 30,000 graphic nodes was rendered first with the conventional (non-aggregated) scheme and then with the aggregation scheme. With the maximum number of graphic nodes rendered on screen, the results were:
Rendering mode         Scene rendering frame rate (average)
Conventional method    8.5
Aggregation            50
FreeEarth is a freely extensible secondary-development GIS platform for multiple industries, built on OpenSceneGraph and osgEarth. Applying the present optimization to large-batch graphic node rendering on this platform enables batched, lightweight display, can gradually replace the LOD method, and brings a marked improvement in rendering efficiency and display quality.

Claims (5)

1. A method for aggregating large batches of OSG graph nodes in a three-dimensional scene, comprising the following steps:
step 1, defining and declaring a data structure type;
step 2, gridding the three-dimensional projection coordinate range into regions;
step 3, partitioning all nodes into regions according to the projection coordinate positions of the graphic nodes and computing the screen coordinates of the corresponding graphic nodes;
step 4, traversing the regional grids according to the following rule: take the screen coordinates of a non-aggregated graphic node in the current grid as a reference point, traverse all graphic nodes in the current grid and its adjacent grids, compute the pixel distance between each node's screen coordinates and the reference point, compare it with the set pixel range value to decide whether the node is aggregated, add aggregated nodes to an aggregation set, and mark them as aggregated;
step 5, repeating step 4 until every node in each regional grid is in an aggregated state;
step 6, computing the position of the graph nodes in each aggregation set to obtain the aggregation points;
step 7, controlling whether the aggregated graphic nodes are rendered and rendering the aggregated graphic nodes;
the step 1 specifically comprises the following steps:
A. defining a graphic node data structure type, including a graphic node unique number, projection coordinates, screen coordinates and an initial aggregation state, and initializing to be unpolymerized;
B. defining an aggregation node data structure type comprising an aggregation unique number, an aggregation center point, an aggregation node mapping table and an aggregation state;
C. defining a gridding area data structure type comprising an area index number and an original graph node data structure body set;
D. declaring a container which takes the data structure type of the aggregation node as an object, namely, an aggregation data set for short;
E. declaring a container which takes the data structure type of the graph node as an object, namely a graph original data set for short;
F. declaring a container which takes the type of the gridding area data structure as an object, namely a gridding area data set for short;
the step 4 is specifically as follows: traversing the updated grid region data sets one by one, calculating the subscripts of the surrounding adjacent data sets according to the subscripts of the current data sets, and generating a subscript set;
A. taking out the original graph node data structure body set in the gridding region data structure according to the current subscript, traversing the set one by one,
B. judging an aggregation state in the graph node data structure when traversing elements in a set, if the aggregation state is not aggregated, creating an aggregation node data structure, taking screen coordinates in the current graph node data structure as reference aggregation points, traversing all elements in a grid area according to the reference aggregation points, calculating pixel square differences of the reference aggregation points and the screen coordinate points in the current traversing data structure in the traversing process, comparing the square difference value with a set aggregation pixel square, if the current square difference value is smaller than the set aggregation pixel square, modifying the aggregation state in the current traversing data structure into aggregation in a reference aggregation point aggregation range, extracting unique numbers and screen coordinates in the current traversing data structure, taking the unique numbers as key values, adding the screen coordinates as values into an aggregation node mapping table in the current aggregation node data structure, continuing traversing the current grid area, traversing adjacent surrounding data sets according to the reference aggregation points successively after traversing, and continuing traversing the current data set, and acquiring the original graph node data structure of the surrounding structure when traversing the current data set;
C. the A, B steps are repeated until all the grid area traversals are completed.
2. The method for aggregating large batches of OSG graph nodes in a three-dimensional scene according to claim 1, wherein step 2 specifically comprises initializing the length of the grid region data set according to the set numbers of rows and columns, the length of the data set being the product of the row count and the column count.
3. The method for aggregating large batches of OSG graph nodes in a three-dimensional scene according to claim 1, wherein step 3 specifically comprises:
A. acquiring all node information from the graph node manager;
B. traversing the graph node information one by one, extracting each node's unique number and position information, and converting the position information of the graph node into projection coordinates and screen coordinates;
C. judging whether the screen coordinates are within the current screen display range; if so, creating a graphic node data structure and writing the extracted unique number and the computed projection and screen coordinate values into it;
D. judging whether the x and y values of the projection coordinates are within the projection coordinate range [-1, 1]; if so, computing the row and column index values from the set row and column counts:
column index value = (projection coordinate x value + 1.0) / 2.0 × total column count;
row index value = (projection coordinate y value + 1.0) / 2.0 × total row count;
if the computed column index value is greater than or equal to the total column count, the column index value is set to the total column count minus 1, otherwise the computed value is used; if the computed row index value is greater than or equal to the total row count, the row index value is set to the total row count minus 1, otherwise the computed value is used; the interval index value is then computed as:
interval index value = row index value × total column count + column index value;
E. accessing the corresponding gridded region data structure according to the interval index value and adding the newly created graphic node data structure to the set of original graph node data structures in that gridded region data structure.
4. The method for aggregating large batches of OSG graph nodes in a three-dimensional scene according to claim 1, wherein step 5 specifically comprises: traversing the aggregation data set obtained in step 4 again, obtaining one by one the aggregation node mapping table in each aggregation node data structure of the aggregation data set, traversing the mapping table, computing the aggregation center point as the centroid of the recorded points, and updating the aggregation center point value in the current aggregation node data structure.
5. The method for aggregating large batches of OSG graph nodes in a three-dimensional scene according to claim 1, wherein step 6 specifically comprises: traversing the processed aggregation data, obtaining each aggregation center point, converting it into scene coordinates, and rendering the corresponding node into the scene.
CN201911047745.2A 2019-10-30 2019-10-30 Method for aggregating large-batch graph nodes of OSG in three-dimensional scene Active CN110838161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911047745.2A CN110838161B (en) 2019-10-30 2019-10-30 Method for aggregating large-batch graph nodes of OSG in three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911047745.2A CN110838161B (en) 2019-10-30 2019-10-30 Method for aggregating large-batch graph nodes of OSG in three-dimensional scene

Publications (2)

Publication Number Publication Date
CN110838161A CN110838161A (en) 2020-02-25
CN110838161B true CN110838161B (en) 2024-01-30

Family

ID=69576176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911047745.2A Active CN110838161B (en) 2019-10-30 2019-10-30 Method for aggregating large-batch graph nodes of OSG in three-dimensional scene

Country Status (1)

Country Link
CN (1) CN110838161B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345068B (en) * 2021-06-10 2023-12-05 西安恒歌数码科技有限责任公司 Method and system for drawing war camouflage based on osgEarth


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928643B2 (en) * 2015-09-28 2018-03-27 Douglas Rogers Hierarchical continuous level of detail for three-dimensional meshes

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998043208A2 (en) * 1997-03-21 1998-10-01 Newfire, Inc. Method and apparatus for graphics processing
US8166042B1 (en) * 2008-04-14 2012-04-24 Google Inc. Height based indexing
CN101281654A (en) * 2008-05-20 2008-10-08 上海大学 Method for processing cosmically complex three-dimensional scene based on eight-fork tree
CN104599315A (en) * 2014-12-09 2015-05-06 深圳市腾讯计算机系统有限公司 Three-dimensional scene construction method and system
CN107564087A (en) * 2017-09-11 2018-01-09 南京大学 A kind of Three-D linear symbol rendering intent based on screen
CN108520557A (en) * 2018-04-10 2018-09-11 中国人民解放军战略支援部队信息工程大学 A kind of magnanimity building method for drafting of graph image fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘畅; 殷浩; 张叶廷; 谢潇; 曹振宇. A lightweight visualization method for three-dimensional pipelines based on WebGL. 地理信息世界 (Geomatics World), 2018(04), full text. *
曾俊, 陈天泽, 匡纲要. A real-time rendering method for large-scale terrain based on a binary tree structure. 计算机仿真 (Computer Simulation), 2004(11), full text. *

Also Published As

Publication number Publication date
CN110838161A (en) 2020-02-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant