CN113256784A - Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU


Info

Publication number
CN113256784A
CN113256784A (application CN202110747224.9A; granted as CN113256784B)
Authority
CN
China
Prior art keywords
voxel
data
sampling
cube
gpu
Prior art date: 2021-07-02
Legal status
Granted
Application number
CN202110747224.9A
Other languages
Chinese (zh)
Other versions
CN113256784B (en)
Inventor
任康成
池晶
白文博
冯德润
沈雷
沈文斐
武永波
余磊
Current Assignee
Geospace Information Technology Co ltd
Original Assignee
Wuda Geoinformatics Co ltd
Priority date: 2021-07-02
Filing date: 2021-07-02
Publication date: 2021-08-13
Application filed by Wuda Geoinformatics Co ltd
Priority: CN202110747224.9A
Publication of CN113256784A; application granted; publication of CN113256784B
Legal status: Active

Classifications

    • GPHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/55 Radiosity
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Abstract

The invention provides a method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU, which comprises the following steps: creating a voxel cube whose side length equals the extent of the real field data to be represented by the voxels, and performing voxel sampling on the GPU to obtain sampling data within the real field data; creating a front-face framebuffer, rendering the front faces of the voxel cube with the sampling data, and storing the front depth data into the front-face framebuffer using a non-conventional method; creating a back-face framebuffer, rendering the back faces of the voxel cube with the sampling data, and storing the back depth data into the back-face framebuffer using a non-conventional method; and creating a post-processing pass for voxel rendering, obtaining the depth data stored in the front and back framebuffers in the fragment shader, and performing stepped sampling along the current line of sight to complete rendering and display of the voxel cube. The beneficial effects of the invention are that occlusion during voxel-cube rendering is handled correctly and rendering efficiency is improved.

Description

Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU
Technical Field
The invention relates to the field of three-dimensional volume data processing, in particular to a method for performing ultra-efficient drawing of GIS space three-dimensional voxel data based on a GPU.
Background
In recent years, more and more GIS vendors use three-dimensional voxel layers to represent multi-dimensional spatial and temporal information. For example, ArcGIS Pro visualizes atmospheric and ocean data, underground geological models, and space-time cubes as three-dimensional voxel layers, presenting more intuitive and accurate analysis results in applications.
Current three-dimensional voxel rendering techniques mainly include ray casting, shear-warp, maximum intensity projection, splatting, and object-space scanning, among which ray casting is the most important and most commonly used.
The ray casting algorithm is an image-order direct volume rendering algorithm. Its basic idea is: from each pixel of the image, emit a ray along a fixed direction (usually the viewing direction); the ray passes through the whole volume and is sampled along the way to obtain color information, which is accumulated until the ray has traversed the entire volume, yielding the final rendered color.
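For concreteness, this accumulation can be sketched on the CPU as follows (a minimal TypeScript illustration; the density field, transfer function, and step size are invented placeholders, not taken from the patent):

```typescript
// Minimal front-to-back ray-casting sketch. The density field and
// transfer function below are illustrative placeholders only.
type Vec3 = [number, number, number];

// Hypothetical scalar field: a soft sphere of radius 1 at the origin.
function sampleDensity(p: Vec3): number {
  const r = Math.hypot(p[0], p[1], p[2]);
  return Math.max(0, 1 - r);
}

// Map density to a grayscale color and an opacity (a toy transfer function).
function transfer(d: number): { rgb: Vec3; a: number } {
  return { rgb: [d, d, d], a: d * 0.1 };
}

// March one ray from tNear to tFar, compositing front to back.
function castRay(origin: Vec3, dir: Vec3, tNear: number, tFar: number, step: number): Vec3 {
  const color: Vec3 = [0, 0, 0];
  let alpha = 0;
  for (let t = tNear; t < tFar && alpha < 0.99; t += step) {
    const p: Vec3 = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    const { rgb, a } = transfer(sampleDensity(p));
    const w = (1 - alpha) * a; // front-to-back "over" compositing weight
    color[0] += w * rgb[0];
    color[1] += w * rgb[1];
    color[2] += w * rgb[2];
    alpha += w;
  }
  return color;
}

console.log(castRay([0, 0, -3], [0, 0, 1], 0, 6, 0.01));
```

On the GPU the same loop runs per fragment, with the volume held in a 3D texture.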
The traditional GPU-based ray casting algorithm proceeds as follows:
First, a voxel cube with a side length of 1 is created; voxel sampling is performed on the GPU in the clip-space coordinate system by means of this cube.
Second, depth testing is enabled and the front and back faces of the voxel cube are rendered separately into framebuffers in clip space (front-face culling must be enabled when rendering the back faces), caching the front and back depth data.
Third, using direct rendering, the front and back depth data are passed into the fragment shader, which samples the voxel data repeatedly along the viewing direction, starting from the front-face depth and advancing by a fixed step until the back-face depth is reached; the voxel cube is thus rendered and displayed.
This method actually renders three times. The first two passes are fast, and the third pass does not need to test whether each point on the ray lies inside the cube, so it also runs quickly. However, when the voxel data volume is large (as with voxel information in GIS applications) and the sampling step is small, the rendering speed drops markedly.
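The three passes can be outlined schematically as below (the pass bodies are reduced to placeholders and the function names are invented for illustration; this is not the patent's implementation):

```typescript
// Schematic outline of the traditional three-pass flow; the pass
// bodies are no-op placeholders so only the control flow is shown.
type DepthTexture = { label: string };

function renderCubeFaces(cullFrontFaces: boolean): DepthTexture {
  // Placeholder for a GPU pass: draw the unit cube with depth testing
  // on, culling front faces when rendering the back, and cache depth.
  return { label: cullFrontFaces ? "backDepth" : "frontDepth" };
}

function rayMarchPass(front: DepthTexture, back: DepthTexture): void {
  // Placeholder for the fragment-shader pass that samples the volume
  // from the front depth to the back depth at a fixed step.
  console.log(`march from ${front.label} to ${back.label}`);
}

const front = renderCubeFaces(false); // pass 1: front faces
const back = renderCubeFaces(true);   // pass 2: back faces, front-face culling
rayMarchPass(front, back);            // pass 3: per-fragment ray marching
```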
Disclosure of Invention
The invention provides a method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU, which mainly solves the following technical problems:
(1) incorrect display results when the front face of the voxel cube is completely (or partially) behind the viewpoint;
(2) incorrect display results when the back face of the voxel cube is completely (or partially) occluded by other objects in the scene;
(3) low rendering efficiency when the voxel data volume is large and the sampling step is small.
A method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU comprises the following steps:
S101: creating a voxel cube whose side length equals the extent of the real field data to be represented by the voxels, and performing voxel sampling on the GPU to obtain sampling data within the real field data;
S102: creating a front-face framebuffer, rendering the front faces of the voxel cube with the sampling data, and storing the front depth data into the front-face framebuffer using a non-conventional method;
S103: creating a back-face framebuffer, rendering the back faces of the voxel cube with the sampling data, and storing the back depth data into the back-face framebuffer using a non-conventional method;
S104: creating a post-processing pass for voxel rendering, obtaining the depth data stored in the front and back framebuffers in the fragment shader, and performing stepped sampling along the current line of sight to complete rendering and display of the voxel cube.
Further, the sampling data in step S101 includes: actual scene depth data, front depth data of the voxel cube, and back depth data of the voxel cube.
Further, in step S102 and step S103, the depth test is disabled when the front and back framebuffers are created and used.
Further, the front and back depth data in steps S102-S103 may take any value between -50,000 and +50,000 meters.
Further, in step S104, when sampling along the current line of sight, whether to terminate rendering early is additionally determined from the actual scene depth value.
The beneficial effects provided by the invention are as follows:
(1) the rendering errors that occur when the front face of the voxel cube is completely (or partially) behind the viewpoint are corrected;
(2) the rendering errors that occur when the back face of the voxel cube is completely (or partially) occluded by other objects in the scene are corrected;
(3) rendering efficiency is improved when the voxel data volume is large and the sampling step is small.
Drawings
FIG. 1 is a flow chart of a method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to the present invention;
FIG. 2 is a schematic diagram of a conventional GPU-based ray casting method for creating a voxel cube;
FIG. 3 is a case where the front face of the voxel cube is fully (or partially) behind the viewpoint;
FIG. 4 is a case where the back of the voxel cube is completely (or partially) occluded by other objects in the scene;
FIG. 5 is a schematic illustration of depth data recorded by a conventional method;
FIG. 6 is a schematic diagram of the process of rendering a voxel cube.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to FIG. 1, a method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU includes the following steps:
S101: creating a voxel cube whose side length equals the extent of the real field data to be represented by the voxels, and performing voxel sampling on the GPU to obtain sampling data within the real field data;
In step S101, voxel sampling is performed on the GPU in the world coordinate system.
The sampling data in step S101 includes: actual scene depth data, front depth data of the voxel cube, and back depth data of the voxel cube. During actual sampling, the sampling data also includes the field values to be expressed, such as air quality, wind speed, and illumination intensity; these are not strongly related to the core of the method.
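As an illustration, the per-fragment sampling data might be organized as follows (a hypothetical layout; all field names are invented, not taken from the patent):

```typescript
// Hypothetical layout of the data gathered during voxel sampling.
interface VoxelSamplingData {
  sceneDepth: number;      // depth of the actual scene at this fragment (meters)
  cubeFrontDepth: number;  // world-space depth of the voxel cube's front face
  cubeBackDepth: number;   // world-space depth of the voxel cube's back face
  field: {                 // the real field values being visualized, e.g.:
    airQuality?: number;
    windSpeed?: number;
    illuminance?: number;
  };
}

const example: VoxelSamplingData = {
  sceneDepth: 1250.0,
  cubeFrontDepth: 980.0,
  cubeBackDepth: 4300.0,
  field: { airQuality: 35, windSpeed: 4.2 },
};
console.log(example);
```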
To better explain the differences between the present application and the conventional method, the conventional process for creating a voxel cube is described first:
Referring to FIG. 2, FIG. 2 is a schematic diagram of the conventional GPU-based ray casting method for creating a voxel cube.
The cube in FIG. 2 is the voxel cube created by the conventional method. Its side length is 1, and it is entirely contained in the clip-space coordinate system, i.e., between the near plane and the far plane in the figure, with the viewpoint in front of the near plane. In this case, shown in FIG. 2, the voxel cube can be rendered completely.
However, when the voxel cube cannot be entirely contained in the clip space, i.e., when only part of the voxel cube lies inside the clip space, the situations of FIG. 3 and FIG. 4 may occur.
Referring to FIG. 3, FIG. 3 shows the case where the front face of the voxel cube is completely (or partially) behind the viewpoint (clipped by the near plane).
The upper half of FIG. 3 shows the prior-art result. In this case, the leftmost front corner of the voxel cube extends past the near plane and is clipped, while the cube's back faces are culled by the graphics pipeline, so no fragments of the voxel cube are rasterized in the near-plane-clipped region. This leaves a triangular hole (right side of FIG. 3); if the conventional method is still used, the final rendering is incorrect.
Referring to FIG. 4, FIG. 4 shows the case where the back face of the voxel cube is completely (or partially) occluded by other objects in the scene.
The upper half of FIG. 4 shows the conventional prior-art result. In this case, part of the voxel cube extends into the interior of an opaque object. Because the voxel cube is rendered independently, the conventional method has no way to know during rendering that part of the sampling segment is invalid (it has entered the opaque object and is occluded from view), so sampling continues until the back face of the voxel cube is reached. This adds redundant samples and makes the final rendering incorrect.
Two features of the conventional method are summarized here: (1) sampling is based on the clip-space coordinate system, so the side length of the voxel cube can only be 1; (2) during sampling, data of other models in the actual scene cannot be obtained in real time, so occlusion during voxel-cube rendering cannot be handled.
In the present invention, the creation of the voxel cube has two corresponding characteristics: (1) sampling is based on the world-space coordinate system, and the side length of the voxel cube equals the extent of the real field data to be represented; (2) during sampling, data of other models in the actual scene (such as occluders) can be acquired in real time.
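A minimal sketch of building such a world-space voxel cube from the field's bounding extent (the coordinates and extents are invented for illustration):

```typescript
// Build a world-space voxel cube whose side lengths equal the extent
// of the real field data. minCorner/extent values are examples only.
type Vec3 = [number, number, number];

function makeVoxelCubeVertices(minCorner: Vec3, extent: Vec3): Float32Array {
  const [x, y, z] = minCorner;
  const [dx, dy, dz] = extent;
  // Eight corners of the axis-aligned box in world coordinates.
  const corners: Vec3[] = [
    [x, y, z], [x + dx, y, z], [x + dx, y + dy, z], [x, y + dy, z],
    [x, y, z + dz], [x + dx, y, z + dz], [x + dx, y + dy, z + dz], [x, y + dy, z + dz],
  ];
  return new Float32Array(corners.flat());
}

// Example: a 5000 m x 5000 m x 2000 m field volume anchored at the origin.
const cubeVertices = makeVoxelCubeVertices([0, 0, 0], [5000, 5000, 2000]);
console.log(cubeVertices.length / 3, "vertices");
```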
S102: creating a front-face framebuffer, rendering the front faces of the voxel cube with the sampling data, and storing the front depth data into the front-face framebuffer using a non-conventional method;
S103: creating a back-face framebuffer, rendering the back faces of the voxel cube with the sampling data, and storing the back depth data into the back-face framebuffer using a non-conventional method;
Again, to better explain the differences between the present application and the conventional method, the rendering of the voxel cube's front and back faces is explained as follows:
Referring to FIG. 5, FIG. 5 is a schematic diagram of the depth data recorded by the conventional method. In the conventional method, the voxel cube is rendered as a unit cube (side length 1) that exactly coincides with the clip space, so its depth data never leave the clip space: there are no depth values smaller than 0 or larger than 1.
By contrast, the voxel cube created in step S101 of the present invention has side lengths equal to the extent of the real field data and is defined in the world-space coordinate system, so its depth data may exceed the clip space; a non-conventional depth computation must therefore be used to measure and record them.
In the present invention, the front and back depth data in steps S102-S103 may take any value between -50,000 and +50,000 meters.
In addition, in steps S102-S103 the depth test that the conventional method requires is disabled; the front-face depth is rendered in world space, and depth values that exceed the clip region are stored into the corresponding framebuffer in a non-conventional way. As a result, the front and back depth information of the voxel cube can be recorded completely even when the cube does not lie entirely within the current visible region.
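A sketch of how such non-conventional depth storage might look in WebGL2 (assumptions for illustration: an R32F color attachment holds a signed view-direction distance in meters, and all variable and uniform names are invented):

```typescript
// Store world-space depths, which may leave the [0,1] clip range
// (e.g. up to +/-50000 m), in a float color attachment with the
// depth test disabled. Requires EXT_color_buffer_float in WebGL2.
function createDepthCaptureTarget(gl: WebGL2RenderingContext, w: number, h: number) {
  if (!gl.getExtension("EXT_color_buffer_float")) {
    throw new Error("float render targets unsupported");
  }
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.R32F, w, h);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

  gl.disable(gl.DEPTH_TEST); // the conventional depth test is switched off
  return { fbo, tex };
}

// Fragment shader for the face passes: write the signed view-space
// distance of the cube face into the red channel instead of a [0,1]
// depth, so values behind the viewpoint or beyond the far plane survive.
const depthCaptureFS = `#version 300 es
precision highp float;
in vec3 vWorldPos;        // interpolated world-space position of the face
uniform vec3 uCameraPos;  // camera position in world space
uniform vec3 uViewDir;    // normalized view direction
out vec4 outDepth;
void main() {
  float d = dot(vWorldPos - uCameraPos, uViewDir); // meters, may be negative
  outDepth = vec4(d, 0.0, 0.0, 1.0);
}`;
```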
S104: creating a post-processing pass for voxel rendering, obtaining the depth data stored in the front and back framebuffers in the fragment shader, and performing stepped sampling along the current line of sight to complete rendering and display of the voxel cube.
When sampling along the current line of sight, whether to terminate rendering early is determined from the actual scene depth value.
Again, the differences from the conventional method during rendering are emphasized:
In the conventional method, the regular front and back depth data are passed into the fragment shader, which samples between these two depths using direct rendering. Because depth information of the scene's other models cannot be obtained, sampling cannot stop when an obstacle (a building, etc.) is encountered, and an incorrect result is rendered.
Here, the concept of direct rendering is explained:
Direct rendering: the three-dimensional rendering engine renders the objects in the current scene one by one until all objects are rendered; this is called direct rendering. During direct rendering, the objects in the scene cannot obtain each other's information.
The lower halves of FIG. 3 and FIG. 4 show the results of the method of the present invention in the two cases.
As shown in the lower half of FIG. 3, compared with the existing result, the present invention adopts the new processes of "post-processing rendering" and "non-conventional front/back depth caching", so the fragments of the clipped region are still sampled along the view direction inside the cube, producing the corresponding voxel fragment pixels and ensuring a correct rendering result.
As shown in the lower half of FIG. 4, because the new processes of "post-processing rendering" and "non-conventional front/back depth caching" are adopted, whether a sampling point is occluded by an object can be computed from the cached scene depth information when the voxels are rendered, ensuring a correct rendering result.
Referring to FIG. 6, FIG. 6 is a schematic diagram of the process of rendering a voxel cube.
In the GIS application of FIG. 6, the near-to-far-plane distance of a typical scene region exceeds 5 km. A voxel cube used in such a scene must cover the region and is therefore necessarily large, while the data precision of the real voxel information in the region is generally on the order of meters.
Sampling in the existing ray casting algorithm starts at the front face of the voxel cube and ends at its back face. In FIG. 6 this corresponds to ray-cast sampling from point P1 to point P4: the corresponding fragment is sampled 5000 times per frame (sampling step 1 meter, sampling distance 5000 meters), which lowers the rendering efficiency of every frame.
In the present application, a post-processing rendering pass is adopted, and the actual scene depth value is used to decide whether to terminate rendering early, instead of relying only on the front and back depth values as the conventional method does.
here, the definition of the post-processing rendering process is supplemented:
and (3) post-treatment: some custom rendering processes, each of which is called post-processing, follow directly after rendering.
According to the invention, because the new processes of post-processing drawing and irregular depth caching on the front and back sides are adopted, whether the current sampling point is shielded by an object can be judged in the sampling process by virtue of scene depth information, so that the sampling is ended in advance (starting at a point P1 and stopping at a point P2, and each frame is only subjected to 2000 times of sampling), and the drawing speed is greatly improved.
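A sketch of the post-processing fragment shader's marching loop with this early termination (the uniform names, volume mapping, and toy transfer function are invented for illustration; all three depth textures are assumed to use the non-conventional meter-valued encoding described above):

```typescript
// Post-processing ray march with early termination against the
// cached scene depth. Names and transfer function are illustrative.
export const voxelMarchFS = `#version 300 es
precision highp float;
uniform sampler2D uFrontDepth;   // non-conventional front-face depths (meters)
uniform sampler2D uBackDepth;    // non-conventional back-face depths (meters)
uniform sampler2D uSceneDepth;   // actual scene depth, same encoding
uniform highp sampler3D uVolume; // the voxel field
uniform vec3  uCameraPos;        // world-space camera position
uniform vec3  uVolMin;           // world-space min corner of the voxel cube
uniform vec3  uVolSize;          // world-space extent of the voxel cube
uniform float uStep;             // sampling step, e.g. 1.0 meter
in vec2 vUv;
in vec3 vRayDir;                 // normalized world-space view ray
out vec4 fragColor;

void main() {
  float tFront = texture(uFrontDepth, vUv).r;
  float tBack  = texture(uBackDepth,  vUv).r;
  float tScene = texture(uSceneDepth, vUv).r;
  // Start no earlier than the viewpoint; stop at whichever comes
  // first, the cube's back face or the opaque scene surface.
  float t    = max(tFront, 0.0);
  float tEnd = min(tBack, tScene);
  vec4 acc = vec4(0.0);
  for (int i = 0; i < 8192; ++i) {
    if (t >= tEnd || acc.a >= 0.99) break;
    vec3 p  = uCameraPos + vRayDir * t;
    vec3 tc = (p - uVolMin) / uVolSize;    // world -> [0,1] volume coords
    float d = texture(uVolume, tc).r;
    vec4  c = vec4(d, d, d, d * 0.05);     // toy transfer function
    acc.rgb += (1.0 - acc.a) * c.a * c.rgb;
    acc.a   += (1.0 - acc.a) * c.a;
    t += uStep;
  }
  fragColor = acc;
}`;
```

The clamp of the march interval to min(tBack, tScene) is what turns the 5000-sample P1-P4 march into the 2000-sample P1-P2 march of FIG. 6.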
In addition, the voxel cube sits at the bottom of the scene and protrudes below the ground, so, as shown in FIG. 6, the voxels of that portion do not need to be sampled.
In summary, the maximum number of samples actually required for any voxel of the voxel cube in the current scene (starting from point P0 and ending at point P3) is less than the full 5000 samples (starting from point P0 and ending at point P4), so the voxel rendering speed is greatly improved.
Therefore, for the same scene, voxel data volume, and sampling counts, the overall rendering efficiency of the scene is greatly improved compared with the traditional ray casting algorithm.
The main innovation of the present invention is the improved pipeline for rendering the voxel cube, adopting the new processes of "post-processing rendering" and "non-conventional front/back depth caching".
The invention has the following beneficial effects:
(1) the rendering errors that occur when the front face of the voxel cube is completely (or partially) behind the viewpoint are corrected;
(2) the rendering errors that occur when the back face of the voxel cube is completely (or partially) occluded by objects are corrected;
(3) rendering efficiency is improved when the voxel data volume is large and the sampling step is small.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU, characterized by comprising the following steps:
S101: creating a voxel cube whose side length equals the extent of the real field data to be represented by the voxels, and performing voxel sampling on the GPU to obtain sampling data within the real field data;
S102: creating a front-face framebuffer, rendering the front faces of the voxel cube with the sampling data, and storing the front depth data into the front-face framebuffer using a non-conventional method;
S103: creating a back-face framebuffer, rendering the back faces of the voxel cube with the sampling data, and storing the back depth data into the back-face framebuffer using a non-conventional method;
S104: creating a post-processing pass for voxel rendering, obtaining the depth data stored in the front and back framebuffers in the fragment shader, and performing stepped sampling along the current line of sight to complete rendering and display of the voxel cube.
2. The method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to claim 1, characterized in that: in step S101, voxel sampling is performed on the GPU in the world coordinate system.
3. The method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to claim 1, characterized in that: the sampling data in step S101 includes: depth data of the actual scene, front depth data of the voxel cube, and back depth data of the voxel cube.
4. The method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to claim 1, characterized in that: in step S102 and step S103, the depth test is disabled when the front and back framebuffers are created and used.
5. The method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to claim 1, characterized in that: the front and back depth data in steps S102-S103 may take any value between -50,000 and +50,000 meters.
6. The method for performing ultra-efficient rendering of GIS space three-dimensional voxel data based on a GPU according to claim 1, characterized in that: in step S104, when sampling along the current line of sight, whether to terminate rendering early is additionally determined from the actual scene depth value.
CN202110747224.9A (priority date 2021-07-02, filing date 2021-07-02) Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU; granted as CN113256784B; status: Active

Priority Applications (1)

Application Number: CN202110747224.9A (granted as CN113256784B)
Priority Date: 2021-07-02 · Filing Date: 2021-07-02
Title: Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU

Publications (2)

CN113256784A (published 2021-08-13)
CN113256784B (published 2021-09-28)

Family

ID=77190479

Family Applications (1)

CN202110747224.9A (Active, granted as CN113256784B) · Priority Date 2021-07-02 · Filing Date 2021-07-02 · Title: Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU

Country Status (1)

CN: CN113256784B (granted)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604453A (en) * 2009-07-08 2009-12-16 西安电子科技大学 Large-scale data field volume rendering method based on partition strategy
CN102332179A (en) * 2010-09-20 2012-01-25 董福田 Three-dimensional model data simplification and progressive transmission methods and devices
CN104331918A (en) * 2014-10-21 2015-02-04 无锡梵天信息技术股份有限公司 Occlusion culling and acceleration method for drawing outdoor ground surface in real time based on depth map
CN107093207A (en) * 2017-04-12 2017-08-25 武汉大学 A kind of dynamic and visual method of the natural gas leaking diffusion based on GPGPU
CN111602174A (en) * 2018-01-08 2020-08-28 普罗热尼奇制药公司 System and method for rapidly segmenting images and determining radiopharmaceutical uptake based on neural network
US20190272665A1 (en) * 2018-03-05 2019-09-05 Verizon Patent And Licensing Inc. Three-dimensional voxel mapping
CN110570428A (en) * 2019-08-09 2019-12-13 浙江合信地理信息技术有限公司 method and system for segmenting roof surface patch of building from large-scale image dense matching point cloud
US20210150720A1 (en) * 2019-11-14 2021-05-20 Nio Usa, Inc. Object detection using local (ground-aware) adaptive region proposals on point clouds

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吕广宪 et al., "Research on High-Resolution Solid Voxelization Based on GPU Acceleration", Geography and Geo-Information Science *
束搏 et al., "A GPU-Based Visual Hull Construction Method", Proceedings of the 4th Conference on Intelligent CAD and Digital Entertainment *
鲁林, "Research on CUDA-Based Ray Casting Volume Rendering for Large-Scale Data", China Masters' Theses Full-text Database, Information Science and Technology *

Also Published As

CN113256784B (published 2021-09-28)

Similar Documents

Publication Publication Date Title
CN109285220B (en) Three-dimensional scene map generation method, device, equipment and storage medium
US10984582B2 (en) Smooth draping layer for rendering vector data on complex three dimensional objects
CN110738721B (en) Three-dimensional scene rendering acceleration method and system based on video geometric analysis
CN108463837B (en) System and method for rendering multiple levels of detail
KR101140460B1 (en) Tile based graphics rendering
Mattausch et al. Chc++: Coherent hierarchical culling revisited
JP6356784B2 (en) Apparatus and method for radiance transfer sampling for augmented reality
KR101482578B1 (en) Multi-view ray tracing using edge detection and shader reuse
EP3346448B1 (en) Graphics processing method and system
KR100829561B1 (en) Method for rendering 3D graphic data and apparatus therefore
EP2960869A2 (en) Ct system for security check and method thereof
US9019271B2 (en) Z-culling method, three-dimensional graphics processing method and apparatus threrof
KR20100035622A (en) A fragment shader for a hybrid raytracing system and method of operation
EP2806396A1 (en) Sparse light field representation
EP2369565A1 (en) Method for re-using photorealistic 3D landmarks for nonphotorealistic 3D maps
US8614704B2 (en) Method and apparatus for rendering 3D graphics data
CN112764004A (en) Point cloud processing method, device, equipment and storage medium
Papaioannou et al. Real-time volume-based ambient occlusion
CN114511659B (en) Volume rendering optimization method under digital earth terrain constraint
CN113256784B (en) Method for performing super-efficient drawing of GIS space three-dimensional voxel data based on GPU
US11488332B1 (en) Intensity data visualization
Solteszova et al. Output‐Sensitive Filtering of Streaming Volume Data
Haeling et al. Dense urban scene reconstruction using stereo depth image triangulation
CN114155346B (en) Data processing method and device for terrain matching
Böhm Terrestrial LiDAR in urban data acquisition

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 430000 Wuda science and Technology Park, Jiangxia Avenue, Miaoshan District, Donghu Development Zone, Wuhan City, Hubei Province

Patentee after: Geospace Information Technology Co.,Ltd.

Address before: 430000 Wuda science and Technology Park, Jiangxia Avenue, Miaoshan District, Donghu Development Zone, Wuhan City, Hubei Province

Patentee before: WUDA GEOINFORMATICS Co.,Ltd.