US10776111B2 - Point cloud data method and apparatus - Google Patents

Point cloud data method and apparatus

Info

Publication number
US10776111B2
Authority
US
United States
Prior art keywords
point cloud
point
data set
cloud data
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/988,380
Other versions
US20190018680A1 (en)
Inventor
Ivan Charamisinau
Michael Burenkov
Dmitry DATKO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Topcon Positioning Systems Inc
Original Assignee
Topcon Positioning Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Topcon Positioning Systems Inc filed Critical Topcon Positioning Systems Inc
Priority to US15/988,380
Assigned to TOPCON POSITIONING SYSTEMS, INC. reassignment TOPCON POSITIONING SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BURENKOV, MICHAEL, DATKO, Dmitry
Assigned to TOPCON POSITIONING SYSTEMS, INC. reassignment TOPCON POSITIONING SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHARAMISINAU, IVAN, DATKO, Dmitry, BURENKOV, MICHAEL
Priority to EP18752895.5A
Priority to PCT/US2018/041181
Publication of US20190018680A1
Application granted
Publication of US10776111B2
Legal status: Active (adjusted expiration)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30025 Format conversion instructions, e.g. Floating-Point to Integer, decimal conversion
    • G06K9/0063
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2228 Indexing structures
    • G06F16/2246 Trees, e.g. B+trees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/56 Particle system, point based geometry or rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/001 Model-based coding, e.g. wire frame

Definitions

  • the present invention relates to point cloud processing, and, more particularly, to point cloud rendering for real-time point cloud data collected from a variety of sensor types.
  • 3D sensing systems and 3D imaging data are commonly used for generating 3D images of a location for use in various applications.
  • 3D images are commonly used for generating topographical maps or for surveillance of a location, and such sensing systems typically operate by capturing elevation data associated with the location.
  • an example of one well-known 3D imaging system that generates 3D point cloud data is a Light Detection and Ranging (LIDAR) system, in which data is generated by recording multiple range echoes from a single pulse of laser light to generate a frame (also referred to as an image frame).
  • Each frame of LIDAR data is comprised of a collection of points in three dimensions (i.e., a 3D point cloud) which correspond to multiple range echoes within a sensor aperture. These points may be organized into so-called “voxels” which represent values on a regular grid in a 3D space.
  • Voxels used in 3D imaging are akin to pixels used in a two-dimensional (2D) imaging device context. These frames may be processed to reconstruct a 3D image of a location where each point in the 3D point cloud has an individual (x, y, z) value representing the actual surface within the 3D scene under investigation.
  • LIDAR sensors collect vast amounts of data with scan rates approaching one million measurements per second. As such, these 3D sensing systems make efficient storage, data processing, and data visualization challenging given the data set sizes collected.
  • a point cloud rendering method and apparatus for real-time point cloud data collection from a variety of sensor types is provided that delivers enhanced performance, including reduced processing requirements, limited local memory consumption, and optimized overall data visualization.
  • the processing of large 3D point cloud data sets collected from a variety of sensors is facilitated using a particular tree traversing library.
  • at the core of the library, in accordance with the embodiment, is a succinct representation of full binary tree(s) together with algorithm(s) for operating on that representation: traversing the tree and executing so-called range queries on the binary tree.
  • This representation allows for an efficient and generic way for assigning attributes of any type to inner and leaf nodes of full binary trees.
  • a provably succinct representation and traversing algorithm is employed for arbitrary binary trees and generic storage for attributes assigned to inner and leaf nodes of binary trees.
  • This provably succinct binary tree representation is utilized for building binary space partition trees and to apply spatial indexing, executing spatial queries, and applying nearest neighbor searches in point clouds.
  • the tree traversing library provides a way to store and to efficiently traverse extremely large (e.g., billions of nodes) binary trees, retrieve and process assigned node attribute data, and run queries; all completely out of core, without the requirement to load and keep all the data in memory.
  • the tree traversing library, associated data structures, algorithms and/or file formats can be used generically for efficient storing and running queries on a collection of arbitrary structured data types of fixed and variable length.
  • the tree traversing library can further be considered a lossless or controlled-lossy data compression technique providing efficient arbitrary access to compressed data that does not require full decompression. This enables efficient point cloud storage with improved data compression, high-performance queries, and flexibility in supporting attributes for each point.
  • FIG. 1 shows a flowchart of illustrative operations for out-of-core rendering of large point cloud data sets in accordance with an embodiment
  • FIG. 2 shows an illustrative BSP tree constructed in accordance with an embodiment
  • FIG. 3 shows an illustrative compacted prefix tree in accordance with the embodiment
  • FIG. 4 shows a flowchart of illustrative operations for frustum culling using the compacted prefix tree of FIG. 3 in accordance with an embodiment
  • FIG. 5 shows illustrative results obtained in performing the operations set forth in FIG. 4 ;
  • FIG. 6 is a high-level block diagram of an exemplary computer in accordance with an embodiment
  • FIG. 7 shows a flowchart of illustrative operations for point cloud filtering in accordance with an embodiment
  • FIG. 8 shows an illustrative point cloud filtered in accordance with the operation of FIG. 7 in accordance with an embodiment
  • FIG. 9 shows an illustrative example of the principal component grids having overlapping boxes for a point cloud in accordance with an embodiment
  • FIG. 10 shows an illustrative Gauss grid for a point cloud in accordance with an embodiment.
  • succinct representations of binary trees are well-known. Existing representations allow for efficient pre-order or level-order tree traversal and provide a way to efficiently assign data attributes of the same type to tree nodes.
  • known succinct implementations do not guarantee good locality of data and do not allow adding node attributes of arbitrary-type separately to inner nodes and leaves.
  • by “good locality” it is meant herein that accessing node data and traversing to nodes that are close in the tree requires access to bytes that reside close(r) in memory.
  • the embodiments herein are both succinct and allow fast random traversing and separate attributes for inner-nodes and leaves of the tree.
  • the embodiments herein exploit certain succinct binary tree representations for compact data storage and out-of-core (i.e., without loading the whole data set into memory) data rendering and processing, and facilitate a way to store and access point cloud coordinates and attribute data generically, allowing for an enhanced and unified application programming interface (API) across various versions of data storage formats.
  • This further allows for efficient arbitrary tree traversals (including pre-order, in-order, and/or post-order traversals), and for the efficient assigning of different types of data attributes to inner tree nodes and to tree leaves.
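The patent does not spell out its encoding at this point, but the properties it claims (succinctness, random traversal, separate inner-node and leaf attributes) match classical level-order bitmap encodings for full binary trees. A minimal sketch of such a scheme; the class and method names are illustrative, not taken from the patent:

```python
# Succinct encoding of a FULL binary tree (every node has 0 or 2 children):
# a level-order bitmap plus a rank structure for navigation.

class SuccinctFullBinaryTree:
    """bits[i] is 1 if node i (level order) is internal, 0 if it is a leaf.
    The children of internal node i sit at positions 2*rank1(i) - 1 and
    2*rank1(i), where rank1(i) counts 1-bits in bits[0..i] inclusive."""

    def __init__(self, bits):
        self.bits = bits
        self.rank = [0]                # prefix counts of 1-bits; a real
        for b in bits:                 # implementation would use an
            self.rank.append(self.rank[-1] + b)  # o(n)-overhead rank index

    def is_leaf(self, i):
        return self.bits[i] == 0

    def left_child(self, i):
        return 2 * self.rank[i + 1] - 1

    def right_child(self, i):
        return 2 * self.rank[i + 1]

    def preorder(self, i=0):
        yield i
        if not self.is_leaf(i):
            yield from self.preorder(self.left_child(i))
            yield from self.preorder(self.right_child(i))

# Root (internal), its left child internal with two leaves, right child a leaf:
t = SuccinctFullBinaryTree([1, 1, 0, 0, 0])
assert list(t.preorder()) == [0, 1, 3, 4, 2]
```

Inner-node and leaf attributes can then live in two separate flat arrays indexed by the rank of 1-bits and 0-bits respectively, which is what permits attributes of different types for inner nodes and leaves.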
  • FIG. 1 shows a flowchart of illustrative operations 100 for out-of-core rendering of large point cloud data sets (e.g., LIDAR data sets) in accordance with an embodiment.
  • a set of n floating-point coordinates is received, defined as {[x0float, y0float, z0float], . . . , [xn−1float, yn−1float, zn−1float]}, along with a specified precision p.
  • the scrambled coordinates are lexicographically sorted as unsigned integers employing any number of conventional sorting algorithms to construct and output the constructed prefix tree, at steps 135 and 140 , respectively.
  • the constructed prefix tree is a binary space partitioning (BSP) tree constructed using well-known Morton encoding in which tree nodes represent hyper-rectangular regions of space and tree branches represent hyper-planar partitions of these hyper-rectangles into halves.
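The receive/quantize/sort steps described above can be sketched as follows. The quantization origin (0) and bit width (21 bits per axis) are assumptions for illustration, not values from the patent:

```python
def quantize(v, origin, p):
    """Map a float coordinate to an unsigned integer on a grid of step p."""
    return int(round((v - origin) / p))

def morton3(x, y, z, bits=21):
    """Interleave the low `bits` bits of x, y, z into one Morton code.
    Lexicographically sorting these codes as unsigned integers orders the
    points along the leaves of the BSP prefix tree."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

pts = [(1.25, 0.5, 2.0), (0.0, 0.25, 0.5), (1.0, 1.0, 1.0)]
p = 0.25  # specified precision
codes = sorted(morton3(*(quantize(c, 0.0, p) for c in pt)) for pt in pts)
```

Each bit of a sorted Morton code selects one half of a bounding-box split, so the sorted order corresponds to a pre-order walk of the BSP tree's leaves.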
  • FIG. 2 shows an illustrative BSP tree 200 constructed in accordance with an embodiment and the aforementioned operations.
  • BSP tree 200 is comprised of root node 205 and tree nodes 210 - 1 through 210 - 16 .
  • the partitioning is shown in graph 220. For example, given a two-dimensional point cloud whose points (shown as circles) lie within a 4-by-4 bounding box, with X and Y coordinates given in binary code, the tree (shown on the right-hand side of FIG. 2) is built by dividing the bounding box in half along the X-axis (where x0 represents the most significant bit of the X coordinate).
  • FIG. 3 shows an illustrative compacted prefix tree 300 in accordance with the embodiment.
  • compacted prefix tree 300 comprises root node 305 and child nodes 310 - 1 through 310 - 11 .
  • Compacted prefix tree 300 has many useful properties; for example, such trees have fewer nodes and are full binary trees, which is required by the operations herein.
  • compacted prefix tree 300 is employed to store large amounts of data (e.g., large numbers of nodes) and is traversed in order to retrieve and process assigned node attribute data, and execute any number of different queries, all completely out-of-core.
  • well-known techniques such as spatial indexing, spatial queries, viewing frustum, and nearest neighbor searching in point clouds are applied to compacted prefix tree 300 .
  • view frustum culling is the process of removing, from the rendering process, objects that lie completely outside the viewing frustum. Rendering such objects from the collected data set would waste precious processing cycles given that these objects are not directly visible.
  • this is usually done using bounding volumes surrounding the objects rather than the objects themselves, as will now be discussed.
  • FIG. 4 shows a flowchart of illustrative operations 400 for a spatial query (i.e., enumerate all points within given query region of space) using the prefix tree of FIG. 3 in accordance with an embodiment.
  • every query starts, at step 410 , as a query for the root node.
  • a prefix tree query is executed (and the axis-aligned node bounding box is calculated) in order to determine, at step 425 , whether there is an intersection of the query region with the axis-aligned bounding box (AABB) of the root node (e.g., root node 305 ). If there is no intersection, the process is aborted.
  • step 430 a determination is made if the query region completely contains the AABB. If so, the process short-circuits to the leaf descendants (i.e., directly enumerate all leaves under this node) at step 435 . Otherwise, if the query region partially contains the AABB, as determined in step 430 , there is a recursive processing of the node's children, if any, at steps 440 and 445 for the left and right children. Recursive processing of the node's children means that the process node query steps (i.e., steps 420 - 445 ) are executed for both the left child and right child of the current node.
  • processing child nodes can invoke processing of grand-children nodes and so on.
  • eventually the process reaches leaf nodes, which have no children and whose bounding box degenerates to a single point; partial intersection is then impossible, so each such node is either entirely inside or entirely outside the query region.
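The query recursion of FIG. 4 can be sketched as follows. The toy 2-D tree construction below is a hypothetical stand-in for the compacted prefix tree; only the abort / short-circuit / recurse logic mirrors the described steps:

```python
# Toy 2-D BSP tree and the FIG. 4 style spatial-query recursion.

class Node:
    def __init__(self, points, depth=0):
        self.points = points
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        self.aabb = (min(xs), min(ys), max(xs), max(ys))
        self.left = self.right = None
        if len(points) > 1:                 # split on alternating axes
            pts = sorted(points, key=lambda p: p[depth % 2])
            mid = len(pts) // 2
            self.left = Node(pts[:mid], depth + 1)
            self.right = Node(pts[mid:], depth + 1)

def intersects(q, b):   # rectangles given as (xmin, ymin, xmax, ymax)
    return not (q[2] < b[0] or b[2] < q[0] or q[3] < b[1] or b[3] < q[1])

def contains(q, b):
    return q[0] <= b[0] and q[1] <= b[1] and b[2] <= q[2] and b[3] <= q[3]

def query(node, region, out):
    if not intersects(region, node.aabb):
        return                        # step 425: no intersection, abort
    if contains(region, node.aabb) or node.left is None:
        out.extend(node.points)       # step 435: short-circuit to leaves
        return
    query(node.left, region, out)     # steps 440/445: recurse into the
    query(node.right, region, out)    # left and right children

root = Node([(0, 0), (1, 2), (3, 1), (4, 4)])
found = []
query(root, (0.5, 0.5, 3.5, 3.5), found)   # found == [(1, 2), (3, 1)]
```

A leaf's bounding box is a single point, so for leaves the intersection test alone decides inclusion, exactly as the text notes.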
  • FIG. 5 shows illustrative results 500 obtained in performing the operations set forth in FIG. 4 .
  • query region 505 results in root node 510 having children nodes 515 - 1 through 515 - 7 .
  • Query region 505 partially intersects the bounding box of the full data set (i.e., AABB of root node), so the children are processed.
  • each node whose bounding box is completely contained within query region 505 is enumerated as a result of the query.
  • FIG. 6 is a high-level block diagram of an exemplary computer 600 that may be used for implementing point cloud rendering for real-time point cloud data collection from a variety of sensor types in accordance with the various embodiments herein.
  • Computer 600 comprises a processor 610 operatively coupled to a data storage device 620 and a memory 630 .
  • Processor 610 controls the overall operation of computer 600 by executing computer program instructions that define such operations.
  • Communications bus 660 facilitates the coupling and communication between the various components of computer 600 .
  • computer 600 may be any type of computing device such as a computer, tablet, server, mobile device, or smart phone, to name just a few.
  • the computer program instructions may be stored in data storage device 620 , or a non-transitory computer readable medium, and loaded into memory 630 when execution of the computer program instructions is desired.
  • the steps of the disclosed method can be defined by the computer program instructions stored in memory 630 and/or data storage device 620 and controlled by processor 610 executing the computer program instructions.
  • the computer program instructions can be implemented as computer executable code programmed by one skilled in the art to perform the illustrative operations defined by the disclosed method.
  • processor 610 executes an algorithm defined by the disclosed method.
  • Computer 600 also includes one or more communication interfaces 650 for communicating with other devices via a network (e.g., a wireless communications network) or communications protocol (e.g., Bluetooth®).
  • Computer 600 also includes one or more input/output devices 640 that enable user interaction with the user device (e.g., camera, display, keyboard, mouse, speakers, microphone, buttons, etc.).
  • Processor 610 may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer 600 .
  • Processor 610 may comprise one or more central processing units (CPUs), for example.
  • Processor 610 , data storage device 620 , and/or memory 630 may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs).
  • Data storage device 620 and memory 630 each comprise a tangible non-transitory computer readable storage medium.
  • Data storage device 620 , and memory 630 may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices.
  • Input/output devices 640 may include peripherals, such as a camera, printer, scanner, display screen, etc.
  • input/output devices 640 may include a display device such as a cathode ray tube (CRT), plasma or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer 600 .
  • a point cloud filter method and apparatus for use in point cloud rendering from real-time point cloud data collection (e.g., as detailed herein above) from a variety of sensor types is provided that delivers enhanced performance including reducing processing requirements, limiting local memory consumption, and optimizing overall data visualization.
  • a point cloud filter is employed to smooth and/or resample collected point cloud data (e.g., as detailed herein above), thereby increasing precision while preserving the details and features of the original point cloud data set.
  • flat surfaces defined in the point cloud data set are detected and adjustments are made in individual point position (e.g., towards the associated surface) in order to reduce the overall noise level in the collected point cloud data.
  • an estimation is made with respect to the overall feature amount in the point cloud data set and resampling is applied to those particular areas requiring less feature definition (e.g., planar surface) in order to preserve the original density for areas with increased feature definitions.
  • at least three parameters are utilized in the removal of redundant data points: (i) sigma (σ): defined as the size of the Gaussian filter aperture, such that all feature details smaller than σ will be treated as noise and removed; (ii) feature preservation: the overall feature preservation applied is governed by a specified feature parameter in the range of 1 to 1000 (where 1 means the entire data set is smoothed, and 1000 means no smoothing is applied whatsoever and everything in the point cloud data set is a feature), with optimal feature preservation values in the range of 10 to 16; and (iii) caching type: this parameter dictates whether filter caching will be applied, which translates to the overall processing speed (i.e., either precise or fast) applied for feature detection in the point cloud filter.
  • that is, two computational speeds are facilitated, defined as “precise” and “fast”, respectively, wherein the fast speed applies filter caching and the precise speed does not apply any such caching.
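As a reading aid, the three parameters might be collected as follows; the class name, defaults, and validation are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class FilterParams:
    """Hypothetical container for the three filter parameters; names,
    defaults, and validation ranges are illustrative only."""
    sigma: float           # Gaussian aperture: details smaller than sigma
                           # are treated as noise and smoothed away
    feature: int = 13      # 1 = smooth everything ... 1000 = no smoothing;
                           # the text cites 10-16 as an optimal range
    caching: str = "fast"  # "fast" batches through the filter cache,
                           # "precise" skips caching

    def __post_init__(self):
        if self.sigma <= 0:
            raise ValueError("sigma must be positive")
        if not 1 <= self.feature <= 1000:
            raise ValueError("feature must be in [1, 1000]")
        if self.caching not in ("precise", "fast"):
            raise ValueError("caching must be 'precise' or 'fast'")
```

Validating the ranges up front mirrors the text's constraint that the feature parameter lies in [1, 1000] and that caching is a two-valued choice.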
  • FIG. 7 shows a flowchart of illustrative operations 700 for point cloud filtering in accordance with an embodiment.
  • the operations encompass both of the aforementioned precise and fast embodiments which are differentiated, in part, by the application of the filter cache, as will be detailed below.
  • the point cloud data set is received and, at step 710 , redundant points are removed.
  • as noted above, at least three parameters (i.e., σ, feature preservation, and caching) govern the filtering; these parameters may be, illustratively, user-defined or automatically assigned by the filtering system/processor.
  • a determination is made as to whether caching will be applied.
  • caching may be applied to accelerate the overall processing speed of the filtering operations (i.e., the fast speed embodiment) or not applied for more precision (i.e., the precise speed embodiment). If caching is to be applied, the data points are stored in a filter cache, at step 720, and are processed until the full point cloud data set is processed.
  • the filtering operations continue such that for every point P i in the point cloud, all points P j that belong to some area (A) around point P i are processed.
  • area A is defined as a 4σ×4σ×4σ cube in well-known earth-centered, earth-fixed (ECEF) coordinates.
  • principal component analysis (PCA) is then performed on the points in area A, yielding eigenvalues and eigenvectors for the neighborhood of each point.
  • FIG. 8 shows an illustrative point cloud 800 filtered in accordance with an embodiment.
  • point cloud 800 is filtered, as detailed above, such that for every point P i in the point cloud, all points P j 810 - 1 that belong to area (A) 805 around point P i 810 - 2 are processed with eigenvalues 815 - 1 , 815 - 2 and 815 - 3 , and eigenvectors 820 - 1 , 820 - 2 and 820 - 3 , respectively.
  • v2 is perpendicular to the other respective eigenvectors (i.e., v1 and v3)
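The per-point PCA step can be sketched with NumPy; the function name and the synthetic planar data are illustrative assumptions:

```python
import numpy as np

def neighborhood_pca(points, center, half_extent):
    """points: (n, 3) array. Area A is the cube of side 2*half_extent
    centered on `center` (the text uses a 4-sigma cube)."""
    nbrs = points[np.all(np.abs(points - center) <= half_extent, axis=1)]
    S = np.cov(nbrs.T)                    # 3x3 covariance matrix of area A
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    return eigvals, eigvecs, nbrs

# Noisy samples of the z = 0 plane: the smallest-variance eigenvector
# (eigvecs[:, 0], playing the role of v3) approximates the surface normal.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       rng.normal(0.0, 0.01, 200)])
vals, vecs, _ = neighborhood_pca(pts, np.zeros(3), 1.0)
normal = vecs[:, 0]
```

For a locally flat patch the smallest-variance eigenvector approximates the surface normal, so the Gaussian correction Fi can be projected onto it, Gi = (Fi · v3) v3, to move the point toward the detected surface.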
  • Gaussian filtration (a well-known technique) for point P i 810 - 1 is performed in order to find position correction vector F i as follows:
  • a new correction Gi = (Fi·v3)v3 is then calculated, i.e., the projection of Fi onto the eigenvector v3.
  • the filtered point cloud data set results are output.
  • if the point cloud comprises mainly surfaces (e.g., terrain, buildings, etc.) with a density given in points/m2, then the overall complexity is O(N·Density·σ2). As such, processing may become slower with the square of the sigma parameter setting (i.e., for deep filtration). It will also be noted that this is just the complexity of the filtration itself; the complexity of the point cloud database random access is different (O(N·log N)).
  • the complexity associated with the “precise” speed embodiment herein, i.e., of calculating S and Fi, is O(N·Mavg), where Mavg is the average number of points per query (at steps 825 and 835, as detailed above).
  • the point cloud mainly comprises surfaces (e.g., terrain, buildings, etc.) with a “Density” given by points/m 2 and a total area A in m 2
  • the complexity may become O(N2) if the density increases over a constant area, which is significantly slower than the theoretical O(N·log N).
  • processing speed may be varied, in accordance with an embodiment, by employing filter caching (i.e., fast speed) such that the filtering operations build several grids on top of the filter cache every time the cache is reloaded.
  • all the point calculations detailed above will use only these several grids, such that the overall complexity of the filtering operations does not depend upon the total number of points in the filter cache, and the complexity approaches O(N).
  • the fast speed embodiment processes points from the data set in batches. Each batch takes the points that fall into a box and builds three (3) different grids on top of the box (i.e., the initial box plus 2σ on all sides). All per-point calculations then use only these grids, so that the overall complexity does not depend on the number of points in the filter cache and is thus close to O(N).
  • the 3 grids are defined as: big, small, and Gauss.
  • Big and small grids are used to store principal component analysis results, and the Gauss grid is used for Gauss filter calculations. Big and small grids have 9×9×9 cells; each cell represents a box of 4σ×4σ×4σ (i.e., big grid) or 2σ×2σ×2σ (i.e., small grid) centered around (x0+i·σ, y0+j·σ, z0+k·σ), where (x0, y0, z0) is the center of the filter cache, and i, j, k are integers ranging from −4 to +4.
  • the boxes for grid cells overlap; when the filter cache is loaded, principal component analysis is performed (i.e., the S matrix, eigenvectors, and eigenvalues are calculated), and the eigenvalues and eigenvectors are stored separately for each cell in these grids, with an illustrative example shown in FIG. 9.
  • the three principal component grids have overlapping boxes (i.e., box 910 , 920 , and 930 ) with eigenvalues (i.e., eigenvalue 940 and 950 ) and eigenvectors (i.e., eigenvectors 960 and 970 ) stored from the principal component analysis.
  • the Gauss grid has 24×24×24 cells of ½σ×½σ×½σ each, with no overlaps, and covers the whole 12σ×12σ×12σ extent of the filter cache.
  • FIG. 10 shows an illustrative Gauss grid 1040 for a point cloud in accordance with an embodiment.
  • each cell (i.e., cells 1005, 1010, 1015, 1020) stores the number m of points of the point cloud (i.e., points 1030-1, 1030-2, 1030-3, 1030-N) that fall in that cell, together with their center of mass (i.e., center masses 1050-1, 1050-2, 1050-3, and 1050-4), as shown in FIG. 10.
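The Gauss grid accumulation of FIG. 10 might be sketched as follows; the array layout and function name are assumptions, while the cell size (σ/2), cell count (24 per axis), and stored quantities (point count and center of mass) follow the text:

```python
import numpy as np

def build_gauss_grid(points, center, sigma, cells=24):
    """Accumulate per-cell point count and center of mass over `cells`^3
    non-overlapping boxes of side sigma/2 (24 cells of sigma/2 span the
    12-sigma filter cache described in the text)."""
    step = sigma / 2.0
    half = cells * step / 2.0             # half-extent of the covered cube
    counts = np.zeros((cells, cells, cells), dtype=np.int64)
    sums = np.zeros((cells, cells, cells, 3))
    idx = np.floor((points - (center - half)) / step).astype(int)
    in_box = np.all((idx >= 0) & (idx < cells), axis=1)
    for (i, j, k), p in zip(idx[in_box], points[in_box]):
        counts[i, j, k] += 1
        sums[i, j, k] += p
    with np.errstate(invalid="ignore", divide="ignore"):
        centers = sums / counts[..., None]  # NaN where a cell is empty
    return counts, centers

pts = np.array([[0.10, 0.10, 0.10],
                [0.12, 0.11, 0.10],
                [2.00, 2.00, 2.00]])
counts, centers = build_gauss_grid(pts, np.zeros(3), sigma=1.0)
```

Because each point contributes only a count and a running coordinate sum, the grid is built in a single pass over the filter cache, which is what keeps the subsequent Gaussian filter work independent of the number of cached points.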
  • calculating the correction vector Fi proceeds as detailed above (see steps 735-740 in FIG. 7).
  • the Gaussian filter calculation always takes a number of steps defined by 7*7*7 regardless of the density and Sigma parameters.
  • the correction vector F i calculation is given by:
  • the small grids help to detect smaller objects (e.g., objects that could be lost in the big grid if the rest of the surface is perfectly flat), while the big grids help with large objects having many points, which can disturb a flat surface even from farther away, where the small grid does not see the object at all.
  • the points in the input filter cache are enumerated one time to build the respective grids.
  • filtration also takes a fixed number of steps per point, and as mentioned above, O(N) defines the complexity of the filtration itself and the complexity of point cloud database random access is at least O(N*log N).
  • any flowcharts, flow diagrams, state transition diagrams, pseudo code, program code and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer, machine or processor, whether or not such computer, machine or processor is explicitly shown.
  • One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and that a high level representation of some of the components of such a computer is for illustrative purposes.


Abstract

A point cloud rendering method and apparatus for real-time point cloud data collection from a variety of sensor types is provided that delivers enhanced performance including reducing processing requirements, limiting local memory consumption and optimizing overall data visualization.

Description

CROSS REFERENCE TO RELATED APPLICATION
This application claims the benefit of U.S. Provisional Application No. 62/531,495, filed Jul. 12, 2017, the disclosure of which is herein incorporated by reference in its entirety.
TECHNICAL FIELD
The present invention relates to point cloud processing, and, more particularly, to point cloud rendering for real-time point cloud data collected from a variety of sensor types.
BACKGROUND
Three-dimensional (3D) sensing systems and 3D imaging data are commonly used for generating 3D images of a location for use in various applications. For example, such 3D images are commonly used for generating topographical maps or for surveillance of a location, and such sensing systems typically operate by capturing elevation data associated with the location.
An example of one well-known 3D imaging system that generates 3D point cloud data is a Light Detection and Ranging (LIDAR) system. Typically, such LIDAR systems use a high-energy laser, optical detector, and timing circuitry to determine the distance to a target. In such LIDAR systems, data is generated by recording multiple range echoes from a single pulse of laser light to generate a frame (also referred to as an image frame). Each frame of LIDAR data is comprised of a collection of points in three dimensions (i.e., a 3D point cloud) which correspond to multiple range echoes within a sensor aperture. These points may be organized into so-called “voxels” which represent values on a regular grid in a 3D space. Voxels used in 3D imaging are akin to pixels used in a two-dimensional (2D) imaging device context. These frames may be processed to reconstruct a 3D image of a location where each point in the 3D point cloud has an individual (x, y, z) value representing the actual surface within the 3D scene under investigation.
As will be appreciated, LIDAR sensors collect vast amounts of data with scan rates approaching one million measurements per second. As such, these 3D sensing systems make efficient storage, data processing, and data visualization challenging given the data set sizes collected.
Therefore, a need exists for a point cloud rendering and point cloud filtering technique for real-time point cloud data collection from a variety of sensor types that delivers enhanced performance including reducing processing requirements, limiting local memory consumption and optimizing overall data visualization.
BRIEF SUMMARY OF THE EMBODIMENTS
In accordance with various embodiments, a point cloud rendering method and apparatus for real-time point cloud data collection from a variety of sensor types is provided that delivers enhanced performance including reducing processing requirements, limiting local memory consumption and optimizing overall data visualization.
More particularly, in accordance with an embodiment, the processing of large 3D point cloud data sets collected from a variety of sensors (e.g., mobile mapping systems, terrestrial laser scanners, and unmanned aircraft systems, to name just a few) is facilitated using a particular tree traversing library. At the core of the library, in accordance with the embodiment, is a succinct representation of full binary tree(s) and algorithm(s) for operating with the succinct representation for tree traversing and executing so-called range queries on the binary tree. This representation allows for an efficient and generic way of assigning attributes of any type to inner and leaf nodes of full binary trees.
In conjunction with this full binary succinct representation, in accordance with the embodiment, a provably succinct representation and traversing algorithm is employed for arbitrary binary trees and generic storage for attributes assigned to inner and leaf nodes of binary trees. This provably succinct binary tree representation is utilized for building binary space partition trees and to apply spatial indexing, executing spatial queries, and applying nearest neighbor searches in point clouds.
The tree traversing library, in accordance with an embodiment, provides a way to store and to efficiently traverse extremely large (e.g., billions of nodes) binary trees, retrieve and process assigned node attribute data, and run queries; all completely out of core, without the requirement to load and keep all the data in memory. In this way, the tree traversing library, associated data structures, algorithms and/or file formats can be used generically for efficient storing and running queries on a collection of arbitrary structured data types of fixed and variable length. The tree traversing library can further be considered as a lossless controlled lossy data compression technique providing efficient arbitrary access to compressed data which does not require full decompression. This enables efficient point cloud storage with improved data compression, high performance queries, and flexibility in supporting attributes to each point(s).
These and other advantages of the embodiments will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a flowchart of illustrative operations for out-of-core rendering of large point cloud data sets in accordance with an embodiment;
FIG. 2 shows an illustrative BSP tree constructed in accordance with an embodiment;
FIG. 3 shows an illustrative compacted prefix tree in accordance with the embodiment;
FIG. 4 shows a flowchart of illustrative operations for frustum culling using the compacted prefix tree of FIG. 3 in accordance with an embodiment;
FIG. 5 shows illustrative results obtained in performing the operations set forth in FIG. 4;
FIG. 6 is a high-level block diagram of an exemplary computer in accordance with an embodiment;
FIG. 7 shows a flowchart of illustrative operations for point cloud filtering in accordance with an embodiment;
FIG. 8 shows an illustrative point cloud filtered in accordance with the operation of FIG. 7 in accordance with an embodiment;
FIG. 9 shows an illustrative example of the principal component grids having overlapping boxes for a point cloud in accordance with an embodiment; and
FIG. 10 shows an illustrative Gauss grid for a point cloud in accordance with an embodiment.
DETAILED DESCRIPTION
In accordance with various embodiments, a point cloud rendering method and apparatus for real-time point cloud data collection from a variety of sensor types is provided that delivers enhanced performance including reducing processing requirements, limiting local memory consumption and optimizing overall data visualization.
More particularly, in accordance with an embodiment, the processing of large 3D point cloud data collected from a variety of sensors (e.g., mobile mapping systems, terrestrial laser scanners, and unmanned aircraft systems, to name just a few) is facilitated using a particular tree traversing library. At the core of the library, in accordance with an embodiment, is a succinct representation of full binary tree(s) and algorithm(s) for operating with the succinct representation for tree traversing and executing range queries on the binary tree. This representation allows for an efficient and generic way of assigning attributes of any type to inner and leaf nodes of full binary trees.
In conjunction with this full binary succinct representation, in accordance with an embodiment, a provably succinct representation and traversing algorithm is employed for arbitrary binary trees and generic storage for attributes assigned to inner and leaf nodes of binary trees. This provably succinct binary tree representation is utilized for building binary space partition trees and to apply spatial indexing, executing spatial queries, and applying nearest neighbor searches in point clouds.
As will be appreciated, succinct representations of binary trees are well-known. Existing representations allow for efficient pre-order or level-order tree traversal and provide a way to efficiently assign data attributes of the same type to tree nodes. However, known succinct implementations do not guarantee good locality of data and do not allow adding node attributes of arbitrary-type separately to inner nodes and leaves. By good locality, it is meant herein that accessing node data and traversing to nodes that are close in the tree requires access to bytes that reside close(er) in memory. On the other hand, the embodiments herein are both succinct and allow fast random traversing and separate attributes for inner-nodes and leaves of the tree.
The embodiments herein exploit certain succinct binary tree representations for compact data storage and out-of-core (i.e., without loading the whole data set into memory) data rendering and processing, and facilitate a way to store and access point cloud coordinates and attribute data in a generic way, allowing for an enhanced and unified application programming interface (API) across various versions of data storage formats. This further allows for efficient arbitrary tree traversals (including pre-order, in-order, and/or post-order traversals) and for the efficient assigning of different types of data attributes to inner tree nodes and to tree leaves.
FIG. 1 shows a flowchart of illustrative operations 100 for out-of-core rendering of large point cloud data sets (e.g., LIDAR data sets) in accordance with an embodiment. At step 105, a set of n floating-point coordinates is received, defined as:
{[x0^float, y0^float, z0^float] … [xn^float, yn^float, zn^float]}
At step 110, the received floating-point coordinates are translated to an origin given by:
[x̄j^float, ȳj^float, z̄j^float] = [xj^float, yj^float, zj^float] − [x0^float, y0^float, z0^float].
These coordinates are encoded, at step 115, as integers with a specified precision p (e.g., p=1000 for 1 millimeter precision or p=100 for 1 centimeter precision) as follows:
[xj^int, yj^int, zj^int] = [x̄j^float, ȳj^float, z̄j^float] · p.
Next, at step 120, the coordinates are shifted to unsigned integers given by:
[xj^uint, yj^uint, zj^uint] = [xj^int, yj^int, zj^int] + INT_MAX/2,
and m bits of (x, y, z) unsigned integer coordinates of each point are scrambled, at step 125, as follows:
{[x0 … xm][y0 … ym][z0 … zm]} → {[x0, y0, z0] … [xm, ym, zm]}.
At step 130, the scrambled coordinates are lexicographically sorted as unsigned integers, employing any number of conventional sorting algorithms, and the prefix tree is constructed and output at steps 135 and 140, respectively. The constructed prefix tree is a binary space partitioning (BSP) tree built using well-known Morton encoding, in which tree nodes represent hyper-rectangular regions of space and tree branches represent hyper-planar partitions of these hyper-rectangles into halves.
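The coordinate pipeline of steps 105-135 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the function name `morton_sort`, the 32-bit key width, and the use of a 31-bit INT_MAX are assumptions.

```python
def morton_sort(points_float, precision=1000, bits=32):
    """Sketch of steps 105-135: translate to origin, quantize with
    precision p, shift to unsigned, bit-interleave (Morton), sort."""
    INT_MAX = 2**31 - 1
    x0, y0, z0 = points_float[0]
    codes = []
    for (x, y, z) in points_float:
        # Step 110: translate coordinates to the origin (first point).
        tx, ty, tz = x - x0, y - y0, z - z0
        # Step 115: encode as integers with precision p (e.g., p=1000 for 1 mm).
        ix, iy, iz = round(tx * precision), round(ty * precision), round(tz * precision)
        # Step 120: shift to unsigned integers.
        ux, uy, uz = ix + INT_MAX // 2, iy + INT_MAX // 2, iz + INT_MAX // 2
        # Step 125: scramble bits -- interleave x, y, z bits, most significant first.
        code = 0
        for b in range(bits - 1, -1, -1):
            code = (code << 3) | (((ux >> b) & 1) << 2) | (((uy >> b) & 1) << 1) | ((uz >> b) & 1)
        codes.append(code)
    # Step 130: lexicographic sort of the scrambled keys; common prefixes of the
    # sorted keys correspond to nodes of the BSP prefix tree (step 135).
    return sorted(codes)
```

Sorting the interleaved keys groups spatially nearby points together, which is what lets the prefix-tree construction of step 135 proceed as a linear scan.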
FIG. 2 shows an illustrative BSP tree 200 constructed in accordance with an embodiment and the aforementioned operations. As shown, BSP tree 200 is comprised of root node 205 and tree nodes 210-1 through 210-16, and the partitioning is shown in graph 220. For example, given a two-dimensional point cloud with points (shown as circles) in a 4 by 4 bounding box, where the X and Y coordinates are given in binary code, the tree (shown on the right-hand side of FIG. 2) is built by dividing the bounding box in half along the X-axis (where x0 represents the most significant bit of the X coordinate). Each half is then divided along the Y-axis, where y0 represents the most significant bit of the Y coordinate. Note that branch x0=0, y0=1 is not created because the bounding box for this branch is empty. The operations are then repeated for the second most significant bit of the X and Y coordinates.
FIG. 3 shows an illustrative compacted prefix tree 300 in accordance with the embodiment. As shown, compacted prefix tree 300 comprises root node 305 and child nodes 310-1 through 310-11. Compacted prefix tree 300 has many useful properties: for example, trees of this type have fewer nodes and are full binary trees, which are required by the operations herein.
In accordance with an embodiment, compacted prefix tree 300 is employed to store large amounts of data (e.g., large numbers of nodes) and is traversed in order to retrieve and process assigned node attribute data and execute any number of different queries, all completely out-of-core. Illustratively, well-known techniques such as spatial indexing, spatial queries, viewing frustum culling, and nearest neighbor searching in point clouds are applied to compacted prefix tree 300. For example, view frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process. Rendering such objects from the collected data set would waste precious processing cycles given that these objects are not directly visible. Typically, to speed up the culling, this is done using bounding volumes surrounding the objects rather than the objects themselves, as will now be discussed.
FIG. 4 shows a flowchart of illustrative operations 400 for a spatial query (i.e., enumerating all points within a given query region of space) using the prefix tree of FIG. 3 in accordance with an embodiment. In particular, every query starts, at step 410, as a query for the root node. At step 420, a prefix tree query is executed (and the axis-aligned node bounding box is calculated) in order to determine, at step 425, whether the query region intersects the axis-aligned bounding box (AABB) of the root node (e.g., root node 305). If there is no intersection, the process is aborted. If there is an intersection then, at step 430, a determination is made as to whether the query region completely contains the AABB. If so, the process short-circuits to the leaf descendants (i.e., directly enumerates all leaves under this node) at step 435. Otherwise, if the query region only partially contains the AABB, as determined in step 430, the node's children, if any, are processed recursively at steps 440 and 445 for the left and right children. Recursive processing of the node's children means that the node query steps (i.e., steps 420-445) are executed for both the left child and the right child of the current node. Of course, processing child nodes can invoke processing of grandchild nodes, and so on. Eventually the process reaches leaf nodes, which have no children; their bounding box is a single point, so partial intersection is impossible for them and each such node is either in or out.
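The recursion of FIG. 4 (steps 410-445) can be sketched as below. The node layout (a dict with 'aabb', 'left', 'right', 'points') and the AABB representation as (lo, hi) corner tuples are illustrative assumptions, not the patent's data structures.

```python
def intersects(a, b):
    # True if AABBs a and b (each a (lo, hi) pair of coordinate tuples) overlap.
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(len(a[0])))

def contains(a, b):
    # True if AABB a completely contains AABB b.
    return all(a[0][i] <= b[0][i] and b[1][i] <= a[1][i] for i in range(len(a[0])))

def collect_leaves(node):
    # Step 435: directly enumerate all leaves under a node.
    if node['left'] is None and node['right'] is None:
        return list(node['points'])
    out = []
    for child in (node['left'], node['right']):
        if child is not None:
            out.extend(collect_leaves(child))
    return out

def query(node, region, results):
    if not intersects(region, node['aabb']):
        return                                   # step 425: no intersection, skip subtree
    if contains(region, node['aabb']):
        results.extend(collect_leaves(node))     # steps 430/435: short-circuit to leaves
        return
    if node['left'] is None and node['right'] is None:
        return  # a leaf AABB is a point, so partial overlap cannot occur here
    for child in (node['left'], node['right']):  # steps 440-445: recurse into children
        if child is not None:
            query(child, region, results)
```

Run against a two-leaf tree, a region covering only the left leaf returns exactly that leaf's point, mirroring the FIG. 5 walkthrough.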
For example, FIG. 5 shows illustrative results 500 obtained in performing the operations set forth in FIG. 4. As shown, query region 505 is applied to root node 510 having children nodes 515-1 through 515-7. Query region 505 partially intersects the bounding box of the full data set (i.e., the AABB of the root node), so the children are processed. Further, query region 505 does not intersect the AABB of the left child (i.e., x0=0), so this node can be skipped, and query region 505 partially intersects the AABB of the right child (x0=1), so its children nodes are processed. As shown, the left child (i.e., x0=1, y0=0, x1=0, y1=0) is a leaf node within the query region, so this node is enumerated as a result of the query. Query region 505 does not intersect the AABB of the right child (x0=1, y0=1), so this node is skipped. Therefore, this illustrative query returns just one node (i.e., x=10, y=00).
As detailed above, the various embodiments herein can be embodied in the form of methods and apparatuses for practicing those methods. The disclosed methods may be performed by a combination of hardware, software, firmware, middleware, and computer-readable medium (collectively "computer") installed in and/or communicatively connected to a user device. FIG. 6 is a high-level block diagram of an exemplary computer 600 that may be used for implementing point cloud rendering for real-time point cloud data collection from a variety of sensor types in accordance with the various embodiments herein. Computer 600 comprises a processor 610 operatively coupled to a data storage device 620 and a memory 630. Processor 610 controls the overall operation of computer 600 by executing computer program instructions that define such operations. Communications bus 660 facilitates the coupling and communication between the various components of computer 600. Of course, computer 600 may be any type of computing device such as a computer, tablet, server, mobile device, or smart phone, to name just a few. The computer program instructions may be stored in data storage device 620, or a non-transitory computer readable medium, and loaded into memory 630 when execution of the computer program instructions is desired.
Thus, the steps of the disclosed methods (see, e.g., FIGS. 1, 4, and 7, and the associated discussion hereinabove) can be defined by the computer program instructions stored in memory 630 and/or data storage device 620 and controlled by processor 610 executing the computer program instructions. For example, the computer program instructions can be implemented as computer executable code programmed by one skilled in the art to perform the illustrative operations defined by the disclosed methods. Accordingly, by executing the computer program instructions, processor 610 executes an algorithm defined by the disclosed methods. Computer 600 also includes one or more communication interfaces 650 for communicating with other devices via a network (e.g., a wireless communications network) or communications protocol (e.g., Bluetooth®). For example, such communication interfaces may be a receiver, transceiver or modem for exchanging wired or wireless communications in any number of well-known fashions. Computer 600 also includes one or more input/output devices 640 that enable user interaction with the user device (e.g., camera, display, keyboard, mouse, speakers, microphone, buttons, etc.).
Processor 610 may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer 600. Processor 610 may comprise one or more central processing units (CPUs), for example. Processor 610, data storage device 620, and/or memory 630 may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs).
Data storage device 620 and memory 630 each comprise a tangible non-transitory computer readable storage medium. Data storage device 620, and memory 630, may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices.
Input/output devices 640 may include peripherals, such as a camera, printer, scanner, display screen, etc. For example, input/output devices 640 may include a display device such as a cathode ray tube (CRT), plasma or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer 600.
In accordance with various embodiments, a point cloud filter method and apparatus for use in point cloud rendering from real-time point cloud data collection (e.g., as detailed herein above) from a variety of sensor types is provided that delivers enhanced performance including reducing processing requirements, limiting local memory consumption, and optimizing overall data visualization.
More particularly, in accordance with an embodiment, a point cloud filter is employed to smooth and/or resample collected point cloud data (e.g., as detailed herein above) in order to smooth the associated point cloud thereby increasing precision while preserving the details and features from the original point cloud data set. In accordance with the embodiment, flat surfaces defined in the point cloud data set are detected and adjustments are made in individual point position (e.g., towards the associated surface) in order to reduce the overall noise level in the collected point cloud data.
In the event of a particularly dense collected point cloud data set, in accordance with an embodiment, an estimation is made of the overall feature amount in the point cloud data set and resampling is applied to those particular areas requiring less feature definition (e.g., planar surfaces) in order to preserve the original density for areas with increased feature definition. In the embodiment, at least three parameters are utilized in the removal of redundant data points: (i) Sigma (σ): defined as the size of the Gaussian filter aperture. All feature details smaller than σ will be treated as noise and removed. The larger the σ value, the larger the smoothing and, illustratively, optimal σ values for the subject point cloud filter are in the range of 0.05-0.2 meters; (ii) feature preservation: for very large objects (i.e., larger than the specific σ value), sharp edges are detected and preserved from smoothing. The overall feature preservation applied is governed by a specified feature parameter in the range of 1 to 1000 (where 1 means the entire data set is smoothed, and 1000 means no smoothing is applied whatsoever and everything in the point cloud data set is a feature). Illustratively, in accordance with an embodiment, optimal feature preservation values are in the range of 10 to 16; and (iii) caching type: this parameter dictates whether filter caching will be applied, which translates to the overall processing speed (i.e., either precise or fast) applied for feature detection in the point cloud filter. In accordance with the embodiments herein, two computational speeds are facilitated, defined as "precise" and "fast", respectively, wherein the fast speed applies filter caching and the precise speed does not apply any such caching.
FIG. 7 shows a flowchart of illustrative operations 700 for point cloud filtering in accordance with an embodiment. The operations encompass both of the aforementioned precise and fast embodiments, which are differentiated, in part, by the application of the filter cache, as will be detailed below. At step 705, the point cloud data set is received and, at step 710, redundant points are removed. As detailed above, at least three parameters (i.e., σ, feature preservation, and caching) are defined in order to facilitate the removal of redundant points, and these parameters may be, illustratively, user-defined or automatically assigned by the filtering system/processor. At step 715, a determination is made as to whether caching will be applied. In accordance with an embodiment, caching may be applied to accelerate the overall processing speed of the filtering operations (i.e., the fast speed embodiment) or not applied for more precision (i.e., the precise speed embodiment). If caching is to be applied, the data points are stored in a filter cache, at step 720, and are processed until the full point cloud data set is processed.
At step 725, the filtering operations continue such that for every point Pi in the point cloud, all points Pj that belong to some area (A) around point Pi are processed. In particular, area A is defined as a 4σ*4σ*4σ cube in well-known earth-centered, earth-fixed (ECEF) coordinates. Next, at step 730, well-known principal component analysis (PCA) is applied to identify a covariance matrix S defined as:
S = [ Sxx  Sxy  Sxz
      Sxy  Syy  Syz
      Sxz  Syz  Szz ]

where:

Sx = (1/N) Σ_{Pj∈A} Pj,x,  Sy = (1/N) Σ_{Pj∈A} Pj,y,  Sz = (1/N) Σ_{Pj∈A} Pj,z

Sxx = (1/N) Σ_{Pj∈A} Pj,x² − Sx²,  Syy = (1/N) Σ_{Pj∈A} Pj,y² − Sy²,  Szz = (1/N) Σ_{Pj∈A} Pj,z² − Sz²

Sxy = (1/N) Σ_{Pj∈A} Pj,x Pj,y − Sx Sy,  Sxz = (1/N) Σ_{Pj∈A} Pj,x Pj,z − Sx Sz,  Syz = (1/N) Σ_{Pj∈A} Pj,y Pj,z − Sy Sz

and Pj,x, Pj,y, Pj,z are the x, y, z coordinates of Pj.
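The matrix S is simply the biased sample covariance of the neighbor coordinates, since (1/N)·Σ(Pj − mean)(Pj − mean)ᵀ expands term-by-term to the component formulas above (e.g., Sxx = (1/N)·Σ Pj,x² − Sx²). A brief NumPy sketch (the function name is illustrative):

```python
import numpy as np

def covariance_matrix(neighbors):
    """Step 730: covariance matrix S over the points Pj in area A.
    Equivalent to the per-component formulas in the text."""
    P = np.asarray(neighbors, dtype=float)  # shape (N, 3)
    mean = P.mean(axis=0)                   # [Sx, Sy, Sz]
    centered = P - mean
    return centered.T @ centered / len(P)   # symmetric 3x3 matrix S
```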
At step 735, using the covariance matrix S, eigenvalues λ1, λ2, λ3 (λ1≥λ2≥λ3) and corresponding eigenvectors v1, v2, v3 are determined. For example, FIG. 8 shows an illustrative point cloud 800 filtered in accordance with an embodiment. As shown, point cloud 800 is filtered, as detailed above, such that for every point Pi in the point cloud, all points Pj 810-1 that belong to area (A) 805 around point Pi 810-2 are processed with eigenvalues 815-1, 815-2 and 815-3, and eigenvectors 820-1, 820-2 and 820-3, respectively. In this way, FIG. 8 illustrates a two-dimensional projection whereby v2 is perpendicular to the other respective eigenvectors (i.e., v1 and v3). At step 740, Gaussian filtration (a well-known technique) for point Pi 810-2 is performed in order to find position correction vector Fi as follows:
Fi = Σ_{Pj∈A} PiPj wij / Σj wij,  where wij = exp(−(PiPj · PiPj) / (2σ²))
At step 745, filtering is performed only in the direction perpendicular to the surface so that in-plane point positions are left intact. Thus, a new correction Gi is found as follows:
Gi = (Fi · v3) v3
At step 750, a determination is made as to whether the points of point cloud 800 in area A 805 represent a surface and, if so, at step 755, a surface detection function f(λ1, λ2, λ3) is applied such that 0 ≤ f(λ1, λ2, λ3) ≤ 1, with f(λ1, λ2, λ3) = 1 when the points in area A 805 represent a surface and f(λ1, λ2, λ3) = 0 when they do not. Illustratively, the surface detection function is given by:
f123)=1−(αλ32)2 capped to [0,1]
At step 760, a correction factor to Pi is applied to the extent allowed by f(λ1, λ2, λ3) to determine a corrected point P′i as follows:
P′i = Pi + f(λ1, λ2, λ3)·Gi
At step 765, the filtered point cloud data set results are output.
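Steps 730-760 can be sketched end-to-end for a single point. This is a hedged illustration, not the patent's implementation: the function name, the default sigma, and the coefficient `alpha` in the surface-detection function are assumptions, and NumPy's `eigh` stands in for whatever eigen-solver an implementation would use.

```python
import numpy as np

def filter_point(p_i, neighbors, sigma=0.1, alpha=1.0):
    """Sketch of steps 730-760 for one point: PCA, Gaussian correction F_i,
    out-of-plane projection G_i, surface gate f, corrected point P'_i."""
    P = np.asarray(neighbors, dtype=float)
    p = np.asarray(p_i, dtype=float)
    # Steps 730-735: covariance matrix and eigen-decomposition.
    S = np.cov(P.T, bias=True)
    vals, vecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    l2, l3 = vals[1], vals[0]        # lambda2 >= lambda3
    v3 = vecs[:, 0]                  # eigenvector of the smallest eigenvalue (surface normal)
    # Step 740: Gaussian filtration -> position correction vector F_i.
    d = P - p                        # vectors PiPj
    w = np.exp(-(d * d).sum(axis=1) / (2 * sigma ** 2))
    F = (d * w[:, None]).sum(axis=0) / w.sum()
    # Step 745: keep only the component perpendicular to the surface.
    G = (F @ v3) * v3
    # Steps 750-755: surface-detection function, capped to [0, 1].
    f = min(max(1.0 - (alpha * l3 / l2) ** 2, 0.0), 1.0) if l2 > 0 else 0.0
    # Step 760: corrected point P'_i.
    return p + f * G
```

For points sampled from a plane, λ3 ≈ 0 gives f ≈ 1 and the point is pulled fully onto the plane; near a sharp edge λ3 grows relative to λ2, f drops toward 0, and the feature is preserved.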
In accordance with the embodiment set forth in FIG. 7, for every point Pi in the point cloud, this point's neighbors Pj within a 4σ*4σ*4σ cube are found. The operations in FIG. 7 are, illustratively, conducted in a single pass through the collected data points of the point cloud, with S and Fi calculated for each point. The overall complexity of these operations is given by O(N*Mavg), where N is the total number of points in the point cloud and Mavg is the average number of points per query. Assuming the point cloud comprises mainly surfaces (e.g., terrain, buildings, etc.) with a density given in points/m², the overall complexity is O(N*Density*σ²). As such, processing may become slower with the square of the sigma parameter setting (i.e., for deep filtration). It will also be noted that this is just the complexity of the filtration itself; the complexity of the point cloud database random access is different (O(N*log N)). As will be appreciated, the complexity of any filtration operation that uses a binary search tree (BST) cannot be less than the full enumeration of the BST, which is O(N*log N), where N is the total number of points in the point cloud and log N is the average complexity of BST access. In this way, O(N*log N) represents the theoretical maximum for filtration speed.
As such, the complexity associated with the "precise" speed embodiment herein is the complexity of calculating S and Fi, which is O(N*Mavg), where Mavg is the average number of points per query (at steps 730 and 740, as detailed above). For example, if the point cloud mainly comprises surfaces (e.g., terrain, buildings, etc.) with a "Density" given in points/m² and a total area A in m², then the overall complexity is O(N*Density*σ²) = O(N²·σ²/A). As such, the complexity may become O(N²) if the density increases at a constant area, which is significantly slower than the theoretical maximum O(N*log N).
As noted above, processing speed may be varied, in accordance with an embodiment, by employing filter caching (i.e., the fast speed) such that the filtering operations build several grids on top of the filter cache every time the cache is reloaded. In this way, all the point calculations detailed above use only these several grids, such that the overall complexity of the filtering operations does not depend upon the total number of points in the filter cache and the complexity approaches O(N). To reduce the filtration operational complexity and increase filtration speed for dense data sets, the "fast" speed embodiment is used, which processes points from the data set in batches. Each batch takes the points that fall into a box and builds three (3) different grids on top of the box (i.e., the initial box plus 2σ on all sides). All per-point calculations then use only these grids, so the overall complexity does not depend on the number of points in the filter cache and is thus close to O(N).
In the fast speed embodiment, the 3 grids are defined as: big, small, and Gauss. The big and small grids are used to store principal component analysis results, and the Gauss grid is used for Gauss filter calculations. The big and small grids have 9*9*9 cells, where each cell represents a box of 4σ*4σ*4σ (i.e., big grid) or 2σ*2σ*2σ (i.e., small grid) centered around (x0+i*σ, y0+j*σ, z0+k*σ), where (x0, y0, z0) is the center of the filter cache and i, j, k are integers ranging from −4 to +4. The boxes for the grid cells overlap, and when the filter cache is loaded, principal component analysis (i.e., calculating the S matrix, eigenvectors, and eigenvalues) is performed, with the eigenvalues and eigenvectors stored separately for each cell in these grids; an illustrative example is shown in FIG. 9. As shown in FIG. 9, for point cloud 900 the three principal component grids have overlapping boxes (i.e., boxes 910, 920, and 930) with eigenvalues (i.e., eigenvalues 940 and 950) and eigenvectors (i.e., eigenvectors 960 and 970) stored from the principal component analysis.
The Gauss grid has 24*24*24 cells, ½σ*½σ*½σ each, with no overlaps, and covers the whole 12σ*12σ*12σ of the filter cache. FIG. 10 shows an illustrative Gauss grid 1040 for a point cloud in accordance with an embodiment. Each cell (i.e., cells 1005, 1010, 1015, 1020) stores the number m of points of the point cloud (i.e., points 1030-1, 1030-2, 1030-3, 1030-N) that fall in the cell and their center of mass (i.e., centers of mass 1050-1, 1050-2, 1050-3, and 1050-4), as shown in FIG. 10. Calculating the correction vector Fi proceeds as detailed above (see steps 735-740 in FIG. 7), but in this embodiment the centers of mass 1050-1, 1050-2, 1050-3, and 1050-4 from Gauss grid 1040 are utilized instead of the actual points. As such, the Gaussian filter calculation always takes a number of steps defined by 7*7*7, regardless of the Density and Sigma parameters. For example, the correction vector Fi calculation is given by:
$$F_i = \frac{\sum_{P_j \in A} \overrightarrow{P_i P_j}\, w_{ij}}{\sum_j w_{ij}}, \qquad \text{where } w_{ij} = m \cdot \exp\!\left(-\frac{\overrightarrow{P_i P_j} \cdot \overrightarrow{P_i P_j}}{2\sigma^2}\right)$$
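Assuming the per-cell point counts m and centers of mass from the Gauss grid are already available, the correction vector above might be computed as follows (a sketch; the names are illustrative):

```python
import numpy as np

def correction_vector(p, cell_masses, cell_centers, sigma):
    """Correction vector F_i for point p, using Gauss-grid cells in
    place of raw neighbors: each cell contributes its center of mass,
    weighted by its point count m and a Gaussian falloff.

    cell_masses  : (M,) point counts per contributing cell
    cell_centers : (M, 3) centers of mass per contributing cell
    """
    d = cell_centers - p                       # vectors P_i -> P_j
    w = cell_masses * np.exp(-np.sum(d * d, axis=1) / (2 * sigma**2))
    return (w[:, None] * d).sum(axis=0) / w.sum()
```

Because only the fixed 7*7*7 neighborhood of cells contributes, the cost per point is constant regardless of how many raw points fall in those cells.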
In instances where determining a surface detection function f(λ1, λ2, λ3) for filtration is desired, the smaller of the aforementioned stored values from the small and big grids is used. In special cases where too few points are available in a grid cell for reliable analysis, the following is applied: if the small grid cell has less than twelve (12) points, then f(λ1, λ2, λ3) from the big grid is used; or if the big grid cell has less than twenty (20) points, then f(λ1, λ2, λ3)=0 and no filtration occurs. As will be appreciated, the small grids help to detect smaller objects (e.g., objects that could be lost in the big grid if the rest of the surface is perfectly flat), and the big grids help with large objects having many points that can disturb a flat surface even from farther away, such that the small grid does not see the object at all. In accordance with the fast speed embodiment, the points in the input filter cache are enumerated one time to build the respective grids. As such, all grids are built for the full point cloud by enumerating approximately (12*12*12/(8*8*8))*N = 3.375*N points. Further, filtration also takes a fixed number of steps per point and, as mentioned above, O(N) defines the complexity of the filtration itself, while the complexity of point cloud database random access is at least O(N*log N).
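The fallback rules above admit the following sketch, under one reading of the text. The choice of f and the (eigenvalues, point_count) tuple layout are illustrative assumptions:

```python
def surface_detection(f, small_cell, big_cell):
    """Apply the sparse-cell fallback rules.

    f            : maps a cell's three eigenvalues to the surface value
    small_cell   : (eigenvalues, point_count) from the small grid, or None
    big_cell     : (eigenvalues, point_count) from the big grid, or None
    """
    small_ok = small_cell is not None and small_cell[1] >= 12
    big_ok = big_cell is not None and big_cell[1] >= 20
    if small_ok and big_ok:
        # both cells are reliable: use the smaller of the two stored values
        return min(f(small_cell[0]), f(big_cell[0]))
    if big_ok:
        # small cell too sparse (< 12 points): fall back to the big grid
        return f(big_cell[0])
    # big cell too sparse (< 20 points): f = 0, no filtration here
    return 0.0
```

Here f is left as a parameter because the section does not fix its form; a common choice for surface detection is the smallest eigenvalue.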
It should be noted that for clarity of explanation, the illustrative embodiments described herein may be presented as comprising individual functional blocks or combinations of functional blocks. The functions these blocks represent may be provided through the use of either dedicated or shared hardware, including, but not limited to, hardware capable of executing software. Illustrative embodiments may comprise digital signal processor (“DSP”) hardware and/or software performing the operation described herein. Thus, for example, it will be appreciated by those skilled in the art that the block diagrams herein represent conceptual views of illustrative functions, operations and/or circuitry of the principles described in the various embodiments herein. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo code, program code and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer, machine or processor, whether or not such computer, machine or processor is explicitly shown. One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and that a high level representation of some of the components of such a computer is for illustrative purposes.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.

Claims (15)

What is claimed is:
1. A method comprising:
receiving a point cloud data set, the data set comprising individual points Pi with each individual point Pi represented by a respective set of n floating-point coordinates;
for each individual point Pi of the point cloud data set:
translating the respective set of n floating-point coordinates to an origin;
encoding and shifting the respective set of n floating-point coordinates as unsigned integer coordinates;
scrambling the unsigned integer coordinates;
lexicographically sorting the scrambled unsigned integer coordinates; and
constructing, using the lexicographically sorted scrambled unsigned integer coordinates for each individual point Pi of the point cloud data set, a prefix tree representative of the point cloud data set.
2. The method of claim 1 further comprising:
outputting the prefix tree.
3. The method of claim 1 wherein the constructed prefix tree is a binary space partitioning (BSP) tree.
4. The method of claim 1 wherein the respective set of n floating-point coordinates are defined as:

$\{[x_0^{float}, y_0^{float}, z_0^{float}]\ \ldots\ [x_n^{float}, y_n^{float}, z_n^{float}]\}$.
5. The method of claim 3 wherein the point cloud data set is representative of a three-dimensional point cloud collected using a variety of sensors.
6. The method of claim 5 wherein the point cloud data set is generated from a Light Detection and Ranging (LIDAR) system.
7. The method of claim 6 wherein the point cloud data set represents at least one surface which is a terrain surface.
8. The method of claim 1 further comprising:
performing a spatial query using the prefix tree.
9. The method of claim 8 wherein performing a spatial query using the prefix tree further comprises:
determining whether there is an intersection between a query region with an axis-aligned bounding box (AABB) of a root node from the prefix tree.
10. An apparatus for rendering a three-dimensional point cloud data set, the apparatus comprising:
a processor;
a memory to store computer program instructions, the computer program instructions when executed on the processor cause the processor to perform operations comprising:
receiving a point cloud data set, the data set comprising individual points Pi with each individual point Pi represented by a respective set of n floating-point coordinates;
for each individual point Pi of the point cloud data set:
translating the respective set of n floating-point coordinates to an origin;
encoding and shifting the respective set of n floating-point coordinates as unsigned integer coordinates;
scrambling the unsigned integer coordinates;
lexicographically sorting the scrambled unsigned integer coordinates; and
constructing, using the lexicographically sorted scrambled unsigned integer coordinates for each individual point Pi of the point cloud data set, a prefix tree representative of the point cloud data set.
11. The apparatus of claim 10 wherein the operations further comprise:
outputting the prefix tree.
12. The apparatus of claim 10 wherein the constructed prefix tree is a binary space partitioning (BSP) tree.
13. The apparatus of claim 10 wherein the operations further comprise:
performing a spatial query using the prefix tree.
14. The apparatus of claim 13 wherein performing a spatial query using the prefix tree further comprises:
determining whether there is an intersection between a query region with an axis-aligned bounding box (AABB) of a root node from the prefix tree.
15. The apparatus of claim 10 wherein the three-dimensional point cloud data set is representative of a three-dimensional point cloud collected using a variety of sensors.
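As an illustration only (not part of the claims), the steps of claim 1 might be sketched as follows. The claim does not fix the exact scrambling scheme, so a Morton-style bit interleave is assumed here, and the prefix-tree construction over the sorted codes is omitted:

```python
def morton_scramble(x, y, z, bits=21):
    """Interleave the bits of three unsigned integers (one plausible
    'scrambling'; the claim does not specify the exact scheme)."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b + 2)
    return code

def index_points(points, scale=1000.0):
    """Sketch of the claimed steps: translate the floating-point
    coordinates to the origin, encode and shift them as unsigned
    integers, scramble, and sort lexicographically. The sorted codes
    are the keys from which a prefix tree can then be built."""
    ox = min(p[0] for p in points)   # translation to the origin
    oy = min(p[1] for p in points)
    oz = min(p[2] for p in points)
    codes = []
    for x, y, z in points:
        ux = int((x - ox) * scale)   # shift + encode as unsigned int
        uy = int((y - oy) * scale)
        uz = int((z - oz) * scale)
        codes.append(morton_scramble(ux, uy, uz))
    return sorted(codes)             # lexicographic order of integers
```

Sorting the interleaved codes places spatially nearby points close together in the ordering, which is what makes a prefix tree over the codes an effective spatial index.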

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/988,380 US10776111B2 (en) 2017-07-12 2018-05-24 Point cloud data method and apparatus
EP18752895.5A EP3652706A2 (en) 2017-07-12 2018-07-09 Point cloud data method and apparatus
PCT/US2018/041181 WO2019014078A2 (en) 2017-07-12 2018-07-09 Point cloud data method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762531495P 2017-07-12 2017-07-12
US15/988,380 US10776111B2 (en) 2017-07-12 2018-05-24 Point cloud data method and apparatus

Publications (2)

Publication Number Publication Date
US20190018680A1 US20190018680A1 (en) 2019-01-17
US10776111B2 true US10776111B2 (en) 2020-09-15

Family

ID=64999638

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/988,380 Active 2038-11-04 US10776111B2 (en) 2017-07-12 2018-05-24 Point cloud data method and apparatus

Country Status (3)

Country Link
US (1) US10776111B2 (en)
EP (1) EP3652706A2 (en)
WO (1) WO2019014078A2 (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10776111B2 (en) * 2017-07-12 2020-09-15 Topcon Positioning Systems, Inc. Point cloud data method and apparatus
US10424083B2 (en) * 2017-10-21 2019-09-24 Samsung Electronics Co., Ltd. Point cloud compression using hybrid transforms
WO2020189891A1 (en) * 2019-03-15 2020-09-24 엘지전자 주식회사 Point cloud data transmission apparatus, point cloud data transmission method, point cloud data reception apparatus, and point cloud data reception method
CN110211219A (en) * 2019-04-18 2019-09-06 广东满天星云信息技术有限公司 A kind of processing method of mass cloud data
EP3742398A1 (en) 2019-05-22 2020-11-25 Bentley Systems, Inc. Determining one or more scanner positions in a point cloud
CN112017202B (en) * 2019-05-28 2024-06-14 杭州海康威视数字技术股份有限公司 Point cloud labeling method, device and system
CN113906681B (en) * 2019-09-12 2022-10-18 深圳市大疆创新科技有限公司 Point cloud data encoding and decoding method, system and storage medium
EP3825730A1 (en) * 2019-11-21 2021-05-26 Bentley Systems, Incorporated Assigning each point of a point cloud to a scanner position of a plurality of different scanner positions in a point cloud
US11816798B1 (en) * 2020-03-17 2023-11-14 Apple Inc. 3D surface representation refinement
CN112514397A (en) * 2020-03-31 2021-03-16 深圳市大疆创新科技有限公司 Point cloud encoding and decoding method and device
CN111625093B (en) * 2020-05-19 2023-08-01 昆明埃舍尔科技有限公司 Dynamic scheduling display method of massive digital point cloud data in MR (magnetic resonance) glasses
US11615556B2 (en) * 2020-06-03 2023-03-28 Tencent America LLC Context modeling of occupancy coding for point cloud coding
CN112068153B (en) * 2020-08-24 2022-07-29 电子科技大学 Crown clearance rate estimation method based on foundation laser radar point cloud


Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6608913B1 (en) 2000-07-17 2003-08-19 Inco Limited Self-contained mapping and positioning system utilizing point cloud data
US20050043916A1 (en) 2003-08-20 2005-02-24 Hon Hai Precision Industry Co., Ltd. Point cloud data importing system and method
US20050203930A1 (en) * 2004-03-10 2005-09-15 Bukowski Richard W. System and method for efficient storage and manipulation of extremely large amounts of scan data
US7804498B1 (en) * 2004-09-15 2010-09-28 Lewis N Graham Visualization and storage algorithms associated with processing point cloud data
US20090060345A1 (en) * 2007-08-30 2009-03-05 Leica Geosystems Ag Rapid, spatial-data viewing and manipulating including data partition and indexing
US8290305B2 (en) 2009-02-13 2012-10-16 Harris Corporation Registration of 3D point cloud data to 2D electro-optical image data
US20110115812A1 (en) 2009-11-13 2011-05-19 Harris Corporation Method for colorization of point cloud data based on radiometric imagery
US20130202197A1 (en) * 2010-06-11 2013-08-08 Edmund Cochrane Reeler System and Method for Manipulating Data Having Spatial Co-ordinates
US20120313944A1 (en) * 2011-06-08 2012-12-13 Pacific Data Images Llc Coherent out-of-core point-based global illumination
US8780112B2 (en) * 2011-06-08 2014-07-15 Pacific Data Images Llc Coherent out-of-core point-based global illumination
US20170123066A1 (en) 2011-12-21 2017-05-04 Robotic paradigm Systems LLC Apparatus, Systems and Methods for Point Cloud Generation and Constantly Tracking Position
US20170148210A1 (en) 2012-03-07 2017-05-25 Willow Garage, Inc. Point cloud data hierarchy
US20160358371A1 (en) * 2012-03-07 2016-12-08 Willow Garage, Inc. Point cloud data hierarchy
US20130235050A1 (en) 2012-03-09 2013-09-12 Nvidia Corporation Fully parallel construction of k-d trees, octrees, and quadtrees in a graphics processing unit
US20140198097A1 (en) * 2013-01-16 2014-07-17 Microsoft Corporation Continuous and dynamic level of detail for efficient point cloud object rendering
US20140229473A1 (en) * 2013-02-12 2014-08-14 Microsoft Corporation Determining documents that match a query
US20150125071A1 (en) * 2013-11-07 2015-05-07 Autodesk, Inc. Pre-segment point cloud data to run real-time shape extraction faster
US9633483B1 (en) * 2014-03-27 2017-04-25 Hrl Laboratories, Llc System for filtering, segmenting and recognizing objects in unconstrained environments
US20180122137A1 (en) * 2016-11-03 2018-05-03 Mitsubishi Electric Research Laboratories, Inc. Methods and Systems for Fast Resampling Method and Apparatus for Point Cloud Data
US20180323978A1 (en) * 2017-05-08 2018-11-08 International Business Machines Corporation Secure Distance Computations
US20190018730A1 (en) * 2017-07-12 2019-01-17 Topcon Positioning Systems, Inc. Point cloud filter method and apparatus
US20190018680A1 (en) * 2017-07-12 2019-01-17 Topcon Positioning Systems, Inc. Point cloud data method and apparatus
US10474524B2 (en) * 2017-07-12 2019-11-12 Topcon Positioning Systems, Inc. Point cloud filter method and apparatus
US20190310378A1 (en) * 2018-04-05 2019-10-10 Apex.AI, Inc. Efficient and scalable three-dimensional point cloud segmentation for navigation in autonomous vehicles
US20190319851A1 (en) * 2018-04-11 2019-10-17 Nvidia Corporation Fast multi-scale point cloud registration with a hierarchical gaussian mixture

Non-Patent Citations (17)

* Cited by examiner, † Cited by third party
Title
Alexa et al., "Point Set Surfaces", Visualization, 2001, VIS '01 Proceedings, IEEE, pp. 21-537, XP031172865.
Davoodi et al., "Succinct Representations of Binary Trees for Range Minimum Queries", International Computing and Combinatorics Conference, 2012, pp. 396-407.
Devore et al., "Processing Terrain Point Cloud Data", SIAM Journal on Imaging Sciences, 2013, vol. 6., pp. 1-31, XP002786009.
International Search Report and Written Opinion dated Jan. 9, 2019, in connection with International Patent Application No. PCT/US2018/041181, 18 pgs.
International Search Report and Written Opinion dated Jan. 9, 2019, in connection with International Patent Application No. PCT/US2018/041183, 19 pgs.
Invitation to Pay Additional Fees, mailed on Nov. 7, 2018, in connection with International Patent Application No. PCT/US2018/041181.
Invitation to Pay Additional Fees, mailed on Nov. 7, 2018, in connection with International Patent Application No. PCT/US2018/041183.
Leal Narvaez et al., "Point Cloud Denoising Using Robust Principal Component Analysis", GRAPP-Proceedings of the First International Conference on Computer Graphics Theory and Applications, 2006, pp. 1-8, XP002786117.
Schon et al., "Octree-based Indexing for 3D Pointclouds within an Oracle Spatial DBMS", Computers & Geosciences 2013, vol. 51, pp. 430-438, XP002786008.
Valdivia et al., "Normal Correction Towards Smoothing Point-Based Surfaces", 2013 XXVI Conference on Graphics, Patterns and Images, 2013, pp. 187-194, XP032524052.
Yi, "The D-FCM Partitioned D-BSP Tree for Massive Point Cloud Data Access and Rendering", ISPRS Journal of Photogrammetry and Remote Sensing, 2016, vol. 120, pp. 25-36, XP029743900.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220019221A1 (en) * 2018-08-03 2022-01-20 GM Global Technology Operations LLC Autonomous vehicle controlled based upon a lidar data segmentation system
US11853061B2 (en) * 2018-08-03 2023-12-26 GM Global Technology Operations LLC Autonomous vehicle controlled based upon a lidar data segmentation system

Also Published As

Publication number Publication date
EP3652706A2 (en) 2020-05-20
US20190018680A1 (en) 2019-01-17
WO2019014078A3 (en) 2019-02-21
WO2019014078A2 (en) 2019-01-17


Legal Events

Date Code Title Description
AS Assignment

Owner name: TOPCON POSITIONING SYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DATKO, DMITRY;CHARAMISINAU, IVAN;BURENKOV, MICHAEL;SIGNING DATES FROM 20180511 TO 20180518;REEL/FRAME:045896/0485

Owner name: TOPCON POSITIONING SYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DATKO, DMITRY;BURENKOV, MICHAEL;SIGNING DATES FROM 20170918 TO 20170919;REEL/FRAME:045896/0409

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4