CN114549738A - Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium - Google Patents

Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium

Info

Publication number
CN114549738A
Authority
CN
China
Prior art keywords
point cloud
unmanned vehicle
map
dimensional
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210017511.9A
Other languages
Chinese (zh)
Inventor
何凯
俞成浦
邓方
孙健
文太平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Innovation Center of Beijing University of Technology
Original Assignee
Chongqing Innovation Center of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Innovation Center of Beijing University of Technology filed Critical Chongqing Innovation Center of Beijing University of Technology
Priority to CN202210017511.9A priority Critical patent/CN114549738A/en
Publication of CN114549738A publication Critical patent/CN114549738A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention provides an unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium, wherein the method comprises the following steps: the unmanned vehicle detects its surroundings with a laser radar and measures distance with a TOF algorithm, while a depth camera acquires depth images in real time; a rapidly-exploring random tree algorithm moves the unmanned vehicle for indoor exploration, and the depth images are classified by acquisition time to obtain a target image set; the above steps are repeated, the obtained target images are processed by the ORB-SLAM algorithm to obtain their pose information, and a three-dimensional sparse point cloud map is generated; PCL point cloud registration is carried out according to the three-dimensional sparse point cloud map to generate a three-dimensional dense point cloud map; and the three-dimensional dense point cloud map is optimized with a nonlinear global optimization algorithm to obtain the target three-dimensional dense point cloud map. The method runs the unmanned vehicle's autonomous indoor exploration and three-dimensional dense point cloud mapping in parallel, improves timeliness, and can be applied to indoor emergency rescue and disaster relief.

Description

Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, and in particular to a method, system, equipment and medium for real-time dense point cloud reconstruction by an unmanned vehicle indoors.
Background
A traditional unmanned vehicle operating outdoors relies on a satellite positioning system to obtain the vehicle's position for navigation and obstacle avoidance. Indoors, however, walls and roofs block satellite signals, so GPS positioning signals cannot be received effectively, and an indoor unmanned vehicle therefore cannot navigate and avoid obstacles autonomously based on them. In the prior art, to achieve indoor exploration by unmanned vehicles despite the lack of indoor GPS positioning signals, laser radar data is usually used to accurately sense three-dimensional information about the surrounding environment and to build a two-dimensional map of the indoor environment.
However, a two-dimensional map provides only planar data and can hardly describe the indoor three-dimensional environment, which greatly increases the workload and computational difficulty of subsequent obstacle avoidance and path planning. In addition, two-dimensional mapping and indoor exploration cannot proceed simultaneously: the indoor environment can only be explored after the two-dimensional map has been acquired. Timeliness is therefore poor, and the approach is difficult to apply in indoor emergency rescue and disaster relief environments.
Disclosure of Invention
In view of the above, it is necessary to provide a method, a system, an apparatus and a medium for reconstructing a dense point cloud in real time by an unmanned vehicle indoors.
An unmanned vehicle indoor real-time dense point cloud reconstruction method comprises the following steps: the unmanned vehicle detects indoor three-dimensional information through a laser radar, measures distance with a TOF algorithm, and acquires depth images in real time through a depth camera; a rapidly-exploring random tree algorithm moves the unmanned vehicle for indoor exploration, and the depth images are classified by acquisition time to obtain a target image set, wherein the target image set comprises a plurality of target images captured at the same moment; the above steps are repeated, the obtained target images are processed by the ORB-SLAM algorithm to obtain their pose information, and a three-dimensional sparse point cloud map is generated from the target images and the corresponding pose information; PCL point cloud registration is carried out according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map; and the indoor three-dimensional dense point cloud map is optimized with a nonlinear global optimization algorithm to obtain the target three-dimensional dense point cloud map.
In one embodiment, the unmanned vehicle travels in the direction of the potential field while conducting indoor exploration.
In one embodiment, when the unmanned vehicle moves in the same direction at a constant speed, the pose information of the current frame is estimated according to the pose information and the speed of the previous frame.
In one embodiment, the moving unmanned vehicle using the fast search random tree algorithm for indoor search specifically includes: taking the initial position of the unmanned vehicle as a root node, randomly sampling surrounding scenes, taking sampling points as leaf nodes, and generating a random expansion tree according to the root node and the leaf nodes; if the leaf node is located in a target area, obtaining a path from an initial position to a target position, wherein the target area is an indoor barrier-free area; and repeating random sampling to obtain a plurality of leaf nodes, obtaining a planned route of the unmanned vehicle indoors according to the root nodes and the leaf nodes, and exploring indoors by the unmanned vehicle according to the planned route.
In one embodiment, the calculating the obtained target images by the ORB-SLAM algorithm to obtain pose information of the target images and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information specifically includes: carrying out ORB feature extraction and matching on the target image, estimating the 3D positions of the matching points by triangulation, constructing an initial map from the target images and the 3D positions, optimizing the current-frame target image using multiple frames in the target image set, projecting the 3D positions in the initial map onto the current-frame target image, determining the corresponding matching points, further optimizing the pose of the current-frame target image by minimizing the reprojection error of the matching points, and acquiring optimized pose information and key frames; selecting a key frame, calculating the BOW relation of its feature points, updating the key frame connection relations according to the BOW relation, inserting the key frame into the map, verifying the added map points, generating new map points by triangulation, performing local BA optimization on adjacent key frames and the corresponding 3D points, eliminating redundant key frames, and obtaining target key frames; calculating the BOW scores between the target key frame and each co-visible key frame, finding closed-loop candidate frames among all key frames, checking the candidate frames for continuity, performing SIM3 optimization, searching for feature matches according to the optimization result, repeating the optimization, accepting the closed loop when the feature matching meets a preset requirement, fixing the closed-loop frame and the current frame, and performing global optimization to obtain the three-dimensional sparse point cloud map.
In one embodiment, the PCL point cloud registration according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map specifically includes: dividing the three-dimensional sparse point cloud map to form a plurality of point cloud clusters; calculating local feature descriptors and global feature descriptors of the point cloud clusters; performing feature level fusion on the local feature descriptors and the global feature descriptors of each point cloud cluster to obtain fusion feature vectors; calculating a corresponding relation by combining the similarity of the coordinate poses of the fused feature vectors in the three-dimensional sparse point cloud map and the features and positions between feature descriptions, and acquiring a plurality of corresponding point pairs; and performing rigid transformation according to the corresponding relation among the corresponding point pairs to complete registration and obtain the three-dimensional dense point cloud map.
A real-time dense point cloud reconstruction system in an unmanned vehicle room, comprising: the system comprises an unmanned vehicle, a laser radar and a depth camera which are carried on the unmanned vehicle, and a route planning module, a depth image classification module, an ORB-SLAM calculation module, a PCL point cloud registration module and a global optimization module which are arranged in the unmanned vehicle; the laser radar is used for ranging and obstacle avoidance to generate a two-dimensional scene, and the depth camera is used for acquiring a depth image of an indoor scene; the route planning module is used for planning a route by adopting a rapid exploration random tree algorithm, and the unmanned vehicle carries out indoor exploration according to the planned route; the depth image classification module is used for classifying the depth images according to moments to obtain a target image set, wherein the target image set comprises a plurality of target images at the same moment; the ORB-SLAM calculation module is used for calculating the obtained target images through an ORB-SLAM algorithm, obtaining pose information of the target images and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information; the PCL point cloud registration module is used for carrying out PCL point cloud registration according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map; the global optimization module is used for optimizing the indoor three-dimensional dense point cloud map by adopting a nonlinear global optimization algorithm to obtain a target three-dimensional dense point cloud map.
In one embodiment, the laser radar comprises a TOF ranging module, which is used to acquire the two-dimensional form of the indoor scene; the ORB-SLAM calculation module comprises a tracking unit, a mapping unit and a closed-loop unit; the tracking unit is used for carrying out ORB feature extraction and matching on the target images to obtain optimized pose information and key frames; the mapping unit is used for selecting a key frame, calculating the BOW relation of its feature points, updating the key frame connection relations according to the BOW relation, inserting the key frame into the map, verifying the added map points, generating new map points by triangulation, performing local BA optimization on adjacent key frames and the corresponding 3D points, eliminating redundant key frames, and sending the acquired target key frames to the closed-loop unit; and the closed-loop unit is used for calculating the BOW scores between the target key frame and each co-visible key frame, finding closed-loop candidate frames among all key frames, checking the candidate frames for continuity, performing SIM3 optimization, searching for feature matches according to the optimization result, repeating the optimization, accepting the closed loop when the feature matching meets a preset requirement, fixing the closed-loop frame and the current frame, and performing global optimization to obtain the three-dimensional sparse point cloud map.
An apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing, when executing the program, the steps of the unmanned vehicle indoor real-time dense point cloud reconstruction method described in the embodiments above.
A medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the unmanned vehicle indoor real-time dense point cloud reconstruction method described in the embodiments above.
Compared with the prior art, the invention has the advantages and beneficial effects that:
1. according to the method, the laser radar is matched with the RRT algorithm to quickly explore the indoor unknown area, and the monocular camera vision ORB-SLAM algorithm is adopted, so that the autonomous exploration of the indoor environment of the unmanned vehicle and the parallel of the point cloud dense mapping of the three-dimensional environment are realized, the timeliness is improved, and the method can be better applied to the field of indoor emergency rescue and disaster relief.
2. The invention is suitable for exploring the dangerous and severe indoor environment, has better space exploration capability, and can reduce the influence of the complex problems of unmanned vehicle type, freedom degree constraint and the like.
3. The invention adopts the laser radar to accurately sense the three-dimensional information of the indoor environment, the detection precision can reach centimeter level, the laser radar can accurately identify the specific outline and distance of the obstacle, and the effective detection distance is long.
4. The invention adopts the ORB-SLAM algorithm to reconstruct the three-dimensional sparse point cloud; it has strong environmental adaptability and high robustness, including wide-baseline loop closing and relocalization.
Drawings
FIG. 1 is a schematic flow chart of a method for reconstructing a dense point cloud in real time in an unmanned vehicle;
FIG. 2 is a schematic diagram of a system for reconstructing a dense point cloud in real time in an unmanned vehicle;
fig. 3 is a schematic diagram of the internal structure of the apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings by way of specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In one embodiment, as shown in fig. 1, there is provided a method for reconstructing a dense point cloud in real time in an unmanned vehicle room, comprising the following steps:
and S101, detecting indoor three-dimensional information by the unmanned vehicle through a laser radar, measuring distance by adopting a TOF algorithm, and acquiring a depth image in real time through a depth camera.
Specifically, when the unmanned vehicle enters an indoor environment, the laser radar is started at the same time and performs ranging with the TOF (Time-of-Flight) method: the laser radar emits laser beams to measure the distance to indoor objects, and the more beams are emitted, the more areas and details are perceived; by rotating the scan and receiving the reflected laser, the three-dimensional form of an area can be obtained.
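For illustration only, the TOF ranging arithmetic reduces to halving the product of the speed of light and the measured round-trip time. A minimal C++ sketch (the pulse timing value is a hypothetical example, not a measurement from the patent):

    #include <iostream>

    int main() {
        const double c = 299792458.0;           // speed of light in m/s
        double round_trip_s = 66.7e-9;          // hypothetical measured round-trip time (~66.7 ns)
        double range_m = c * round_trip_s / 2;  // halved because the pulse travels out and back
        std::cout << "range: " << range_m << " m\n";  // prints roughly 10 m
        return 0;
    }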
While the laser radar is started, the depth camera is also started and acquires the original indoor color images, so that indoor exploration and scene reconstruction can subsequently proceed in parallel.
When the depth camera collects depth images in real time, it can be moved and rotated by the unmanned vehicle, so that the indoor scene is photographed comprehensively and a plurality of depth images of it are obtained. The depth camera can also be mounted on a rotary platform fixed to the unmanned vehicle, which facilitates both the depth image acquisition and the vehicle's indoor exploration.
And S102, moving the unmanned vehicle by adopting a fast exploration random tree algorithm to perform indoor exploration, classifying the depth images according to the time to obtain a target image set, wherein the target image set comprises a plurality of target images at the same time.
Specifically, when the unmanned vehicle explores indoors, it is moved by the rapidly-exploring random tree algorithm, which expands paths quickly in a random-sampling manner; as the number of random sampling points and iterations grows, the obtained paths become increasingly optimized, enabling uniform and economical exploration and avoiding over-exploring one area while missing another. The Rapidly-exploring Random Tree (RRT) is a tree data structure and algorithm that is built incrementally and quickly reduces the distance between a randomly selected point and the tree, making it effective for searching non-convex, high-dimensional spaces.
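As a rough illustration of this incremental construction, the following sketch grows a two-dimensional RRT in a hypothetical 10 m x 10 m room; the obstacle test, room size, step length and goal bias are illustrative assumptions, not parameters taken from the patent:

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Node { double x, y; int parent; };

    // Placeholder obstacle test; a real system would query the lidar occupancy map.
    static bool collisionFree(double x, double y) {
        return !(x > 4.0 && x < 6.0 && y > 0.0 && y < 5.0);  // one hypothetical wall
    }

    std::vector<Node> buildRRT(double sx, double sy, double gx, double gy,
                               int maxIter = 5000, double step = 0.5) {
        std::vector<Node> tree{{sx, sy, -1}};  // root node at the start position
        for (int i = 0; i < maxIter; ++i) {
            // Sample uniformly in the assumed 10 m x 10 m room, biased to the goal 5% of the time.
            bool toGoal = (std::rand() % 100) < 5;
            double rx = toGoal ? gx : 10.0 * std::rand() / RAND_MAX;
            double ry = toGoal ? gy : 10.0 * std::rand() / RAND_MAX;
            int nearest = 0;                   // nearest tree node to the sample
            double best = 1e18;
            for (std::size_t j = 0; j < tree.size(); ++j) {
                double d = std::hypot(tree[j].x - rx, tree[j].y - ry);
                if (d < best) { best = d; nearest = static_cast<int>(j); }
            }
            if (best < 1e-9) continue;         // sample coincides with an existing node
            // Steer one step from the nearest node toward the sample; keep it if collision-free.
            double nx = tree[nearest].x + step * (rx - tree[nearest].x) / best;
            double ny = tree[nearest].y + step * (ry - tree[nearest].y) / best;
            if (!collisionFree(nx, ny)) continue;
            tree.push_back({nx, ny, nearest}); // new leaf node
            if (std::hypot(nx - gx, ny - gy) < step) break;  // leaf reached the target area
        }
        return tree;  // read a path by following parent indices back from the last node
    }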
Meanwhile, the acquired depth images are time-aligned and classified by acquisition time to obtain a target image set containing a plurality of target images captured at the same moment.
During indoor exploration, the unmanned vehicle advances along the potential field direction.
Specifically, the potential field method on its own has the disadvantage of easily falling into local minima during space search; advancing along the potential field direction while expanding the tree by random sampling enables better space exploration and reduces the influence of complications such as the unmanned vehicle's type and degree-of-freedom constraints.
And S103, repeating the step S101 and the step S102, calculating the obtained target images through an ORB-SLAM algorithm, obtaining the pose information of the target images, and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information.
Specifically, in the process of indoor exploration of the unmanned vehicle, the depth camera acquires a plurality of depth images, repeatedly classifies the depth images to acquire a plurality of target image sets, calculates target images in the target image sets through an ORB-SLAM algorithm to acquire pose information corresponding to the target images, and generates a three-dimensional sparse point cloud map of an indoor scene according to the pose information and the target images.
The ORB-SLAM algorithm comprises four parts: tracking, mapping, relocalization and loop closing. After a target image is input into the ORB-SLAM model, the tracking module extracts its ORB features, performs pose estimation or relocalization based on them, tracks the local map, judges whether the image is a key frame, and passes key frames to the mapping module. The mapping module inserts the key frames into the initial map, eliminates redundant map points, performs local optimization of the map, and eliminates redundant key frames. The loop detection module then detects candidate loops, performs the SIM3 computation, establishes loop constraints, and finally globally optimizes the initial map to obtain the three-dimensional sparse point cloud map.
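The feature front end of this pipeline can be sketched with OpenCV's ORB detector and a brute-force Hamming matcher. This is a hedged illustration of the ORB extraction and matching step only, not the full ORB-SLAM tracking module; the keypoint budget and distance threshold are assumed values:

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <vector>

    void matchOrbFeatures(const cv::Mat& prevGray, const cv::Mat& currGray,
                          std::vector<cv::DMatch>& goodMatches) {
        cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);   // keypoint budget: an assumed value
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(prevGray, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(currGray, cv::noArray(), kp2, desc2);

        cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // Hamming distance suits binary ORB descriptors
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        for (const cv::DMatch& m : matches)             // keep only close matches; 50 is illustrative
            if (m.distance < 50) goodMatches.push_back(m);
    }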
Particularly, when the unmanned vehicle moves at a uniform speed in the same direction, the pose information of the current frame is estimated according to the pose information and the speed of the previous frame.
Specifically, the unmanned vehicle enters the room and moves at a constant speed under the guidance of the laser radar to explore it. When the moving speed and direction are consistent, there is no large rotation, and the onboard depth camera is acquiring images, the pose of the current frame can be estimated from the pose and velocity of the previous frame. The velocity of the previous frame is in turn obtained from the poses of the preceding frames, and repeating this process forms the visual odometry of the depth camera.
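A minimal sketch of this constant-velocity prediction, assuming the poses are stored as world-from-camera Eigen isometries (the naming convention here is an assumption, not the patent's notation):

    #include <Eigen/Geometry>

    // T_prev, T_prev2: world-from-camera poses of frames k-1 and k-2.
    Eigen::Isometry3d predictPose(const Eigen::Isometry3d& T_prev,
                                  const Eigen::Isometry3d& T_prev2) {
        // Relative motion over the last frame interval, i.e. the "velocity".
        Eigen::Isometry3d velocity = T_prev * T_prev2.inverse();
        // Under the uniform-motion assumption, apply the same motion once more.
        return velocity * T_prev;
    }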
And S104, performing PCL point cloud registration according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map.
Specifically, after the depth images acquired by the depth camera are obtained, the target images captured at the same moment are selected, the ORB-SLAM algorithm computes the camera pose corresponding to each target image, and the PCL (Point Cloud Library) is used to splice the target images with their camera poses into point clouds, completing the registration and generating the three-dimensional dense point cloud map of the indoor scene.
PCL point cloud registration is to find a transformation relation of two point set spaces through three-dimensional data point sets of different coordinate systems, so that the two point sets can be unified into the same coordinate system, and therefore the relative position and direction of a target image obtained independently are found in a global coordinate frame, and the intersection areas of a plurality of target images are completely overlapped.
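One concrete way to realize such a registration with PCL is the iterative closest point (ICP) estimator sketched below; the point type, correspondence distance and iteration budget are example choices, and the patent does not state which registration variant it uses:

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/icp.h>

    using CloudT = pcl::PointCloud<pcl::PointXYZ>;

    // Align source onto target and return the estimated 4x4 rigid transform.
    Eigen::Matrix4f registerClouds(const CloudT::Ptr& source,
                                   const CloudT::Ptr& target,
                                   CloudT& aligned) {
        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(source);
        icp.setInputTarget(target);
        icp.setMaxCorrespondenceDistance(0.1);  // ignore pairs farther apart than 10 cm (tunable)
        icp.setMaximumIterations(50);           // iteration budget: an example choice
        icp.align(aligned);                     // iteratively estimates the rigid transform
        return icp.getFinalTransformation();
    }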
And S105, optimizing the indoor three-dimensional dense point cloud map by adopting a nonlinear global optimization algorithm to obtain a target three-dimensional dense point cloud map.
Specifically, a nonlinear global optimization algorithm such as BA (Bundle Adjustment) is adopted: from the acquired indoor three-dimensional dense point cloud map, the three-dimensional coordinates of each feature point in the world coordinate system are obtained, the pixel coordinates in the image are re-projected according to the camera model, and the reprojection errors over all pixel and feature points are minimized, achieving the optimization and yielding the target three-dimensional dense point cloud map.
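The reprojection-error term that BA minimizes can be written out directly. The sketch below assumes a pinhole camera with intrinsics fx, fy, cx, cy and a camera-from-world pose; the function name is ours, for illustration:

    #include <Eigen/Geometry>

    // Residual of one observation: project a world point through the camera and
    // compare with the observed pixel; BA minimizes the sum of squares of these.
    double reprojectionError(const Eigen::Vector3d& worldPoint,
                             const Eigen::Isometry3d& T_cam_world,  // camera-from-world pose
                             double fx, double fy, double cx, double cy,
                             const Eigen::Vector2d& observedPixel) {
        Eigen::Vector3d p = T_cam_world * worldPoint;    // point in camera coordinates
        Eigen::Vector2d projected(fx * p.x() / p.z() + cx,
                                  fy * p.y() / p.z() + cy);
        return (projected - observedPixel).norm();
    }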
In this embodiment, the unmanned vehicle detects indoor three-dimensional information through a laser radar, measures distance with a TOF algorithm, and acquires depth images in real time through a depth camera. The rapidly-exploring random tree algorithm moves the unmanned vehicle for indoor exploration, and the depth images are classified by acquisition time into target image sets, each containing a plurality of target images captured at the same moment. These steps are repeated; the obtained target images are processed by the ORB-SLAM algorithm to obtain their pose information, a three-dimensional sparse point cloud map is generated from the target images and the corresponding poses, PCL point cloud registration then produces the indoor three-dimensional dense point cloud map, and a nonlinear global optimization algorithm finally yields the target three-dimensional dense point cloud map. Autonomous exploration of the indoor environment thus runs in parallel with dense three-dimensional point cloud mapping, improving timeliness, so the method can be better applied to indoor emergency rescue and disaster relief.
Step S102 specifically includes: taking the initial position of the unmanned vehicle as the root node, randomly sampling the surrounding scene, taking the sampling points as leaf nodes, and generating a randomly expanding tree from the root node and leaf nodes; if a leaf node is located in the target area, i.e., an indoor obstacle-free area, obtaining a path from the initial position to the target position; and repeating the random sampling to obtain a plurality of leaf nodes, deriving the unmanned vehicle's planned indoor route from the root node and the leaf nodes, and exploring indoors according to the planned route.
Specifically, when the unmanned vehicle explores indoors using the rapidly-exploring random tree, its current initial position serves as the root node. The indoor scene is then randomly sampled and leaf nodes are added, generating a randomly expanding tree from the root node and leaf nodes. Whether a leaf node lies in an obstacle-free target area is checked; if so, a path is formed from the initial position to any target position in that area, and the unmanned vehicle drives along it from the initial position to the target area. Repeating this random sampling yields a plurality of leaf nodes, from which the vehicle's planned indoor route is derived. Exploring quickly along this route realizes autonomous indoor route planning for the unmanned vehicle, applicable to indoor rescue and disaster relief work.
Wherein, step S103 specifically includes: ORB feature extraction and matching are carried out on the target image, the 3D position of a matching point corresponding to the target image is estimated through triangulation, an initial map is built according to the target image and the 3D position, a current frame target image is optimized through a plurality of frames of images in a target image set, the 3D position in the initial map is projected to the current frame target image, the corresponding matching point is determined, the pose of the current frame target image is further optimized through minimizing the reprojection error of the matching point, and optimized pose information and a key frame are obtained; selecting a key frame, calculating the BOW relation of the feature points of the key frame, updating the connection relation of the key frame according to the BOW relation, inserting the key frame into an initial map, verifying the added map points, generating new map points by using a trigonometry method, performing local BA optimization on adjacent key frames and corresponding positions, eliminating redundant key frames and obtaining a target key frame; calculating BOW scores of the target key frames and each common-view key frame, finding out closed-loop alternative frames from all the target key frames, detecting the closed-loop alternative frames through continuity, carrying out SIM3 optimization, searching feature matching according to optimization, repeating the optimization, receiving a closed loop when the feature matching meets preset requirements, fixing the closed-loop frames and the current frame, carrying out global optimization, and obtaining a three-dimensional sparse point cloud map.
Specifically, the depth images captured at the same moment are placed into the same target image set and used as target images. ORB feature extraction and matching are carried out on the target images, the system is initialized to obtain the depth camera motion parameters and the 3D point cloud, the 3D positions of the landmark points are estimated by triangulation from the motion parameters, the initial map is built, and the pose information of the target images and the 3D positions are optimized by local BA optimization.
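For the triangulation step, OpenCV offers a direct linear solver; the sketch below assumes the two 3x4 projection matrices (intrinsics times pose) are already available from the motion estimate:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // P1, P2: 3x4 projection matrices (intrinsics times pose) of the two views.
    // pts1, pts2: matched pixel coordinates in the two frames.
    cv::Mat triangulateLandmarks(const cv::Mat& P1, const cv::Mat& P2,
                                 const std::vector<cv::Point2f>& pts1,
                                 const std::vector<cv::Point2f>& pts2) {
        cv::Mat points4D;  // 4xN homogeneous landmark coordinates
        cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
        cv::Mat points3D;  // Nx1, 3-channel: dehomogenized 3D positions
        cv::convertPointsFromHomogeneous(points4D.t(), points3D);
        return points3D;
    }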
If the previous frame is successfully tracked, predicting the pose of the current frame through the same motion model, projecting the map points in the previous frame to the current frame, performing feature matching in a small range, and performing pose estimation when the matching points meet the preset number; and if the matching points do not meet the preset number, enlarging the search range of the current frame, performing feature matching, performing pose estimation through enough matching points, and determining a key frame in the target image.
The current frame is further optimized through a multi-view target image, 3D points in the local map are projected to the current frame, corresponding matching points are found, and the pose of the current frame is further optimized through minimizing the reprojection error of the matching points.
Selecting a key frame, inserting the key frame into an initial map, triangulating a new map point formed after the key frame is inserted, and optimizing the pose of the key frame in the map and the 3D position of the map point by using local optimization. And strictly rejecting redundant key frames and 3D points, for example, rejecting the key frame when more than 90% of the 3D points in the key frame can be observed by at least 3 other co-view key frames. The common-view key frame represents a plurality of key frames which can observe the same map point, and a common-view relationship exists among the plurality of key frames.
An association is established between the current frame and key frames to reduce long-term accumulated error, and scale drift is corrected through similarity transformation. A loop is detected using the BoW (bag-of-words) scene recognition method, the SIM3 similarity transformation is computed, loop constraints are built on the initial map, and the essential graph is globally optimized based on the SIM3 transformation to obtain the three-dimensional sparse point cloud map.
SIM3 (similarity transformation) refers to solving the similarity transformation from 3 pairs of matching points, thereby recovering the rotation matrix, translation vector and scale between two coordinate systems.
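For illustration, a closed-form similarity transform between matched 3D point sets can be computed with Eigen's Umeyama solver, which recovers the same rotation, translation and scale that SIM3 estimation targets; ORB-SLAM's own iterative SIM3 solver differs, so this is only a stand-in:

    #include <Eigen/Geometry>

    // src and dst hold corresponding 3D points as columns (at least 3 non-degenerate pairs).
    // The returned 4x4 matrix has scale * rotation in its upper-left 3x3 block.
    Eigen::Matrix4d solveSimilarity(const Eigen::Matrix3Xd& src,
                                    const Eigen::Matrix3Xd& dst) {
        return Eigen::umeyama(src, dst, /*with_scaling=*/true);
    }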
Wherein, step S104 specifically includes: dividing the three-dimensional sparse point cloud map to form a plurality of point cloud clusters; calculating local feature descriptors and global feature descriptors of a plurality of point cloud clusters; performing feature level fusion on the local feature descriptors and the global feature descriptors of each point cloud cluster to obtain fusion feature vectors; calculating a corresponding relation by combining the similarity of the coordinate poses of the fused feature vectors in the three-dimensional sparse point cloud map and the features and positions between feature descriptions, and acquiring a plurality of corresponding point pairs; and performing rigid transformation according to the corresponding relation among the corresponding point pairs, finishing registration and obtaining the three-dimensional dense point cloud map.
Specifically, after the three-dimensional sparse point cloud map of the indoor scene is obtained, it is segmented into a plurality of point cloud clusters, and their local and global feature descriptors are computed. The point cloud clusters are compared pairwise: based on the similarity in features and positions of the two corresponding feature descriptors at their coordinate poses in the three-dimensional sparse point cloud map, the correspondence between two clusters is estimated, yielding preliminary corresponding point pairs. If noise arises during comparison, erroneous point pairs that would affect registration are removed; the rigid transformation is then estimated from the correspondence of the remaining correct point pairs, completing the registration and producing the three-dimensional dense point cloud map.
Correspondence estimation refers to finding similar features between the two sets of feature vectors extracted from the point cloud data in order to determine the overlapping region, after which registration can proceed; depending on the feature type, PCL searches for correspondences using different methods, such as exhaustive matching or KD-tree nearest-neighbor queries.
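A hedged sketch of this correspondence search with PCL, using KD-tree nearest-neighbor correspondence estimation followed by RANSAC-based rejection of inconsistent pairs (the inlier threshold is an example value):

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/correspondence_estimation.h>
    #include <pcl/registration/correspondence_rejection_sample_consensus.h>

    using CloudT = pcl::PointCloud<pcl::PointXYZ>;

    pcl::Correspondences findCorrespondences(const CloudT::Ptr& source,
                                             const CloudT::Ptr& target) {
        // Nearest-neighbor correspondence per source point via an internal KD-tree.
        pcl::registration::CorrespondenceEstimation<pcl::PointXYZ, pcl::PointXYZ> est;
        est.setInputSource(source);
        est.setInputTarget(target);
        pcl::Correspondences all;
        est.determineCorrespondences(all);

        // Reject pairs inconsistent with a single rigid transform before estimating it.
        pcl::registration::CorrespondenceRejectorSampleConsensus<pcl::PointXYZ> rejector;
        rejector.setInputSource(source);
        rejector.setInputTarget(target);
        rejector.setInlierThreshold(0.05);  // 5 cm tolerance, an illustrative value
        pcl::Correspondences inliers;
        rejector.getRemainingCorrespondences(all, inliers);
        return inliers;
    }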
As shown in fig. 2, there is provided a real-time dense point cloud reconstruction system 20 in an unmanned vehicle cabin, comprising: the system comprises an unmanned vehicle 21, a laser radar 22 and a depth camera 23 which are carried on the unmanned vehicle, and a route planning module 24, a depth image classification module 25, an ORB-SLAM calculation module 26, a PCL point cloud registration module 27 and a global optimization module 28 which are arranged in the unmanned vehicle 21, wherein:
the laser radar 22 is used for distance measurement and obstacle avoidance to generate a two-dimensional scene, and the depth camera 23 is used for acquiring a depth image of an indoor scene;
the route planning module 24 is used for planning a route by adopting a rapid exploration random tree algorithm, and the unmanned vehicle carries out indoor exploration according to the planned route;
the depth image classification module 25 is configured to classify the depth images according to time to obtain a target image set, where the target image set includes a plurality of target images at the same time;
the ORB-SLAM calculation module 26 is used for calculating the obtained target images through an ORB-SLAM algorithm, obtaining pose information of the target images and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information;
the PCL point cloud registration module 27 is used for carrying out PCL point cloud registration according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map;
and the global optimization module 28 is configured to optimize the indoor three-dimensional dense point cloud map by using a nonlinear global optimization algorithm to obtain a target three-dimensional dense point cloud map.
In one embodiment, the lidar 22 includes a TOF ranging module, through which a two-dimensional representation of the indoor scene is acquired; the ORB-SLAM calculation module 26 comprises a tracking unit, a mapping unit and a closed-loop unit, which are connected with one another. The tracking unit is configured to perform ORB feature extraction and matching on the target images to obtain optimized pose information and key frames. The mapping unit is configured to select key frames, calculate the BoW relation of their feature points, update the key frame connection relations accordingly, insert the key frames into the map, verify the added map points, generate new map points by triangulation, perform local BA optimization on adjacent key frames and the corresponding 3D points, eliminate redundant key frames, and send the target key frames to the closed-loop unit. The closed-loop unit is configured to calculate the BoW scores between a target key frame and each co-visible key frame, find closed-loop candidate frames among all key frames, check the candidates for continuity, perform SIM3 optimization, search for feature matches according to the optimization result, repeat the optimization, accept the closed loop when the feature matching meets a preset requirement, fix the closed-loop frame and the current frame, and perform global optimization to obtain the three-dimensional sparse point cloud map.
In one embodiment, the route planning module 24 is specifically configured to: taking the initial position of the unmanned vehicle as a root node, randomly sampling surrounding scenes, taking sampling points as leaf nodes, and generating a random expansion tree according to the root node and the leaf nodes; if the leaf node is located in the target area, obtaining a path from the initial position to the target position, wherein the target area is an indoor barrier-free area; and repeating random sampling to obtain a plurality of leaf nodes, obtaining a planned route of the unmanned vehicle indoors according to the root node and the plurality of leaf nodes, and exploring the unmanned vehicle indoors according to the planned route.
In one embodiment, PCL point cloud registration module 27 is specifically configured to: dividing the three-dimensional sparse point cloud map to form a plurality of point cloud clusters; calculating local feature descriptors and global feature descriptors of a plurality of point cloud clusters; performing feature level fusion on the local feature descriptors and the global feature descriptors of each point cloud cluster to obtain fusion feature vectors; calculating a corresponding relation by combining the similarity of the coordinate poses of the fused feature vectors in the three-dimensional sparse point cloud map and the features and positions between feature descriptions, and acquiring a plurality of corresponding point pairs; and performing rigid transformation according to the corresponding relation among the corresponding point pairs, finishing registration and obtaining the three-dimensional dense point cloud map.
In one embodiment, a device is provided, which may be a server, the internal structure of which may be as shown in fig. 3. The device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the device is configured to provide computing and control capabilities. The memory of the device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the device is used for storing configuration templates and also can be used for storing target webpage data. The network interface of the device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for reconstructing a dense point cloud in real time in an unmanned vehicle.
Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with the present application and does not constitute a limitation on the devices to which the present application may be applied, and that a particular device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a medium may also be provided, the medium storing a computer program comprising program instructions which, when executed by a computer (which may be part of the above-mentioned unmanned vehicle indoor real-time dense point cloud reconstruction system), cause the computer to perform the method according to the preceding embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and optionally they may be implemented in program code executable by a computing device, such that they may be stored on a computer storage medium (ROM/RAM, magnetic disks, optical disks) and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is a more detailed description of the present invention that is presented in conjunction with specific embodiments, and the practice of the invention is not to be considered limited to those descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A real-time dense point cloud reconstruction method in an unmanned vehicle room is characterized by comprising the following steps:
the unmanned vehicle detects indoor three-dimensional information through a laser radar, adopts a TOF algorithm to measure distance, and acquires a depth image in real time through a depth camera;
moving an unmanned vehicle by adopting a rapid exploration random tree algorithm to perform indoor exploration, and classifying the depth images according to moments to obtain a target image set, wherein the target image set comprises a plurality of target images at the same moment;
repeating the steps, calculating the obtained target images through an ORB-SLAM algorithm, obtaining the pose information of the target images, and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information;
PCL point cloud registration is carried out according to the three-dimensional sparse point cloud map, and an indoor three-dimensional dense point cloud map is generated;
and optimizing the indoor three-dimensional dense point cloud map by adopting a nonlinear global optimization algorithm to obtain a target three-dimensional dense point cloud map.
2. The method for reconstructing the real-time dense point cloud in the unmanned vehicle room of claim 1, wherein the unmanned vehicle advances along a potential field direction when performing indoor exploration.
3. The method for reconstructing the dense point cloud in real time in the unmanned vehicle room as claimed in claim 1, wherein when the unmanned vehicle moves in the same direction and at a constant speed, the pose information of the current frame is estimated according to the pose information and the speed of the previous frame.
4. The method for reconstructing the real-time dense point cloud in the unmanned vehicle room according to claim 1, wherein the moving unmanned vehicle using the fast search random tree algorithm for indoor search specifically comprises:
taking the initial position of the unmanned vehicle as a root node, randomly sampling surrounding scenes, taking sampling points as leaf nodes, and generating a random expansion tree according to the root node and the leaf nodes;
if the leaf node is located in a target area, obtaining a path from an initial position to a target position, wherein the target area is an indoor barrier-free area;
and repeating random sampling to obtain a plurality of leaf nodes, obtaining a planned route of the unmanned vehicle indoors according to the root nodes and the leaf nodes, and exploring indoors by the unmanned vehicle according to the planned route.
5. The method for reconstructing the dense point cloud in real time in the unmanned vehicle room according to claim 1, wherein the step of calculating the obtained target images by an ORB-SLAM algorithm to obtain pose information of the target images and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information comprises the steps of:
carrying out ORB feature extraction and matching on the target image, estimating the 3D position of a matching point corresponding to the target image by triangulation, constructing an initial map according to the target image and the 3D position, optimizing a current frame target image by using a plurality of frames of images in a target image set, projecting the 3D position in the initial map to the current frame target image, determining a corresponding matching point, further optimizing the pose of the current frame target image by minimizing the reprojection error of the matching point, and acquiring optimized pose information and a key frame;
selecting a key frame, calculating the BOW relation of the feature points of the key frame, updating the connection relation of the key frame according to the BOW relation, inserting the key frame into a map, verifying the added map points, generating new map points by triangulation, performing local BA optimization on adjacent key frames and corresponding 3D points, eliminating redundant key frames, and acquiring a target key frame;
calculating BOW scores of the target key frame and each common-view key frame, finding out closed-loop candidate frames from all intermediate key frames, detecting the closed-loop candidate frames through continuity, carrying out SIM3 optimization, searching feature matching according to an optimization result, repeating the optimization, receiving a closed loop when the feature matching meets a preset requirement, fixing the closed-loop frame and the current frame, carrying out global optimization, and obtaining a three-dimensional sparse point cloud map.
6. The method for reconstructing the dense point cloud in the unmanned vehicle room in real time according to claim 1, wherein the PCL point cloud registration is performed according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map, and specifically comprises:
dividing the three-dimensional sparse point cloud map to form a plurality of point cloud clusters;
calculating local feature descriptors and global feature descriptors of the point cloud clusters;
performing feature level fusion on the local feature descriptors and the global feature descriptors of each point cloud cluster to obtain fusion feature vectors;
calculating a corresponding relation by combining the similarity of the coordinate poses of the fused feature vectors in the three-dimensional sparse point cloud map and the features and positions between feature descriptions, and acquiring a plurality of corresponding point pairs;
and carrying out rigid transformation according to the corresponding relation among the corresponding point pairs, finishing registration and obtaining the three-dimensional dense point cloud map.
7. An unmanned vehicle indoor real-time dense point cloud reconstruction system, characterized by specifically comprising: an unmanned vehicle, a laser radar and a depth camera mounted on the unmanned vehicle, and a route planning module, a depth image classification module, an ORB-SLAM calculation module, a PCL point cloud registration module and a global optimization module arranged in the unmanned vehicle;
the laser radar is used for ranging and obstacle avoidance to generate a two-dimensional scene, and the depth camera is used for acquiring a depth image of an indoor scene;
the route planning module is used for planning a route by adopting a rapid exploration random tree algorithm, and the unmanned vehicle carries out indoor exploration according to the planned route;
the depth image classification module is used for classifying the depth images according to moments to obtain a target image set, wherein the target image set comprises a plurality of target images at the same moment;
the ORB-SLAM calculation module is used for calculating the obtained target images through an ORB-SLAM algorithm, obtaining pose information of the target images and generating a three-dimensional sparse point cloud map according to the target images and the corresponding pose information;
the PCL point cloud registration module is used for carrying out PCL point cloud registration according to the three-dimensional sparse point cloud map to generate an indoor three-dimensional dense point cloud map;
the global optimization module is used for optimizing the indoor three-dimensional dense point cloud map by adopting a nonlinear global optimization algorithm to obtain a target three-dimensional dense point cloud map.
8. The unmanned vehicle indoor real-time dense point cloud reconstruction system of claim 7, wherein
the laser radar comprises a TOF ranging module, and the two-dimensional form of an indoor scene is obtained through the TOF ranging module;
the ORB-SLAM calculation module comprises a tracking unit, an image building unit and a closed loop unit, wherein the tracking unit, the image building unit and the closed loop unit are connected with each other;
the tracking unit is used for carrying out ORB feature extraction and matching on the target image to obtain optimized pose information and key frames in the target image;
the map building unit is used for selecting a key frame, calculating the BOW relation of the feature points of the key frame, updating the connection relation of the key frame according to the BOW relation, inserting the key frame into a map, verifying the added map points, generating new map points by triangulation, performing local BA optimization on adjacent key frames and corresponding 3D points, eliminating redundant key frames, obtaining target key frames and sending the target key frames to the closed loop unit;
and the closed-loop unit is used for calculating BOW scores of the target key frame and each common-view key frame, finding out closed-loop candidate frames from all key frames, detecting the candidate frames through continuity, performing SIM3 optimization, searching for feature matching according to an optimization result, repeating the optimization, accepting a closed loop when the feature matching meets a preset requirement, fixing the closed-loop frame and the current frame, and performing global optimization to obtain a three-dimensional sparse point cloud map.
9. An apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A medium on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202210017511.9A 2022-01-07 2022-01-07 Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium Pending CN114549738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210017511.9A CN114549738A (en) 2022-01-07 2022-01-07 Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210017511.9A CN114549738A (en) 2022-01-07 2022-01-07 Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium

Publications (1)

Publication Number Publication Date
CN114549738A true CN114549738A (en) 2022-05-27

Family

ID=81669422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210017511.9A Pending CN114549738A (en) 2022-01-07 2022-01-07 Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN114549738A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115200588A (en) * 2022-09-14 2022-10-18 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot
CN116148883A (en) * 2023-04-11 2023-05-23 锐驰智慧科技(深圳)有限公司 SLAM method, device, terminal equipment and medium based on sparse depth image
CN116148883B (en) * 2023-04-11 2023-08-08 锐驰智慧科技(安吉)有限公司 SLAM method, device, terminal equipment and medium based on sparse depth image
CN116858215A (en) * 2023-09-05 2023-10-10 武汉大学 AR navigation map generation method and device
CN116858215B (en) * 2023-09-05 2023-12-05 武汉大学 AR navigation map generation method and device
CN117372632A (en) * 2023-12-08 2024-01-09 魔视智能科技(武汉)有限公司 Labeling method and device for two-dimensional image, computer equipment and storage medium
CN117372632B (en) * 2023-12-08 2024-04-19 魔视智能科技(武汉)有限公司 Labeling method and device for two-dimensional image, computer equipment and storage medium
CN117607837A (en) * 2024-01-09 2024-02-27 苏州识光芯科技术有限公司 Sensor array, distance measuring device and method
CN117607837B (en) * 2024-01-09 2024-04-16 苏州识光芯科技术有限公司 Sensor array, distance measuring device and method

Similar Documents

Publication Publication Date Title
US20230130320A1 (en) Laser scanner with real-time, online ego-motion estimation
EP3427008B1 (en) Laser scanner with real-time, online ego-motion estimation
US11176701B2 (en) Position estimation system and position estimation method
US7446766B2 (en) Multidimensional evidence grids and system and methods for applying same
EP3526626A1 (en) Laser scanner with real-time, online ego-motion estimation
CN114549738A (en) Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium
US20190025411A1 (en) Laser scanning system, laser scanning method, movable laser scanning system, and program
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
Postica et al. Robust moving objects detection in lidar data exploiting visual cues
CN114088081A (en) Map construction method for accurate positioning based on multi-segment joint optimization
US20210304518A1 (en) Method and system for generating an environment model for positioning
He et al. Online semantic-assisted topological map building with LiDAR in large-scale outdoor environments: Toward robust place recognition
Dubois et al. Dense Decentralized Multi-robot SLAM based on locally consistent TSDF submaps
CN114577196A (en) Lidar positioning using optical flow
Gonzalez et al. TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM
Shoukat et al. Cognitive robotics: Deep learning approaches for trajectory and motion control in complex environment
Gautam et al. An experimental comparison of visual SLAM systems
Opromolla et al. PCA-based line detection from range data for mapping and localization-aiding of UAVs
Baligh Jahromi et al. Layout slam with model based loop closure for 3d indoor corridor reconstruction
Alkhatib et al. Camera pose estimation based on structure from motion
Pang et al. FLAME: Feature-likelihood based mapping and localization for autonomous vehicles
Nawaf et al. Guided underwater survey using semi-global visual odometry
Vidal et al. Environment modeling for cooperative aerial/ground robotic systems
Jametoni et al. A Study on Autonomous Drone Positioning Method
Yang et al. Multi-sensor fusion of sparse point clouds based on neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination