CN114923477A - Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology - Google Patents

Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology

Info

Publication number
CN114923477A
Authority
CN
China
Prior art keywords
map
aerial vehicle
unmanned aerial
information
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210559495.6A
Other languages
Chinese (zh)
Inventor
吴宇辰
吴红兰
孙有朝
吴振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202210559495.6A
Publication of CN114923477A
Legal status: Pending

Classifications

    • G  PHYSICS
    • G01  MEASURING; TESTING
    • G01C  MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00  Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20  Instruments for performing navigational calculations
    • G01C21/38  Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804  Creation or updating of map data
    • G01C21/3807  Creation or updating of map data characterised by the type of data
    • G01C21/3811  Point data, e.g. Point of Interest [POI]
    • G01C21/3833  Creation or updating of map data characterised by the source of data
    • G01C21/3841  Data obtained from two or more sources, e.g. probe vehicles
    • G01C21/3863  Structures of map data

Abstract

The invention discloses a multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology. The system comprises a ground station, an unmanned aerial vehicle and an unmanned vehicle. The unmanned aerial vehicle comprises an overall physical framework, an airborne embedded computer and an image sensor; the overall physical framework comprises a frame, four rotors and motors; the airborne embedded computer comprises a communication module and an embedded processor; and the image sensor comprises a binocular camera and an RGB-D camera. The unmanned vehicle comprises an embedded processor, a laser radar sensor and a communication module. The ground station comprises a display control, an operation control and a communication control. The method uses the ground station to issue control and map building instructions, and fuses, through SLAM technology, the 3D point cloud map constructed by the unmanned aerial vehicle with the 2D plane grid map constructed by the unmanned vehicle. Because the environmental information is acquired by remote control, the safety of the related operations is improved, positioning errors caused by factors such as weak GPS signals are eliminated, and the accuracy of map construction is improved.

Description

Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
Technical Field
The invention belongs to the technical field of computer vision and laser SLAM, and particularly relates to an air-ground cooperative mapping system and method based on vision and laser SLAM technology.
Background
Traditional environmental information acquisition relies on manual field investigation, which is labour-intensive and inefficient, and can guarantee neither measurement accuracy nor real-time data monitoring; in complex environments it is usually also accompanied by uncertainty and danger. It is therefore highly desirable to perform environment sensing with unmanned systems such as robots.
Unmanned aerial vehicles have the advantage of separating the operator from the vehicle and can carry out remote exploration relatively safely, but positioning methods that rely on satellites, such as GPS, may lose their link in environments where the positioning signal is weak.
SLAM (simultaneous localisation and mapping) technology is the basis and key to solving problems such as exploration, survey and navigation of mobile robots in unknown environments: the robot creates an environment map from the observations of its on-board sensors while estimating its own pose from the part of the map already built. Because real environments are complex, maps built by SLAM with only a single sensor each have their own shortcomings, and the resulting map dimension is limited by the sensor's field of view, so the overall characteristics of the environment cannot be described accurately.
Disclosure of Invention
The purpose of the invention is as follows: in order to solve the problems of the prior art, the invention provides a multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology, which improve the accuracy of map information acquisition and map building; on the premise of ensuring good human-computer interaction, the safety of related operations is improved through remote control.
The technical scheme is as follows: the multi-dimensional space-ground collaborative map building system based on vision and laser SLAM technology disclosed by the invention comprises a ground station, an unmanned aerial vehicle and an unmanned vehicle. The ground station comprises a display control, an operation control and a communication control. The unmanned aerial vehicle comprises an overall unmanned aerial vehicle framework, an airborne embedded computer and an image sensor; the framework comprises a frame, four rotors and motors; the airborne embedded computer comprises a communication module and an embedded processor, and the image sensor comprises a binocular camera and an RGB-D camera. The unmanned vehicle comprises an embedded processor, a laser radar sensor and a communication module.
With the ground station as the centre, the image sensor of the unmanned aerial vehicle acquires visual information to construct a 3D point cloud map, and the laser sensor of the unmanned vehicle acquires planar information to construct a 2D plane grid map; using the control and map building instructions output by the ground station, the 3D point cloud map and 2D plane grid map information are displayed on the ground station in real time and the fusion of the map information is completed.
Preferably, the onboard embedded computer carried by the unmanned aerial vehicle adopts an NVIDIA Jetson Xavier NX processor, the wireless communication module it carries is an ESP8266, the binocular camera in the visual image sensor is an Intel RealSense T265, and the RGB-D camera is an Intel RealSense D435i.
Preferably, the unmanned vehicle adopts a Raspberry Pi 4B embedded processor and a SLAMTEC RPLIDAR-A1 laser radar sensor.
The invention adopts a multi-dimensional space-ground collaborative map building method based on vision and laser SLAM technology, which comprises the following steps:
1. Data acquisition: the ground station outputs control and map building instructions to the unmanned aerial vehicle and the unmanned vehicle through the local area network; the unmanned aerial vehicle carries an RGB-D camera and a binocular camera, using the RGB-D camera to obtain monocular images and depth information outdoors and the binocular camera to obtain left-eye and right-eye images indoors; the unmanned vehicle acquires information about the plane it is on using a single-line laser radar;
2. Data processing: after acquiring a visual image, the unmanned aerial vehicle transmits the image information to the onboard embedded computer, which extracts feature points from the image and obtains their depth values; the unmanned vehicle uses an encoder to count the pulses produced while the motor rotates, divides the total pulse count by the motor's standard pulse count per revolution to obtain the number of motor revolutions, and combines the number of revolutions with the wheel size of the unmanned vehicle to obtain the odometry information (a minimal sketch of this pulse-to-distance conversion is given after step 4).
3. Visual SLAM and laser SLAM mapping: the unmanned aerial vehicle transmits the image information to the airborne embedded computer, which takes a pixel p in the image, denotes its brightness Ip, and selects a brightness threshold T according to the brightness of the picture; 16 pixels are selected on a circle of radius 3 pixels centred on p; if N consecutive pixels on this circle all have brightness outside the range [Ip - T, Ip + T], p is regarded as a key point (a sketch of this brightness test is likewise given after step 4). The depth value of each obtained feature point is then read from the depth map of the RGB-D camera. After the feature points and depth values are obtained, the visual SLAM mapping algorithm is run to construct the 3D point cloud map. The unmanned vehicle transmits the radar data and the odometry data of its motion to the embedded processor and performs ground mapping to obtain the 2D plane grid map;
4. Map fusion: after the ground station obtains the 3D point cloud map from the unmanned aerial vehicle and the 2D plane grid map from the unmanned vehicle, it performs feature correspondence and finds the corresponding feature vectors. First, the edge corner points and centre points of the two maps are matched to the actual scene: the upper-left edge corner of the point cloud map is denoted O1 and its centre point P1, and the upper-left edge corner of the grid map is denoted O2 and its centre point P2; the corresponding vectors, denoted here v1 = P1O1 and v2 = P2O2, are obtained. P1 and P2 are made to coincide to establish a two-dimensional coordinate system in which O1 has coordinates (x1, y1) and O2 has coordinates (x2, y2). The angle θ between the two vectors and their lengths then follow from these coordinates: θ = arccos((x1·x2 + y1·y2)/(l1·l2)), with l1 = sqrt(x1^2 + y1^2) and l2 = sqrt(x2^2 + y2^2). The zoom scale u of the 2D plane grid map is calculated according to the pixel proportion between the 3D point cloud map and the 2D plane grid map, and the stretches kx and ky of the 2D plane grid map in the x and y directions are calculated from the vector lengths (the exact stretch formulas appear only as formula images in the original publication). After the calculation is finished, the image position of the 2D grid map is adjusted, taking the cross section of the 3D point cloud map as the reference, until it coincides with the 3D point cloud map, realising map fusion and multi-dimensional presentation (a numeric sketch of this alignment is given below).
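To make the pulse counting of step 2 concrete, the following minimal Python sketch converts an accumulated encoder pulse count into travelled distance; the function name and the numeric values in the example are illustrative assumptions, and any gear reduction between motor and wheel is ignored.

```python
import math

def wheel_odometry(total_pulses, pulses_per_rev, wheel_diameter_m):
    """Distance travelled by one wheel, from accumulated encoder pulses.

    total_pulses     -- encoder pulses counted while the motor rotates
    pulses_per_rev   -- the motor's standard pulse count per full revolution
    wheel_diameter_m -- wheel diameter of the unmanned vehicle, in metres
    """
    revolutions = total_pulses / pulses_per_rev        # number of motor turns
    return revolutions * math.pi * wheel_diameter_m    # turns times wheel circumference

# Illustrative values only: 4400 pulses at 1100 pulses per revolution with a
# 0.065 m wheel give 4 revolutions, i.e. roughly 0.82 m of travel.
print(wheel_odometry(4400, 1100, 0.065))
```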
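The brightness test of step 3 is a FAST-style corner test. The sketch below checks one candidate pixel against the 16-pixel ring of radius 3 and reads its depth from the registered depth image; the contiguous-run length N = 12 and the millimetre depth scale are assumptions not fixed in the text, and image borders are not handled.

```python
import numpy as np

# 16 offsets on a circle of radius 3 pixels around the candidate pixel p.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_keypoint(gray, x, y, T, N=12):
    """True if N consecutive ring pixels have brightness outside [Ip - T, Ip + T]."""
    Ip = float(gray[y, x])
    outside = [float(gray[y + dy, x + dx]) < Ip - T or
               float(gray[y + dy, x + dx]) > Ip + T for dx, dy in CIRCLE]
    run = 0
    for flag in outside + outside:        # duplicate the ring so runs can wrap around
        run = run + 1 if flag else 0
        if run >= N:
            return True
    return False

def keypoint_depth(depth_image, x, y, depth_scale=0.001):
    """Depth of a feature point from the RGB-D depth map (assuming millimetre units)."""
    return float(depth_image[y, x]) * depth_scale
```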
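The alignment of step 4 can be sketched numerically as follows. The angle and length expressions follow directly from the coordinates; the stretch formulas are only published as images, so the component-wise ratio used here for kx and ky, like the function names, is an assumption.

```python
import numpy as np

def fusion_transform(O1, O2, u):
    """Angle, vector lengths and axis stretches for laying the 2D grid map over
    the 3D point cloud map; O1 and O2 are the upper-left corners expressed in the
    frame whose origin is the coincident centres P1 = P2, and u is the zoom scale."""
    x1, y1 = O1
    x2, y2 = O2
    l1 = np.hypot(x1, y1)                                 # length of vector P1->O1
    l2 = np.hypot(x2, y2)                                 # length of vector P2->O2
    theta = np.arccos((x1 * x2 + y1 * y2) / (l1 * l2))    # angle between the vectors
    kx = u * abs(x1) / abs(x2)                            # assumed component-wise stretch
    ky = u * abs(y1) / abs(y2)
    return theta, l1, l2, kx, ky

def overlay_grid_on_cloud(grid_xy, theta, kx, ky):
    """Stretch, then rotate, (N, 2) grid-map points centred on P2 so they match
    the point cloud cross-section."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    S = np.diag([kx, ky])
    return grid_xy @ (R @ S).T
```

For instance, O1 = (3, 4) and O2 = (4, 3) with u = 1 give l1 = l2 = 5, θ ≈ 0.284 rad, kx = 0.75 and ky ≈ 1.33.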
Beneficial effects: compared with the prior art, the invention has the following notable advantages: 1. the multi-dimensional space-ground collaborative mapping presents the terrain more intuitively and comprehensively and improves the accuracy of the map; 2. remote control from the ground station improves the safety of exploring unknown environments; 3. transmitting information through the on-board sensors eliminates the positioning errors caused by weak GPS signals.
Drawings
FIG. 1 is a block diagram of the overall architecture of the system of the present invention;
FIG. 2 is a block diagram of the structure of the unmanned aerial vehicle according to the present invention;
FIG. 3 is a flow chart of visual SLAM at the unmanned aerial vehicle end in the invention;
FIG. 4 is a block diagram of the unmanned vehicle of the present invention;
FIG. 5 is a flow chart of laser SLAM at the unmanned vehicle end in the invention;
FIG. 6 is a block diagram of the operation of the system of the present invention;
FIG. 7 is a three-dimensional point cloud map of the present invention;
FIG. 8 is a two-dimensional laser grid map of the present invention;
FIG. 9 is a multi-dimensional fusion map in the present invention.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides a multi-dimensional space-ground cooperative mapping system based on vision and laser SLAM technology, which comprises a ground station, an unmanned aerial vehicle and an unmanned vehicle.
The ground station includes a display control, a control, and a communication control.
The unmanned aerial vehicle comprises the overall physical framework of the unmanned aerial vehicle, an airborne embedded computer and an image sensor. The overall physical architecture comprises a frame, four rotors and motors; the onboard embedded computer contains an ESP8266 communication module and an NVIDIA Jetson Xavier NX embedded processor; and the image sensor comprises an Intel RealSense T265 camera and an Intel RealSense D435i camera.
The unmanned vehicle comprises a Raspberry Pi 4B embedded processor, a SLAMTEC RPLIDAR-A1 laser radar sensor and a communication module.
As shown in fig. 6, the ground station sends control and map building instructions to the unmanned aerial vehicle and the unmanned vehicle through the distributed communication control. After receiving the instruction, the unmanned aerial vehicle constructs the 3D point cloud map with its visual image sensor and the unmanned vehicle constructs the 2D plane grid map with its laser radar sensor; after map construction, both transmit the map information back to the ground station through the distributed communication controls they carry.
As shown in fig. 2 and fig. 3, after the unmanned aerial vehicle receives the control and map building instruction from the ground station, the onboard embedded computer outputs control information, drives the motors through the fly-by-wire drive system in the overall physical architecture, and begins executing the mapping instruction after flying to a specified suitable height. The onboard embedded computer first drives the visual image sensor to acquire the relevant images, depth and other information, runs through the visual odometry tracking, local mapping and loop-closure detection threads, constructs the 3D point cloud map with the visual mapping algorithm in the onboard embedded computer through global bundle adjustment (BA), and transmits the map information to the ground station, as shown in FIG. 7.
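One concrete sub-step of this point cloud construction is back-projecting a tracked feature with its depth value into a camera-frame 3D point through the pinhole model, sketched below; the intrinsic parameters shown are placeholders for illustration, in practice they come from the camera calibration.

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Camera-frame 3D point for pixel (u, v) at depth depth_m (pinhole model)."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.array([x, y, depth_m])

# Placeholder intrinsics, not the camera's actual calibration.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
print(backproject(400, 300, 1.5, fx, fy, cx, cy))   # -> [0.2, 0.15, 1.5]
```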
As shown in fig. 4 and fig. 5, after the unmanned vehicle receives the control and map building instruction from the ground station, the embedded processor drives the laser radar sensor to obtain the relevant laser data and odometry data, and the 2D plane grid map is built in the embedded processor. The map is then updated continuously by establishing a motion model and applying particle filtering, and the map information is finally transmitted to the ground station, as shown in FIG. 8.
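A minimal sketch of one update cycle of the particle filter mentioned above (propagate each particle with the odometry motion model, weight it by how well the lidar scan matches the grid map, then resample) is given below; the noise values and the scan_score callback are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, d_dist, d_theta, noise=(0.02, 0.01)):
    """Propagate particles [x, y, heading] with noisy odometry (d_dist, d_theta)."""
    n = len(particles)
    dist = d_dist + rng.normal(0.0, noise[0], n)
    dth = d_theta + rng.normal(0.0, noise[1], n)
    particles[:, 0] += dist * np.cos(particles[:, 2])
    particles[:, 1] += dist * np.sin(particles[:, 2])
    particles[:, 2] += dth
    return particles

def measurement_update(particles, scan_score):
    """Weight each particle by how well the lidar scan fits the grid map at its
    pose (scan_score is a user-supplied likelihood), then resample."""
    weights = np.array([scan_score(p) for p in particles], dtype=float)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Illustrative use: 100 particles around the origin and a dummy likelihood.
particles = np.zeros((100, 3))
particles = motion_update(particles, d_dist=0.1, d_theta=0.0)
particles = measurement_update(particles, scan_score=lambda p: 1.0)
```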
The invention provides a multi-dimensional space-ground collaborative map building method based on vision and laser SLAM technology, which comprises the following steps:
1. Data acquisition: the ground station outputs control and map building instructions to the unmanned aerial vehicle and the unmanned vehicle through the local area network; the unmanned aerial vehicle carries an RGB-D camera and a binocular camera, using the RGB-D camera to obtain monocular images and depth information outdoors and the binocular camera to obtain left-eye and right-eye images indoors; the unmanned vehicle acquires information about the plane it is on using a single-line laser radar;
2. Data processing: after acquiring a visual image, the unmanned aerial vehicle transmits the image information to the onboard embedded computer, which extracts feature points from the image and obtains their depth values; the unmanned vehicle uses an encoder to count the pulses produced while the motor rotates, divides the total pulse count by the motor's standard pulse count per revolution to obtain the number of motor revolutions, and combines the number of revolutions with the wheel size of the unmanned vehicle to obtain the odometry information;
3. Visual SLAM and laser SLAM mapping: the unmanned aerial vehicle transmits the image information to the airborne embedded computer, which takes a pixel p in the image, denotes its brightness Ip, and selects a brightness threshold T according to the brightness of the picture; 16 pixels are selected on a circle of radius 3 pixels centred on p; if N consecutive pixels on this circle all have brightness outside the range [Ip - T, Ip + T], p is regarded as a key point. The depth value of each obtained feature point is then read from the depth map of the RGB-D camera. After the feature points and depth values are obtained, the visual SLAM mapping algorithm is run to construct the 3D point cloud map. The unmanned vehicle transmits the radar data and the odometry data of its motion to the embedded processor and performs ground mapping to obtain the 2D plane grid map;
4. Map fusion: after the ground station obtains the 3D point cloud map from the unmanned aerial vehicle and the 2D plane grid map from the unmanned vehicle, it performs feature correspondence and finds the corresponding feature vectors. First, the edge corner points and centre points of the two maps are matched to the actual scene: the upper-left edge corner of the point cloud map is denoted O1 and its centre point P1, and the upper-left edge corner of the grid map is denoted O2 and its centre point P2; the corresponding vectors, denoted here v1 = P1O1 and v2 = P2O2, are obtained. P1 and P2 are made to coincide to establish a two-dimensional coordinate system in which O1 has coordinates (x1, y1) and O2 has coordinates (x2, y2). The angle θ between the two vectors and their lengths then follow from these coordinates: θ = arccos((x1·x2 + y1·y2)/(l1·l2)), with l1 = sqrt(x1^2 + y1^2) and l2 = sqrt(x2^2 + y2^2). The zoom scale u of the 2D plane grid map is calculated according to the pixel proportion between the 3D point cloud map and the 2D plane grid map, and the stretches kx and ky of the 2D plane grid map in the x and y directions are calculated from the vector lengths (the exact stretch formulas appear only as formula images in the original publication). After the calculation is finished, the image position of the 2D grid map is adjusted, taking the cross section of the 3D point cloud map as the reference, until it coincides with the 3D point cloud map, realising map fusion, as shown in fig. 9.

Claims (8)

1. A multi-dimensional space-ground collaborative map building system based on vision and laser SLAM technology is characterized by comprising a ground station, an unmanned aerial vehicle and an unmanned vehicle; the ground station is used for transmitting instructions and receiving and processing map information; the unmanned aerial vehicle is used for acquiring a visual image and constructing a 3D point cloud map; the unmanned vehicle is used for acquiring plane information and constructing a 2D plane grid map.
2. The multi-dimensional space-ground collaborative map building system based on vision and laser SLAM technology as claimed in claim 1, wherein the ground station comprises a display control for displaying human-computer interaction information, an operation control for controlling the operation of the unmanned aerial vehicle and the unmanned vehicle, and a communication control for information interaction and for transmitting control and map building instructions; the communication control is a distributed communication control used to remotely send and receive control information for the unmanned aerial vehicle end and the unmanned vehicle end.
3. The system of claim 1, wherein the ground station comprises a map fusion module for feature correspondence, position adjustment and map overlay of a multi-dimensional map.
4. The multi-dimensional space-ground collaborative mapping system based on vision and laser SLAM technology as claimed in claim 1, wherein said unmanned aerial vehicle comprises an overall physical architecture, an onboard embedded computer for communication and operation of mapping algorithm, and an image sensor for transmission of visual observation information; the airborne embedded computer comprises a wireless communication module and an embedded processor which are used for transmitting data with the ground station; the image sensor includes a binocular camera and an RGB-D camera.
5. The system for collaborative map creation of multi-dimensional air and ground based on vision and laser SLAM technology as claimed in claim 1, wherein said unmanned vehicle comprises an embedded processor for running a ground mapping algorithm, a lidar sensor for transmitting laser observation data, and a communication module; and the embedded processor is loaded with an operating system for controlling the transmission of laser radar data and map information.
6. A multi-dimensional space-ground collaborative map building method based on vision and laser SLAM technology is characterized by comprising the following steps:
(1) data acquisition: the ground station outputs control and map building instructions to the unmanned aerial vehicle and the unmanned vehicle, an image sensor carried by the unmanned aerial vehicle starts to acquire image and depth information, and a laser radar sensor carried by the unmanned vehicle starts to acquire information about the plane where the unmanned vehicle is located;
(2) data processing: the unmanned aerial vehicle transmits image information to an onboard embedded computer after acquiring a visual image, extracts characteristic points of the image and acquires depth values; the unmanned vehicle acquires the moving odometer information from the plane information by using the encoder;
(3) visual SLAM and laser SLAM mapping: the unmanned aerial vehicle constructs a 3D point cloud map by using a visual SLAM mapping algorithm, and the unmanned vehicle constructs a 2D plane grid map by using a laser SLAM mapping algorithm;
(4) map fusion: the ground station performs feature correspondence between the 3D point cloud map and the 2D plane grid map, and determines a vector direction and a vector length l according to the corresponding feature vectors; the zoom scale u of the 2D plane grid map and its stretches kx and ky in the x and y directions are computed, x and y being pixel point coordinates; taking the cross section of the 3D point cloud map as a reference, the image position of the 2D grid map is adjusted to coincide with the 3D point cloud map, and map fusion and multi-dimensional presentation are achieved.
7. The method of claim 6, wherein the image sensor uses an RGB-D camera to obtain monocular images and depth information outdoors and uses a binocular camera to obtain left and right images indoors.
8. The method of claim 6, wherein the angle θ and the vector length l are calculated from the coordinates of the corresponding feature vectors according to the formula shown as image FDA0003651798110000021; the scale u is calculated according to the pixel proportion, and the stretches kx and ky are calculated according to the formula shown as image FDA0003651798110000022.
CN202210559495.6A 2022-05-19 2022-05-19 Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology Pending CN114923477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210559495.6A CN114923477A (en) 2022-05-19 2022-05-19 Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210559495.6A CN114923477A (en) 2022-05-19 2022-05-19 Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology

Publications (1)

Publication Number Publication Date
CN114923477A true CN114923477A (en) 2022-08-19

Family

ID=82811363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210559495.6A Pending CN114923477A (en) 2022-05-19 2022-05-19 Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology

Country Status (1)

Country Link
CN (1) CN114923477A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116080423A (en) * 2023-04-03 2023-05-09 电子科技大学 Cluster unmanned vehicle energy supply system based on ROS and execution method thereof


Similar Documents

Publication Publication Date Title
CN110446159B (en) System and method for accurate positioning and autonomous navigation of indoor unmanned aerial vehicle
CN109029417B (en) Unmanned aerial vehicle SLAM method based on mixed visual odometer and multi-scale map
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
US11218689B2 (en) Methods and systems for selective sensor fusion
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN111045017A (en) Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN103049912B (en) Random trihedron-based radar-camera system external parameter calibration method
CN108594851A (en) A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane
CN106444837A (en) Obstacle avoiding method and obstacle avoiding system for unmanned aerial vehicle
CN102190081B (en) Vision-based fixed point robust control method for airship
CN112639882A (en) Positioning method, device and system
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN111077907A (en) Autonomous positioning method of outdoor unmanned aerial vehicle
CN113031597A (en) Autonomous obstacle avoidance method based on deep learning and stereoscopic vision
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN114923477A (en) Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
Chen et al. Outdoor 3d environment reconstruction based on multi-sensor fusion for remote control
CN111652276A (en) All-weather portable multifunctional bionic positioning, attitude determining and viewing system and method
Jingjing et al. Research on autonomous positioning method of UAV based on binocular vision
CN111197986A (en) Real-time early warning and obstacle avoidance method for three-dimensional path of unmanned aerial vehicle
CN113403942B (en) Label-assisted bridge detection unmanned aerial vehicle visual navigation method
Gao et al. Altitude information acquisition of uav based on monocular vision and mems
Zheng et al. Integrated navigation system with monocular vision and LIDAR for indoor UAVs
Cheng et al. Monocular visual based obstacle distance estimation method for ultra-low altitude flight

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination