CN109211241B - Unmanned aerial vehicle autonomous positioning method based on visual SLAM - Google Patents

Unmanned aerial vehicle autonomous positioning method based on visual SLAM

Info

Publication number
CN109211241B
Authority
CN
China
Prior art keywords
point
unmanned aerial vehicle
image
autonomous positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811047084.9A
Other languages
Chinese (zh)
Other versions
CN109211241A (en)
Inventor
宗群
刘彤
窦立谦
韩天瑞
霍新友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201811047084.9A priority Critical patent/CN109211241B/en
Publication of CN109211241A publication Critical patent/CN109211241A/en
Application granted granted Critical
Publication of CN109211241B publication Critical patent/CN109211241B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Abstract

The invention relates to computer vision and image processing technology and provides an autonomous positioning method for an unmanned aerial vehicle. The visual-SLAM-based method is composed of three parts: a feature extraction and matching motion-solving part, an image and inertial measurement unit (imu) fusion part, and a 3D point depth estimation part. In the motion-solving part, a strategy combining the feature-point method and the direct method is adopted: key frames are selected, point features and line features are extracted, and the relative pose is computed by minimizing the errors. In the image and imu fusion part, the image and the imu are fused by minimizing the error. The last part estimates the depth of the 3D points: on the basis of the matched feature points, triangulation is used to solve for the 3D position of each point, thereby obtaining its depth value. The invention is mainly applied to computer vision and image processing occasions.

Description

Unmanned aerial vehicle autonomous positioning method based on visual SLAM
Technical Field
The invention relates to the fields of computer vision, image processing and the like, and solves the problem of positioning of an unmanned aerial vehicle in an unknown environment without a GPS signal.
Background
As social demand grows, unmanned aerial vehicles face ever more functional requirements and application scenarios and must possess stronger perception, decision-making and execution capabilities. This places very high demands on the structural and functional design of a single unmanned aerial vehicle and greatly increases the difficulty of realization. Unmanned aerial vehicles have extremely high flexibility and autonomy, can execute tasks with little or no human intervention, and help humans complete dangerous or repetitive labor. Due to the diversity and complexity of the environments in which an unmanned aerial vehicle operates, it can perceive its surroundings only if it is able to determine its own position, which is the premise and key to realizing complex functions and executing various tasks. It is therefore crucial to determine the relative position of the drone in the environment.
Autonomous positioning of the unmanned aerial vehicle is the precondition and key to safe operation, trajectory planning, target tracking and other tasks. At present, unmanned aerial vehicle positioning mainly falls into two categories. The first relies on an external positioning system, for example a global satellite navigation system such as GPS or BeiDou, or an indoor positioning system; however, GPS accuracy is low and indoor positioning systems require pre-installed external collectors, so both have certain limitations. The second relies on the sensors carried by the unmanned aerial vehicle to perceive the surrounding environment: the positioning problem is solved by processing the acquired sensor data and modeling the environment. This mode is called autonomous positioning and can solve the positioning problem when GPS signals are absent or the environment is unknown. For autonomous positioning, devices such as lidar, vision sensors and inertial measurement units are generally adopted to acquire information about the surroundings and the vehicle's own state, and environment modeling and positioning are then realized with the Simultaneous Localization And Mapping (SLAM) method.
Autonomous positioning of unmanned aerial vehicles is a key direction of current academic research. The application scenarios of unmanned aerial vehicles are complex and varied, and estimating the vehicle's position is one of the important factors restricting their development; only when this difficulty is overcome can unmanned aerial vehicles develop further. SLAM is currently the best scheme for solving this problem and a leading-edge research hotspot. Research on this technology can better solve the autonomous positioning problem of unmanned aerial vehicles in complex environments with many obstacles, and therefore has great research significance.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an autonomous positioning method for an unmanned aerial vehicle. To this end, the unmanned aerial vehicle autonomous positioning method based on visual SLAM comprises a feature extraction and matching motion-solving part, an image and inertial measurement unit (imu) fusion part, and a 3D point depth estimation part. In the motion-solving part, a strategy combining the feature-point method and the direct method is adopted: key frames are selected, point features and line features are extracted, and the relative pose is computed by minimizing the errors. In the image and imu fusion part, the prior translation and rotation obtained from the photometric-error minimization after feature matching and from the imu pre-integration are fused with the translation and rotation of the current state by error minimization. The last step estimates the depth of the 3D points: on the basis of the matched feature points, triangulation is used to solve for the 3D position of each point, thereby obtaining its depth value.
The selection strategy of the key frame is that when the number of the feature blocks on the new image frame is less than a certain threshold value, the frame is taken as a new key frame.
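As a concrete illustration of this selection rule, the following minimal Python sketch (not part of the patent; the frame interface, the tracked_feature_blocks attribute and the threshold value are assumptions) shows how such a check might look:

```python
# Illustrative sketch of the key-frame selection rule (not taken from the
# patent): a frame becomes a new key frame when the number of feature blocks
# still tracked on it drops below a threshold. The Frame interface and the
# threshold value are assumptions.
MIN_FEATURE_BLOCKS = 50  # assumed threshold


def is_new_keyframe(frame) -> bool:
    """Return True if `frame` should be promoted to a new key frame."""
    return len(frame.tracked_feature_blocks) < MIN_FEATURE_BLOCKS
```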
More specifically, in the motion-solving part and the image and imu fusion part, a key frame is first extracted, then point features and line features are extracted on the key frame, and the pose is computed with the photometric-error minimization of the direct method: first the photometric differences of the point and line features produced by projecting the same space points onto adjacent image frames are computed, i.e. the differences in intensity between the corresponding point feature blocks and line feature blocks; the image is then fused with the imu by pre-integrating the imu data and computing the imu error, and the pose is solved by minimizing the photometric error and the imu error together. The specific calculation is as follows:
The relative pose of the camera T_{k,k-1} is then:
T_{k,k-1} = \arg\min_{T_{k,k-1}} \{ \sum_i \| \delta I_p(T_{k,k-1}, p_i) \|^2 + \sum_j \| \delta I_l(T_{k,k-1}, l_j) \|^2 + \| p_{k,k-1} - p'_{k,k-1} \|^2 + \| \log(R'^T_{k,k-1} R_{k,k-1}) \|^2 \}    (1)
\delta I_l(T_{k,k-1}, l_j) = \frac{1}{N_j} \sum_{n=1}^{N_j} ( I_k(\pi(T_{k,k-1} q_{j,n})) - I_{k-1}(\pi(q_{j,n})) )    (2)
where \delta I_p is the photometric difference of a point feature between the two adjacent frames, \delta I_l is the photometric error of a line feature between the two adjacent frames, p is a pixel point, p'_{k,k-1} is the relative translation prior value and R'^T_{k,k-1} is the relative rotation prior value; the pose T_{k,k-1} in equation (1) is solved by the Gauss-Newton method to obtain an initial pose;
in the formula (1), the first term is a luminosity error of the extracted point feature, the second term is a luminosity error of the extracted line feature, the latter two terms are difference values obtained by performing pre-integration processing on the read imu information and comparing the values with the values in the current state, and the obtained pose is the optimal pose in the current state by minimizing the error values.
The pose of the current frame is then further optimized:
By back projection, the initial feature positions of all the 3D space points in the new image are preliminarily estimated using the solved relative pose transformation; that is, a matching relation between the 3D space points and the projection points of the current frame can be found;
for a point feature of the extracted features, the projection model of the camera is represented as pi, and the spatial point q is (x, y, z)TThe projection-to-image coordinates are expressed as:
p=π(kq) (3)
the new 2D feature locations in the image are registered with the reference features in the keyframe r, again by minimizing the photometric error, to find the pixel location p with the smallest photometric errori', i.e.:
Figure BDA0001793577550000023
where A_i is the affine transformation applied to the image block in the key frame, and the optimal match is solved by the Lucas-Kanade method; for the line features among the extracted features, the position of the line segment is likewise optimized by minimizing the photometric error, namely:
W'_j = \arg\min_{W'_j} \| I_k(w'_j) - I_r(A_j \cdot w_j) \|^2    (5)
At this time the correspondence between each 3D space point q_i = (x_i, y_i, z_i)^T and its feature pixel p_i, and the correspondence between each line segment's position in space and its projected segment on the pixel plane, are determined, so that the pose T_{k,w} of the camera in the world coordinate system is optimized again on the basis of the feature associations, i.e. by minimizing the reprojection error:
T_{k,w} = \arg\min_{T_{k,w}} \{ \sum_i \| r_p(T_{k,w}, X_{i,k}) \|^2 + \sum_j \| r_l(T_{k,w}, P_{j,k}, Q_{j,k}, l_j) \|^2 \}    (6)
equation (6) can be converted to solve the least squares problem and then solve the camera pose using the Gauss-Newton method.
The invention has the characteristics and beneficial effects that:
Since vision devices can collect abundant environmental information, autonomous positioning based on vision sensors has, with the development of industry and the progress of scientific research, become a research hotspot in the fields of computer vision and robotics. With the rise of the unmanned aerial vehicle wave in recent years, related research has received great attention, and autonomous positioning for unmanned aerial vehicles has become an important applied research direction. The invention provides an autonomous positioning method based on visual SLAM, which is of great significance to research on autonomous positioning of unmanned aerial vehicles. The autonomous positioning method runs entirely on the onboard processor and offers reliability, good expandability and strong stability, improving the stability and reliability of the whole system. In summary, as social demands grow, unmanned aerial vehicles have more and more functional demands and application scenarios and need stronger sensing, decision-making and execution capabilities, and research on autonomous positioning of unmanned aerial vehicles is key to meeting this need.
The invention mainly has the following characteristics and advantages:
(1) Autonomous positioning of the unmanned aerial vehicle: the invention provides an unmanned aerial vehicle autonomous positioning method based on visual SLAM, which realizes autonomous positioning of the vehicle and lays the foundation for subsequent tasks such as trajectory planning, task allocation and obstacle avoidance.
(2) High efficiency: the proposed autonomous positioning method integrates the advantages of the feature-point method and the direct method. Because feature extraction and descriptor computation are costly and time-consuming, the method adopts a semi-direct approach: through the key-frame selection mechanism, features are extracted only on key frames, which greatly improves efficiency. Descriptors are not computed; feature matching between images is performed with feature image blocks in the manner of the direct method, which greatly reduces the complexity and thus the computation time.
(3) High positioning accuracy: the proposed method extracts point features and line features from the images collected in the environment and can make full use of the environment information for feature matching. Fusing the imu information further reduces the method's error and allows autonomous positioning to continue under strong illumination changes or temporary loss of the image. Finally, the position information of the unmanned aerial vehicle is obtained through a series of optimizations, so the result is accurate.
Description of the drawings:
fig. 1 is a diagram of an autonomous positioning system for a drone based on visual SLAM.
Fig. 2 is a flow chart of the unmanned aerial vehicle autonomous positioning method based on the visual SLAM.
FIG. 3 is a pose estimation flow chart.
FIG. 4 is a diagram of the effect of extracting point features and line features.
FIG. 5 is a diagram of an effect achieved by the autonomous positioning method.
Fig. 6 is a diagram of the autonomous positioning effect when the unmanned aerial vehicle flies along a circular trajectory.
Detailed Description
The invention has the following functions and characteristics:
(1) The unmanned aerial vehicle is equipped with a binocular camera for acquiring images in front of the vehicle, and an imu for reading acceleration and angular velocity information;
(2) The images acquired by the camera are processed with computer vision and visual SLAM techniques; key frames are screened out by a selection mechanism, features are extracted and matched, and the pose of the unmanned aerial vehicle is obtained preliminarily. The imu information is then processed and fused with the image information, and the accurate position of the unmanned aerial vehicle is finally obtained through a series of optimizations.
(3) Combining the direct method and the feature-point method integrates the advantages of both and greatly improves efficiency.
(4) When extracting features, the positioning method divides them into two kinds, corner points and edge lines, and processes each separately, so that the information in the environment can be used as much as possible. Because feature extraction is time-consuming, it is performed only on the selected key frames; this reduces the computational complexity, greatly shortens the computation time, and makes the positioning method more robust and stable.
The invention provides an unmanned aerial vehicle autonomous positioning method based on visual SLAM; the experimental environment relies on the distributed node framework of ROS, and the hardware system comprises an unmanned aerial vehicle, a camera, a TX1 processor and so on.
The technical scheme is as follows:
the unmanned aerial vehicle carries a camera and a TX1 processor, and Jetson TX1 is an embedded vision computing system which is advanced at present and is also the first modular supercomputer in the world. The Jetson TX1 is an excellent development platform in the fields of computer vision, deep learning, GPU calculation, image processing and the like based on NVIDIA Maxwell architecture design containing 256 CUDA cores. The processor is utilized to run the positioning method, and the camera acquires an environment image.
The autonomous positioning method comprises a feature extraction and matching motion-solving part, an image and imu fusion part, and a 3D point depth estimation part. The first step is to solve for the motion: a strategy combining the feature-point method and the direct method is adopted, key frames are selected, point features and line features are extracted, and the relative pose is computed by minimizing the errors. The image and imu fusion part then fuses the image information with the imu information: the prior translation and rotation obtained from the photometric-error minimization after feature matching and from the imu pre-integration are fused with the translation and rotation of the current state by error minimization, which improves the accuracy and precision of the positioning method. The last step estimates the depth of the 3D points and is mainly responsible for computing the depth values of the space points: on the basis of the matched feature points, triangulation is applied to solve for the 3D position of each point, thereby obtaining its depth value. The key-frame selection strategy is that when the number of feature blocks on a new image frame is less than a certain threshold, that frame is taken as a new key frame. After the processor runs the method, the accurate pose of the unmanned aerial vehicle is finally obtained.
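The depth-estimation step can be illustrated with the standard linear (DLT) triangulation sketch below; the patent only states that triangulation is used, so the concrete formulation and the variable names are assumptions, not the patent's implementation.

```python
# Linear (DLT) triangulation of a matched feature between two key frames with
# known projection matrices; the z component of the returned point in the
# first camera frame is the estimated depth.
import numpy as np


def triangulate(P1, P2, p1, p2):
    """P1, P2: 3x4 projection matrices; p1, p2: matched pixel coordinates (u, v)."""
    A = np.vstack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # inhomogeneous 3D point
```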
The invention is further described below with reference to the accompanying drawings.
The overall structure of the unmanned aerial vehicle autonomous positioning system is shown in fig. 1. According to their functions, all hardware on the unmanned aerial vehicle other than the airframe can be divided into two layers, a control layer and a perception layer. The control layer comprises a Pixhawk controller, the motors and the electronic speed controllers; the perception layer comprises a ZED binocular camera and a Jetson TX1 processor.
Fig. 2 is a flow chart of the autonomous positioning method, which mainly comprises two parts, motion estimation and depth estimation. Fig. 3 shows the detailed flow of the motion estimation part. First, key frames are extracted, and point features and line features are extracted on the key frames. Motion estimation begins by computing the relative pose transformation of the camera from the features of adjacent image frames. In this step the point and line features are extracted as in the feature-point method, but no descriptors are computed for them; the pose is computed with the photometric-error minimization of the direct method. First the photometric difference of the point and line features produced by projecting the same space point onto adjacent image frames is computed; in other words, a point with known spatial position is projected into two adjacent frames, and the difference in intensity between the point and line feature blocks is computed. At this point the vision and the imu are fused: the imu data is pre-integrated, the imu error is computed, and the pose is solved by minimizing the photometric error and the imu error together.
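For illustration, the imu pre-integration between two image frames can be sketched as follows; gravity compensation, bias estimation and noise propagation are omitted, and the simple Euler integration scheme is an assumption rather than the patent's exact method.

```python
# Euler-integration sketch of imu pre-integration between two image frames:
# the gyro and accelerometer samples are accumulated into relative rotation,
# velocity and translation priors for the last two terms of equation (1).
# Gravity compensation, bias estimation and noise propagation are omitted.
import numpy as np
from scipy.spatial.transform import Rotation


def preintegrate(gyro, accel, dt):
    """gyro, accel: (N, 3) arrays of imu samples; dt: sample period in seconds."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv += (dR @ a) * dt
        dR = dR @ Rotation.from_rotvec(w * dt).as_matrix()
    return dR, dv, dp  # rotation, velocity and translation priors
```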
The relative pose of the camera T_{k,k-1} is then calculated by minimizing the photometric error of all point and line features, namely:
T_{k,k-1} = \arg\min_{T_{k,k-1}} \{ \sum_i \| \delta I_p(T_{k,k-1}, p_i) \|^2 + \sum_j \| \delta I_l(T_{k,k-1}, l_j) \|^2 + \| p_{k,k-1} - p'_{k,k-1} \|^2 + \| \log(R'^T_{k,k-1} R_{k,k-1}) \|^2 \}    (1)
\delta I_l(T_{k,k-1}, l_j) = \frac{1}{N_j} \sum_{n=1}^{N_j} ( I_k(\pi(T_{k,k-1} q_{j,n})) - I_{k-1}(\pi(q_{j,n})) )    (2)
where \delta I_p is the photometric difference of a point feature between the two adjacent frames, \delta I_l is the photometric error of a line feature between the two adjacent frames, p is a pixel point, p'_{k,k-1} is the relative translation prior value, and R'^T_{k,k-1} is the relative rotation prior value. The pose T_{k,k-1} is solved by the Gauss-Newton method to obtain an initial pose. The term \delta I_l(T_{k,k-1}, l_j) is computed as shown in equation (2).
In formula (1), the first term is the photometric error of the extracted point features, the second term is the photometric error of the extracted line features, and the last two terms are the differences between the pre-integrated imu values and the values of the current state; by minimizing these error values, the pose obtained is the optimal pose for the current state. Extracting point and line features simultaneously makes full use of the texture in the environment: when few point features are available, the method can still estimate the pose from the extracted line features, which improves the applicability of the positioning method. When the image information is temporarily lost, the last two terms of formula (1) can be used for temporary pose estimation, so that the positioning method does not collapse and lose the position of the unmanned aerial vehicle because of the temporary image loss; in other words, fusing the imu information improves the stability and robustness of the positioning method to a certain extent.
The pose of the current frame camera can be obtained through the inter-frame matching in the first step, but the pose estimation method inevitably brings accumulative errors, so that drift is caused. Therefore, the pose of the current frame should be further optimized.
By back projection, the initial values of the feature positions of all the 3D space points in the new image can be preliminarily estimated by using the relative pose transformation obtained above, that is, a mutual matching relationship between the 3D space points and the projection points of the current frame can be found.
For a point feature among the extracted features, the projection model of the camera is denoted \pi, and the projection of the spatial point q = (x, y, z)^T to image coordinates may be expressed as:
p = \pi(k q)    (3)
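Equation (3) and the back-projection step described above can be sketched as follows; the world-to-camera pose convention and the function names are assumptions made for illustration.

```python
# Back-projection sketch around equation (3): every mapped 3D point is
# transformed with the just-estimated pose and projected with the camera model
# to predict where its feature patch should appear in the new image.
import numpy as np


def predict_feature_positions(K, R_kw, t_kw, points_world):
    """Predicted pixel position of each 3D map point in the current frame k."""
    predictions = []
    for X in points_world:
        q = R_kw @ X + t_kw          # 3D point expressed in the camera frame
        u = K @ q                    # equation (3): p = pi(k q)
        predictions.append(u[:2] / u[2])
    return np.asarray(predictions)
```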
however, the positions of the 3D space points and the pose transformation of the camera are not accurate enough, and the positions of the characteristic image blocks projected to the current frame image have a certain drift, so that the positions can be optimized on the pixel layer. The pixel location p with the smallest photometric error is then found by registering the 2D feature location in the new image with the reference feature in the keyframe r, again by minimizing the photometric errori', i.e.:
Figure BDA0001793577550000053
where A_i is the affine transformation applied to the image block in the key frame. The optimal match is solved by the Lucas-Kanade method.
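A translational Lucas-Kanade refinement of a single feature patch, in the spirit of equation (4), might look like the sketch below; the reference patch is assumed to have already been warped with the affine transformation A_i, and the patch size, iteration count and convergence threshold are placeholder choices.

```python
# Translational Lucas-Kanade refinement of one feature patch, in the spirit of
# equation (4). `ref_patch` is the reference patch from key frame r, assumed to
# be already warped by the affine transformation A_i; only the 2D shift of the
# patch in the current image is optimized.
import numpy as np


def align_patch_lk(img, ref_patch, p0, half=4, iters=10):
    """Refine pixel position p0 = (x, y) so the patch around it matches ref_patch."""
    p = np.asarray(p0, dtype=float)
    gy, gx = np.gradient(img.astype(float))
    for _ in range(iters):
        x0, y0 = int(round(p[0])), int(round(p[1]))
        cur = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
        Jx = gx[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].ravel()
        Jy = gy[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].ravel()
        err = (cur - ref_patch).ravel()
        J = np.stack([Jx, Jy], axis=1)
        delta, *_ = np.linalg.lstsq(J, -err, rcond=None)  # Gauss-Newton step
        p += delta
        if np.linalg.norm(delta) < 1e-3:
            break
    return p
```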
For the line features in the extracted features, the position of the line segment is optimized by minimizing photometric errors, namely:
W'_j = \arg\min_{W'_j} \| I_k(w'_j) - I_r(A_j \cdot w_j) \|^2    (5)
At this time the correspondence between each 3D space point q_i = (x_i, y_i, z_i)^T and its feature pixel p_i, and the correspondence between each line segment's position in space and its projected segment on the pixel plane, are determined, so that the pose T_{k,w} of the camera in the world coordinate system can be optimized once more on the basis of the feature associations, i.e. by minimizing the reprojection error:
T_{k,w} = \arg\min_{T_{k,w}} \{ \sum_i \| r_p(T_{k,w}, X_{i,k}) \|^2 + \sum_j \| r_l(T_{k,w}, P_{j,k}, Q_{j,k}, l_j) \|^2 \}    (6)
equation (6) can be converted to solve the least squares problem and then solve the camera pose using the Gauss-Newton method.
Fig. 4 shows the results of point and line feature extraction; the left and right images show feature extraction in an environment with obvious texture and in an environment with less obvious texture, respectively. Fig. 4 demonstrates that the method provided by the invention can be applied to various environments and can extract line features for pose estimation when the texture is not obvious.
Fig. 5 shows the result of running the autonomous positioning method. Environment information is obtained through the camera, FAST features are extracted from the images, the first and second key frames are obtained, and the initial pose and depth are estimated. The positioning effect of the algorithm is shown in fig. 5, where the blue markers represent camera poses and the blue dashed line represents the motion trajectory.
Fig. 6 shows the effect of the autonomous positioning method when the unmanned aerial vehicle flies along a circular trajectory of radius 2.5 m in the Gazebo simulation software. The method estimates the camera pose in real time, and the estimated flight track of the aircraft forms a clean circle. When the second circle is completed and the vehicle returns to the starting point, the positioning error read from the published topics is 0.063 m along the x axis, -0.068 m along the y axis and -0.0001 m along the z axis. The method can therefore position the unmanned aerial vehicle accurately.

Claims (2)

1. An unmanned aerial vehicle autonomous positioning method based on visual SLAM, characterized by comprising a feature extraction and matching motion-solving part, an image and inertial measurement unit (imu) fusion part, and a 3D point depth estimation part; in the motion-solving part, a strategy combining the feature-point method and the direct method is adopted, key frames are selected, point features and line features are extracted, and the relative pose is computed by minimizing the errors; in the image and imu fusion part, the prior translation and rotation obtained from the photometric-error minimization after feature matching and from the imu pre-integration are fused with the translation and rotation of the current state by error minimization; the last step estimates the depth of the 3D points: on the basis of the matched feature points, triangulation is used to solve for the 3D position of each point, thereby obtaining its depth value.
2. The method of claim 1, wherein the key frame is selected as a new key frame when the number of feature blocks in the new image frame is less than a threshold.
CN201811047084.9A 2018-09-08 2018-09-08 Unmanned aerial vehicle autonomous positioning method based on visual SLAM Active CN109211241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811047084.9A CN109211241B (en) 2018-09-08 2018-09-08 Unmanned aerial vehicle autonomous positioning method based on visual SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811047084.9A CN109211241B (en) 2018-09-08 2018-09-08 Unmanned aerial vehicle autonomous positioning method based on visual SLAM

Publications (2)

Publication Number Publication Date
CN109211241A CN109211241A (en) 2019-01-15
CN109211241B true CN109211241B (en) 2022-04-29

Family

ID=64987867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811047084.9A Active CN109211241B (en) 2018-09-08 2018-09-08 Unmanned aerial vehicle autonomous positioning method based on visual SLAM

Country Status (1)

Country Link
CN (1) CN109211241B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009739B (en) * 2019-01-29 2023-03-24 浙江省北大信息技术高等研究院 Method for extracting and coding motion characteristics of digital retina of mobile camera
CN111507132B (en) * 2019-01-31 2023-07-07 杭州海康机器人股份有限公司 Positioning method, device and equipment
CN110047108B (en) * 2019-03-07 2021-05-25 中国科学院深圳先进技术研究院 Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
CN110068335B (en) * 2019-04-23 2021-07-30 中国人民解放军国防科技大学 Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN110207691B (en) * 2019-05-08 2021-01-15 南京航空航天大学 Multi-unmanned vehicle collaborative navigation method based on data link ranging
CN110118554B (en) * 2019-05-16 2021-07-16 达闼机器人有限公司 SLAM method, apparatus, storage medium and device based on visual inertia
CN110146099B (en) * 2019-05-31 2020-08-11 西安工程大学 Synchronous positioning and map construction method based on deep learning
CN110309883A (en) * 2019-07-01 2019-10-08 哈尔滨理工大学 A kind of unmanned plane autonomic positioning method of view-based access control model SLAM
CN111006655B (en) * 2019-10-21 2023-04-28 南京理工大学 Multi-scene autonomous navigation positioning method for airport inspection robot
CN111060948B (en) 2019-12-14 2021-10-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, helmet and computer readable storage medium
CN111047703B (en) * 2019-12-23 2023-09-26 杭州电力设备制造有限公司 User high-voltage distribution equipment identification and space reconstruction method
CN111462207A (en) * 2020-03-30 2020-07-28 重庆邮电大学 RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN111812978B (en) * 2020-06-12 2023-01-24 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN111833402B (en) * 2020-06-30 2023-06-06 天津大学 Visual odometer rotary motion processing method based on pause information supplementing mechanism
CN115082516A (en) * 2021-03-15 2022-09-20 北京字跳网络技术有限公司 Target tracking method, device, equipment and medium
CN112884838B (en) * 2021-03-16 2022-11-15 重庆大学 Robot autonomous positioning method
CN115578432B (en) * 2022-09-30 2023-07-07 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium
CN115690205B (en) * 2022-10-09 2023-12-05 北京自动化控制设备研究所 Visual relative pose measurement error estimation method based on point-line comprehensive characteristics
CN117541655B (en) * 2024-01-10 2024-03-26 上海几何伙伴智能驾驶有限公司 Method for eliminating radar map building z-axis accumulated error by fusion of visual semantics

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105865454A (en) * 2016-05-31 2016-08-17 西北工业大学 Unmanned aerial vehicle navigation method based on real-time online map generation
CN108010081A (en) * 2017-12-01 2018-05-08 中山大学 A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion
CN108253962A (en) * 2017-12-18 2018-07-06 中北智杰科技(北京)有限公司 New energy pilotless automobile localization method under a kind of low light environment
US10043076B1 (en) * 2016-08-29 2018-08-07 PerceptIn, Inc. Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN108492316A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 A kind of localization method and device of terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7925049B2 (en) * 2006-08-15 2011-04-12 Sri International Stereo-based visual odometry method and system
US8555205B2 (en) * 2010-10-08 2013-10-08 Cywee Group Limited System and method utilized for human and machine interface
US9946264B2 (en) * 2016-03-22 2018-04-17 Sharp Laboratories Of America, Inc. Autonomous navigation using visual odometry

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105865454A (en) * 2016-05-31 2016-08-17 西北工业大学 Unmanned aerial vehicle navigation method based on real-time online map generation
US10043076B1 (en) * 2016-08-29 2018-08-07 PerceptIn, Inc. Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion
CN108010081A (en) * 2017-12-01 2018-05-08 中山大学 A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization
CN108253962A (en) * 2017-12-18 2018-07-06 中北智杰科技(北京)有限公司 New energy pilotless automobile localization method under a kind of low light environment
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN108492316A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 A kind of localization method and device of terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A survey on visual odometry for mobile robots; Ding Wendong et al.; Acta Automatica Sinica; 2018-03-31; Vol. 44, No. 3; 387-400 *
Fast stereo SLAM algorithm fusing the direct method and the feature method; Zhang Guoliang et al.; Robot; 2017-11-30; Vol. 39, No. 6; 879-888 *

Also Published As

Publication number Publication date
CN109211241A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109029433B (en) Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN109520497B (en) Unmanned aerial vehicle autonomous positioning method based on vision and imu
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
Poddar et al. Evolution of visual odometry techniques
Strydom et al. Visual odometry: autonomous uav navigation using optic flow and stereo
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
KR20150144729A (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
Krombach et al. Feature-based visual odometry prior for real-time semi-dense stereo SLAM
WO2018182524A1 (en) Real time robust localization via visual inertial odometry
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
Andert et al. Lidar-aided camera feature tracking and visual slam for spacecraft low-orbit navigation and planetary landing
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN111736586A (en) Method and apparatus for automatically driving vehicle position for path planning
Alliez et al. Real-time multi-SLAM system for agent localization and 3D mapping in dynamic scenarios
Chen et al. Stereo visual inertial pose estimation based on feedforward-feedback loops
Lin et al. A sparse visual odometry technique based on pose adjustment with keyframe matching
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
Hoang et al. Combining edge and one-point ransac algorithm to estimate visual odometry
Abdulov et al. Visual odometry approaches to autonomous navigation for multicopter model in virtual indoor environment
Roggeman et al. Embedded vision-based localization and model predictive control for autonomous exploration
Yang et al. Visual SLAM using multiple RGB-D cameras
Lu et al. Vision-based localization methods under GPS-denied conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant